
How-To Tutorials - Server-Side Web Development


Best Practices for Modern Web Applications

Packt
22 Apr 2014
9 min read
The importance of search engine optimization

Every day, web crawlers scrape the Internet for new content to update their associated search engines. Most people find web pages by loading a query on a search engine and selecting one of the first few results. Search engine optimization (SEO) is a set of practices used to maintain and improve search result rankings over time.

Item 1 – using keywords effectively

In order to provide information to web crawlers, websites supply keywords in their HTML meta tags and content. An effective procedure for choosing keywords is to:

- Come up with a set of keywords that are pertinent to your topic
- Research common search keywords related to your website
- Take the intersection of these two sets of keywords and use them preemptively across the website

Once this final set of keywords is determined, it is important to spread them across your website's content whenever possible. For instance, a ski resort in California should ensure that its website includes terms such as California, skiing, snowboarding, and rentals. These are all terms that individuals would look up via a search engine when they are interested in a weekend at a ski resort.

Contrary to popular belief, the keywords meta tag does not create any value for site owners; many search engines consider it a deprecated signal of search relevance. The reasoning goes back many years, to when many websites would clutter their keywords meta tag with irrelevant filler words to bait users into visiting their sites. Today, many of the top search engines have decided that content itself is a much more powerful indicator of search relevance and concentrate on it instead. However, other meta tags, such as description, are still used for displaying website content in search rankings. These should be brief but powerful passages that pull users from the search page to your website.
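The keyword intersection described above is literally a set intersection. As a quick illustration (the keyword lists here are invented for the example), Ruby's array intersection operator does the job:

```ruby
# Keywords we brainstormed for the site's topic (hypothetical data).
topic_keywords = ["california", "skiing", "snowboarding", "rentals", "lodge"]

# Keywords users actually search for, e.g. taken from a keyword research tool.
searched_keywords = ["skiing", "snowboarding", "rentals", "ski deals"]

# The intersection is the set worth spreading across the site's content.
effective_keywords = topic_keywords & searched_keywords
puts effective_keywords.inspect  # => ["skiing", "snowboarding", "rentals"]
```

Keywords in only one of the two lists are either irrelevant to your topic or not something people actually search for, which is why the intersection is the useful set.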
Item 2 – header tags are powerful

Header tags (also known as h-tags) are often used by web crawlers to determine the main topic of a given web page or section. It is generally recommended to use only one set of h1 tags to identify the primary purpose of the web page, and any number of the other header tags (h2, h3, and so on) to identify section headings.

Item 3 – make sure to have alternative attributes for images

Despite recent advances in image recognition technology, web crawlers do not possess the resources necessary to parse images for content across the Internet today. As a result, it is advisable to provide an alt attribute for search engines to parse while they scrape your web page. For instance, suppose you were the webmaster of the Seattle Water Sanitation Plant and wished to upload a process flow chart image to your website. Since web crawlers make use of the alt attribute while sifting through images, you would ideally upload the image using the following code:

<img src="flow_chart.png" alt="Seattle Water Sanitation Process Flow Chart" />

This leaves the content in the form of a keyword or phrase that can contribute to the relevancy of your web page in search results.

Item 4 – enforcing clean URLs

While creating web pages, you'll often find the need to identify them with a URL ID. The simplest approach is to use a number or symbol that maps to your data for simple information retrieval. The problem is that a number or symbol does not help identify the content, either for web crawlers or for your end users. The solution is to use clean URLs. By adding a topic name or phrase to the URL, you give web crawlers more keywords to index on. Additionally, end users who receive the link can better evaluate the content, since they know the topic discussed on the web page.
A simple way to integrate clean URLs while retaining the number or symbol identifier is to append a readable slug, which describes the topic, after the identifier. Then, apply a regular expression to parse out the identifier for your own use; for instance, take a look at the following sample URL:

http://www.example.com/post/24/golden-dragon-review

The number 24, when parsed out, helps your server easily identify the blog post in question. The slug, golden-dragon-review, communicates the topic at hand to both web crawlers and users. While creating the slug, the best practice is to remove all non-alphanumeric characters and replace all spaces with dashes. Contractions such as can't, don't, or won't can be reduced to cant, dont, or wont because search engines can easily infer their intended meaning. Note that spaces should not be replaced by underscores, as underscores are not interpreted appropriately by web crawlers.

Item 5 – backlink whenever safe and possible

Search rankings are influenced by your website's clout across websites that search engines deem trustworthy. For instance, due to the restricted access to .edu or .gov domains, websites that use these domains are deemed trustworthy and given a higher level of authority when it comes to search rankings. This means that any websites backlinked from trustworthy websites are valued more highly as a result. Thus, it is worth pursuing backlinks on relevant websites whose users would be actively interested in your content. If you choose to backlink irrelevantly, there are often consequences, as this practice can be caught automatically by web crawlers that compare the keywords between your link and the backlink host.

Item 6 – handling HTTP status codes properly

HTTP status codes help the client and server communicate the status of page requests in a clean and consistent manner.
The following table reviews the most important status codes and their effects:

Status Code  Alias                      Effect on SEO
200          Success                    The page loads and its content contributes to SEO
301          Permanent redirect         The page redirects and the redirected content contributes to SEO
302          Temporary redirect         The page redirects but the redirected content does not contribute to SEO
404          Client error (not found)   The page loads but its content does not contribute to SEO
500          Server error               The page does not load and there is no content to contribute to SEO

In an ideal world, all pages would return the 200 status code. Unfortunately, URLs get misspelled, servers throw exceptions, and old pages get moved, which leads to the need for the other status codes. Thus, it is important that each situation be handled to maximize communication to both web crawlers and users and minimize damage to your search ranking.

When a URL gets misspelled, it is important to provide a 301 redirect to a close match or another popular web page. This can be accomplished by using a clean URL and parsing out the identifier, regardless of the slug that follows it. This way, there is content that contributes directly to the search ranking instead of just a 404 page.

Server errors should be handled as soon as possible. When a page does not load, it harms the experience for both users and web crawlers and, over an extended period of time, can cause that page's rank to expire.

Lastly, 404 pages should be developed with your users in mind. When you choose not to redirect them to the most relevant link, it is important to offer either suggested web pages or a search menu to keep them engaged with your content.

The connect-rest-test Grunt plugin can be a healthy addition to any software project for testing the status codes and responses of a RESTful API. You can find it at https://www.npmjs.org/package/connect-rest-test.
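Pulling the slug rules and the redirect-by-identifier idea together, here is a hedged sketch of resolving a misspelled slug to its canonical clean URL; `slugify`, `post_id`, and the `POSTS` table are invented names for illustration, not part of any framework:

```ruby
# Slug rules: lowercase, drop apostrophes (can't -> cant), strip other
# punctuation, and turn spaces into dashes (never underscores).
def slugify(title)
  title.downcase
       .gsub("'", "")
       .gsub(/[^a-z0-9\s-]/, "")
       .strip
       .gsub(/\s+/, "-")
end

# Pull the numeric identifier out of /post/<id>/<slug>, ignoring the slug.
def post_id(path)
  m = path.match(%r{\A/post/(\d+)(?:/.*)?\z})
  m && m[1].to_i
end

# Hypothetical lookup table standing in for a database of posts.
POSTS = { 24 => "Golden Dragon Review" }

# The canonical URL to 301-redirect to, or nil when the id is unknown
# (in which case a helpful 404 page should be served instead).
def canonical_path(path)
  id = post_id(path)
  return nil unless id && POSTS[id]
  "/post/#{id}/#{slugify(POSTS[id])}"
end

puts canonical_path("/post/24/goldn-dargon-reveiw")  # "/post/24/golden-dragon-review"
puts canonical_path("/post/99/missing")              # nil
```

Because only the identifier is parsed, any mangled slug still resolves, and the 301 preserves the content's contribution to search ranking.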
Alternatively, while testing pages outside of your RESTful API, you may want to consider grunt-http-verify to ensure that status codes are returned properly. You can find it at https://www.npmjs.org/package/grunt-http-verify.

Item 7 – making use of your robots.txt and site map files

Often, there are directories on a website that are available to the public but should not be indexed by a search engine. The robots.txt file, when placed in your website's root, helps to define exclusion rules for web crawling and prevent a user-defined set of search engines from entering certain directories. For instance, the following example disallows all search engines that choose to honor your robots.txt file from visiting the music directory on a website:

User-agent: *
Disallow: /music/

While writing navigation tools with dynamic content such as JavaScript libraries or Adobe Flash widgets, it's important to understand that web crawlers have limited capability to scrape them. Site maps help to define the relational mapping between web pages when crawlers cannot heuristically infer it themselves. Where the robots.txt file defines a set of search engine exclusion rules, the sitemap.xml file, also located in a website's root, defines a set of search engine inclusion rules.
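To see what the Disallow rule above means in practice, here is a small sketch that checks a path against the Disallow prefixes in a robots.txt body; real crawlers implement considerably more of the convention (wildcards, Allow rules, per-agent groups), so treat this as an illustration only:

```ruby
# Collect the Disallow path prefixes from a robots.txt document.
def disallowed_prefixes(robots_txt)
  robots_txt.lines
            .map(&:strip)
            .select { |line| line.start_with?("Disallow:") }
            .map { |line| line.split(":", 2).last.strip }
            .reject(&:empty?)
end

# A path is blocked when it falls under any disallowed prefix.
def blocked?(path, robots_txt)
  disallowed_prefixes(robots_txt).any? { |prefix| path.start_with?(prefix) }
end

robots = "User-agent: *\nDisallow: /music/\n"
puts blocked?("/music/top40.html", robots)  # true
puts blocked?("/about.html", robots)        # false
```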
The following XML snippet is a brief example of a site map that defines these attributes:

<?xml version="1.0" encoding="utf-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/</loc>
    <lastmod>2014-11-24</lastmod>
    <changefreq>always</changefreq>
    <priority>0.8</priority>
  </url>
  <url>
    <loc>http://example.com/post/24/golden-dragon-review</loc>
    <lastmod>2014-07-13</lastmod>
    <changefreq>never</changefreq>
    <priority>0.5</priority>
  </url>
</urlset>

The attributes used in the preceding code are explained in the following table:

Attribute   Meaning
loc         The URL of the web page to be crawled
lastmod     The date on which the web page was last modified
changefreq  How frequently the page is likely to change, and therefore how often the crawler should revisit it
priority    The web page's priority in comparison to the other web pages on the site

Using Grunt to reinforce SEO practices

With the rising popularity of client-side web applications, SEO practices are often not met when page links do not exist without JavaScript. Certain Grunt plugins provide a workaround by loading the web pages, waiting for an amount of time to allow the dynamic content to load, and taking an HTML snapshot. These snapshots are then provided to web crawlers for search engine purposes, and the user-facing dynamic web applications are excluded from scraping completely. Some examples of Grunt plugins that accomplish this are:

- grunt-html-snapshots (https://www.npmjs.org/package/grunt-html-snapshots)
- grunt-ajax-seo (https://www.npmjs.org/package/grunt-ajax-seo)
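A site map like the one above can be generated with plain string templating; this sketch uses an invented `sitemap_xml` helper rather than any particular gem, so adapt it to your own stack:

```ruby
# Build a minimal sitemap.xml document from an array of entry hashes,
# each with :loc, :lastmod, :changefreq, and :priority keys.
def sitemap_xml(entries)
  urls = entries.map do |e|
    "  <url>\n" \
    "    <loc>#{e[:loc]}</loc>\n" \
    "    <lastmod>#{e[:lastmod]}</lastmod>\n" \
    "    <changefreq>#{e[:changefreq]}</changefreq>\n" \
    "    <priority>#{e[:priority]}</priority>\n" \
    "  </url>"
  end.join("\n")

  "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n" \
  "<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n" \
  "#{urls}\n" \
  "</urlset>"
end

xml = sitemap_xml([
  { loc: "http://example.com/", lastmod: "2014-11-24",
    changefreq: "always", priority: 0.8 },
  { loc: "http://example.com/post/24/golden-dragon-review",
    lastmod: "2014-07-13", changefreq: "never", priority: 0.5 }
])
puts xml
```

In a real application, the entries array would be built from your page records, and the result written to sitemap.xml in the site root.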


Moodle for Online Communities

Packt
14 Apr 2014
9 min read
Now that you're familiar with the ways to use Moodle for different types of courses, it is time to take a look at how groups of people can come together as an online community and use Moodle to achieve their goals. For example, individuals who have the same interests and want to discuss and share information in order to transfer knowledge can do so very easily in a Moodle course that has been set up for that purpose.

There are many practical uses of Moodle for online communities. For example, members of an association or employees of a company can come together to achieve a goal and finish a task. In this case, Moodle provides a perfect place to interact, collaborate, and create a final project or achieve a task. Online communities can also be focused on learning and achievement, and Moodle can be a perfect vehicle for encouraging online communities to support each other to learn, take assessments, and display their certificates and badges. Moodle is also a good platform for a Massive Open Online Course (MOOC).

In this article, we'll create flexible Moodle courses that are ideal for online communities and that can be modified easily to create opportunities to harness the power of individuals in many different locations to teach and learn new knowledge and skills. We'll show you the benefits of Moodle and how to use it for the following online communities and purposes:

- Knowledge-transfer-focused communities
- Task-focused communities
- Communities focused on learning and achievement

Moodle and online communities

It is often easy to think of Moodle as a learning management system that is used primarily by organizations for their students or employees. The community tends to be well defined, as it usually consists of students pursuing a common end, employees of a company, or members of an association or society.
However, there are many informal groups and communities that come together because they share interests, want to gain knowledge and skills, need to work together to accomplish tasks, or wish to let people know that they've reached milestones and acquired marketable abilities.

For example, an online community may form around the topic of climate change. The group, which may use social media to communicate, would like to share information and get in touch with like-minded individuals. While it's true that they can connect via Facebook, Twitter, and other social media formats, they may lack a platform that gives a "one-stop shopping" solution. Moodle makes it easy to share documents, videos, maps, graphics, audio files, and presentations. It also allows the users to interact with each other via discussion forums. Because we can use but not control social networks, it's important to be mindful of security issues. For that reason, Moodle administrators may wish to consider ways to back up or duplicate key posts or insights within the Moodle installation so that they can be preserved and stored.

In another example, individuals may come together to accomplish a specific task. For instance, a group of volunteers may organize a 5K run fundraiser for epilepsy awareness. For such a case, Moodle has an array of activities and resources that make it possible to collaborate in the planning and publicity of the event, and even in the creation of post-event summary reports and press releases.

Finally, let's consider a person who may wish to ensure that potential employers know the kinds of skills they possess. They can display the certificates they've earned by completing online courses as well as their badges, digital certificates, mentions in high-achievers lists, and other gamified evidence of achievement. There are also the MOOCs, which bring together instructional materials, guided group discussions, and automated assessments.
With its features and flexibility, Moodle is a perfect platform for MOOCs.

Building a knowledge-based online community

For our knowledge-based online community, let's consider a group of individuals who would like to know more about climate change and its impact. To build a knowledge-based online community, we need to perform the following steps:

1. Choose a mobile-friendly theme.
2. Customize the appearance of your site.
3. Select resources and activities.

Moodle makes it possible for people from all locations and affiliations to come together and share information in order to achieve a common objective. We will see how to do this in the following sections.

Choosing the best theme for your knowledge-based Moodle online communities

As many of the users in the community access Moodle using smartphones, tablets, laptops, and desktops, it is a good idea to select a theme that is responsive, which means that it will be automatically formatted to display properly on all devices. You can learn more about themes for Moodle, review them, find out about the developers, read comments, and then download them at https://moodle.org/plugins/browse.php?list=category&id=3.

There are many good responsive themes, such as the popular Buckle theme and the Clean theme, that also allow you to customize them. These are core and contributed themes, which is to say that they were created by developers and are either part of the Moodle installation or available for free download. If you have Moodle 2.5 or a later version installed, your installation of Moodle includes many responsive themes. If it does not, you will need to download and install a theme. In order to select an installed theme, perform the following steps:

1. In the Site administration menu, click on the Appearance menu.
2. Click on Themes.
3. Click on Theme selector.
4. Click on the Change theme button.
5. Review all the themes.
6. Click on the Use theme button next to the theme you want to choose and then click on Continue.
Using the best settings for knowledge-based Moodle online communities

There are a number of things you can do to customize the appearance of your site so that it is very functional for knowledge-transfer-based Moodle online communities. The following is a brief checklist of items:

- Select Topics format under the Course format section in the Course default settings window. By selecting topics, you'll be able to organize your content around subjects.
- Use the General section, which is included as the first topic in all courses. It contains the News forum link. You can use this for announcements highlighting resources shared by the community.
- Include the name of the main contact, along with his or her photograph and a brief biographical sketch, in News forum. You'll create the sense that there is a real "go-to" person helping guide the endeavor.
- Incorporate social media to encourage sharing and dissemination of new information. Brief updates are very effective, so you may consider including a Twitter feed by adding your Twitter account as one of your social media sites.
- Even though your main topic of discussion may contain hundreds of subtopics that are of great interest, when you create your Moodle course, it's best to limit the number of subtopics to four or five. If you have too many choices, your users will be too scattered and will not have a chance to connect with each other. Think of your Moodle site as a meeting point. Do you want to have too many breakout sessions and rooms, or do you want to have a main networking site? Think of how you would like to encourage users to mingle and interact.

Selecting resources and activities for a knowledge-based Moodle online community

The following are the items to include if you want to configure Moodle so that it is ideal for individuals who have come together to gain knowledge on a specific topic or problem:

- Resources: Be sure to include multiple types of files: documents, videos, audio files, and presentations.
- Activities: Include Quiz and other such activities that allow individuals to test their knowledge.
- Communication-focused activities: Set up a discussion forum to enable community members to post their thoughts and respond to each other.

The key to creating an effective Moodle course for knowledge-transfer-based communities is to give the individual members a chance to post critical and useful information, no matter what the format or the size, and to accommodate social networks.

Building a task-based online community

Let's consider a group of individuals who are getting together to plan a fundraising event. They need to plan activities, develop materials, and prepare a final report. Moodle can make it fairly easy for people to work together to plan events, collaborate on the development of materials, and share information for a final report.

Choosing the best theme for your task-based Moodle online communities

If you're working with volunteers or people who are using Moodle only to complete tasks, you may have quite a few Moodle "newbies". Since they will be unfamiliar with navigating Moodle and finding the places they need to go, you'll need a theme that is clear, attention-grabbing, and that includes easy-to-follow directions.

There are a few themes that are ideal for collaborations and multiple functional groups. We highly recommend the Formal white theme because it is highly customizable from the Theme settings page. You can easily customize the background, text colors, logos, font size, font weight, block size, and more, enabling you to create a clear, friendly, and brand-recognizable site. Formal white is a standard theme, kept up to date, and can be used on many versions of Moodle. You can learn more about the Formal white theme and download it by visiting http://hub.packtpub.com/wp-content/uploads/2014/04/Filetheme_formalwhite.png.
In order to customize the appearance of your entire site, perform the following steps:

1. In the Site administration menu, click on Appearance.
2. Click on Themes.
3. Click on Theme settings.
4. Review all the theme settings.
5. Enter the custom information in each box.


Building a Customizable Content Management System

Packt
07 Apr 2014
15 min read
Mission briefing

This article deals with the creation of a Content Management System. The system will consist of two parts:

- A backend that helps to manage content, page parts, and page structure
- A frontend that displays the settings and content we just entered

We will start by creating an admin area and then create page parts with types. Page parts, which are like widgets, are fragments of content that can be moved around the page. Page parts also have types; for example, we can display videos in our left column or display news. The same content can thus be represented in multiple ways. For example, news can be a separate page as well as a page part if it needs to be displayed on the front page. These parts need to be enabled for the frontend. If enabled, the frontend makes a call on the page part ID and renders it in the place where it is supposed to be displayed. We will write the frontend markup in Haml and Sass.

Why is it awesome?

Everyone loves a CMS built from scratch that suits their needs really closely. We will try to build a system that is extremely simple as well as covers several different types of content. This system is also meant to be extensible, and we will lay the foundation stone for a highly configurable CMS. We will also spice up our proceedings in this article by using MongoDB instead of a relational database such as MySQL. At the end of this article, we will have built a skeleton for a very dynamic CMS.
Your Hotshot objectives

While building this application, we will have to go through the following tasks:

- Creating a separate admin area
- Creating a CMS with the ability to handle different types of content pages
- Managing page parts
- Creating a Haml- and Sass-based template
- Generating the content and pages
- Implementing asset caching

Mission checklist

We need to install the following software on the system before we start with our mission:

- Ruby 1.9.3 / Ruby 2.0.0
- Rails 4.0.0
- MongoDB
- Bootstrap 3.0
- Haml
- Sass
- Devise
- Git
- A tool for mockups
- jQuery
- ImageMagick and RMagick
- Memcached

Creating a separate admin area

We have used devise for all our projects, and we will be using the same strategy in this article. The only difference is that we will use it to log in to the admin account and manage the site's data. This needs to happen when we navigate to the URL /admin. We will do this by creating a namespace and routing our controller through the namespace. We will use our default application layout and assets for the admin area, whereas we will create a different set of layouts and assets altogether for the frontend. Also, before starting with this first step, create an admin role using CanCan and rolify and associate it with the user model. We are going to use memcached for caching, hence we need to add it to our development stack.
We will do this by installing it through our favorite package manager, for example, apt on Ubuntu:

sudo apt-get install memcached

Prepare for lift off

In order to start working on this article, we first have to add the mongoid gem to Gemfile:

Gemfile
gem 'mongoid', '~> 4', github: 'mongoid/mongoid'

Bundle the application and run the mongoid generator:

rails g mongoid:config

You can edit config/mongoid.yml to suit your local system's settings, as shown in the following code:

config/mongoid.yml
development:
  database: helioscms_development
  hosts:
    - localhost:27017
  options:
test:
  sessions:
    default:
      database: helioscms_test
      hosts:
        - localhost:27017
      options:
        read: primary
        max_retries: 1
        retry_interval: 0

We did this because ActiveRecord is the default Object-Relational Mapper (ORM), and we want to override it with the mongoid Object Document Mapper (ODM) in our application. Mongoid's configuration file is slightly different from the database.yml file used by ActiveRecord. The sessions rule in mongoid.yml opens a session from the Rails application to MongoDB. It keeps the session open as long as the server is up, and it reopens the connection automatically if the server goes down and restarts after some time.
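When a mongoid.yml refuses to load, it is usually an indentation problem. A quick way to check the nesting is to parse it with Ruby's standard-library YAML module; the snippet below uses a trimmed copy of the development and test settings (an assumption for illustration, not the full file):

```ruby
require 'yaml'

# A trimmed mongoid.yml-style document: per-environment keys at the top,
# with database and hosts nested beneath them.
raw = <<~YML
  development:
    database: helioscms_development
    hosts:
      - localhost:27017
  test:
    sessions:
      default:
        database: helioscms_test
        hosts:
          - localhost:27017
YML

config = YAML.load(raw)
puts config["development"]["database"]                     # helioscms_development
puts config["test"]["sessions"]["default"]["hosts"].first  # localhost:27017
```

If a key comes back nil, the corresponding line in the real file is nested at the wrong depth.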
Also, as a part of the installation, we need to add Haml to Gemfile and bundle it:

Gemfile
gem 'haml'
gem 'haml-rails'

Engage thrusters

Let's get cracking to create our admin area now:

1. We will first generate our dashboard controller:

   rails g controller dashboard index
     create  app/controllers/dashboard_controller.rb
     route   get "dashboard/index"
     invoke  erb
     create  app/views/dashboard
     create  app/views/dashboard/index.html.erb
     invoke  test_unit
     create  test/controllers/dashboard_controller_test.rb
     invoke  helper
     create  app/helpers/dashboard_helper.rb
     invoke  test_unit
     create  test/helpers/dashboard_helper_test.rb
     invoke  assets
     invoke  coffee
     create  app/assets/javascripts/dashboard.js.coffee
     invoke  scss
     create  app/assets/stylesheets/dashboard.css.scss

2. We will then create a namespace called admin in our routes.rb file, modifying our dashboard route so that it is set as the root page in the admin namespace:

   config/routes.rb
   namespace :admin do
     get '', to: 'dashboard#index', as: '/'
   end

3. Our dashboard controller will not work anymore now. In order for it to work, we have to create a folder called admin inside our controllers and rename DashboardController to Admin::DashboardController, matching the admin namespace we created in the routes.rb file:

   app/controllers/admin/dashboard_controller.rb
   class Admin::DashboardController < ApplicationController
     before_filter :authenticate_user!

     def index
     end
   end

4. In order to make the login specific to the admin dashboard, we will copy our devise/sessions_controller.rb file to the controllers/admin path and edit it. We will add the admin namespace and allow only the admin role to log in:

   app/controllers/admin/sessions_controller.rb
   class Admin::SessionsController < ::Devise::SessionsController
     def create
       user = User.find_by_email(params[:email])
       if user && user.authenticate(params[:password]) && user.has_role? "admin"
         session[:user_id] = user.id
         redirect_to admin_url, notice: "Logged in!"
       else
         flash.now.alert = "Email or password is invalid / Only Admin is allowed"
       end
     end
   end

Objective complete – mini debriefing

In the preceding task, after setting up devise and CanCan in our application, we went ahead and created a namespace for the admin. In Rails, a namespace is a concept used to separate a set of controllers into a completely different functionality. In our case, we used it to separate out the login for the admin dashboard and the dashboard page shown after login. We did this by first creating the admin folder in our controllers. We then copied our Devise sessions controller into the admin folder. For Rails to identify the namespace, we need to add it before the controller name, as follows:

class Admin::SessionsController < ::Devise::SessionsController

In our routes, we defined a namespace to read the controllers under the admin folder:

namespace :admin do
end

We then created a controller to handle dashboards and placed it within the admin namespace:

namespace :admin do
  get '', to: 'dashboard#index', as: '/'
end

We made the dashboard the root page after login. The route generated from the preceding definition is localhost:3000/admin. We ensured that if someone tries to log in via the admin dashboard URL, our application checks whether the user has the admin role. In order to do so, we used has_role? from rolify along with user.authenticate from devise:

if user && user.authenticate(params[:password]) && user.has_role? "admin"

This makes devise function as part of the admin dashboard.
If a user tries to log in, they will be presented with the devise login page. After logging in successfully, the user is redirected to the admin dashboard.

Creating a CMS with the ability to create different types of pages

A website has a variety of types of pages, and each page serves a different purpose. Some are limited to contact details, while some contain detailed information about the team. Each of these pages has a title and body. Also, there will be subpages within each navigation; for example, the About page can have Team, Company, and Careers as subpages. Hence, we need to create a parent-child self-referential association: pages will be associated with themselves and be treated as parent and child.

Engage thrusters

In the following steps, we will create page management for our application. This will be the backbone of our application.

1. Create a model, view, and controller for page. We will have a very simple page structure for now: a page with title, body, and page type:

   app/models/page.rb
   class Page
     include Mongoid::Document

     field :title, type: String
     field :body, type: String
     field :page_type, type: String

     validates :title, :presence => true
     validates :body, :presence => true

     PAGE_TYPE = %w(Home News Video Contact Team Careers)
   end

2. We need a home page for our main site, so we will assign one page the type Home. However, we need two things from the home page: it should be the root of our main site, and its layout should be different from the admin's. We will start by creating an action called home_page in pages_controller:

   app/models/page.rb
   scope :home, ->{ where(page_type: "Home") }

   app/controllers/pages_controller.rb
   def home_page
     @page = Page.home.first rescue nil
     render :layout => 'page_layout'
   end

   We find a page with the Home type and render it with a custom layout called page_layout, which is different from our application layout.
3. We will do the same for the show action as well, as we are only going to use show to display pages in the frontend:

   app/controllers/pages_controller.rb
   def show
     render :layout => 'page_layout'
   end

4. Now, in order to effectively manage the content, we need an editor. This will make things easier, as the user will be able to style the content with it. We will use ckeditor to style the content in our application:

   Gemfile
   gem "ckeditor", :github => "galetahub/ckeditor"
   gem 'carrierwave', :github => "jnicklas/carrierwave"
   gem 'carrierwave-mongoid', :require => 'carrierwave/mongoid'
   gem 'mongoid-grid_fs', github: 'ahoward/mongoid-grid_fs'

5. Add the ckeditor gem to Gemfile, run bundle install, and then run the installer:

   helioscms$ rails generate ckeditor:install --orm=mongoid --backend=carrierwave
     create  config/initializers/ckeditor.rb
     route   mount Ckeditor::Engine => '/ckeditor'
     create  app/models/ckeditor/asset.rb
     create  app/models/ckeditor/picture.rb
     create  app/models/ckeditor/attachment_file.rb
     create  app/uploaders/ckeditor_attachment_file_uploader.rb

   This generates a carrierwave uploader for CKEditor, which is compatible with mongoid.
In order to finish the configuration, we need to add a line to application.js to load the CKEditor JavaScript:

app/assets/application.js
//= require ckeditor/init

We will display the editor in the body, as that's what we need to style:

views/pages/_form.html.haml
.field
  = f.label :body
  %br/
  = f.cktext_area :body, :rows => 20, :ckeditor => {:uiColor => "#AADC6E", :toolbar => "mini"}

We also need to mount the CKEditor in our routes.rb file:

config/routes.rb
mount Ckeditor::Engine => '/ckeditor'

The editor toolbar and text area will be generated as seen in the following screenshot:

In order to display the content on the index page in a formatted manner, we will add the html_safe escape method to our body:

views/pages/index.html.haml
%td= page.body.html_safe

The following screenshot shows the index page after the preceding step:

At this point, we can manage the content using pages. However, in order to add nesting, we will have to create a parent-child structure for our pages. In order to do so, we will first generate a model to define this relationship:

helioscms$ rails g model page_relationship

Inside the page_relationship model, we will define a two-way association with the page model:

app/models/page_relationship.rb
class PageRelationship
  include Mongoid::Document
  field :parent_id, type: Integer
  field :child_id, type: Integer
  belongs_to :parent, :class_name => "Page"
  belongs_to :child, :class_name => "Page"
end

In our page model, we will add an inverse association. This is to check for both parent and child and span the tree both ways:

has_many :child_page, :class_name => 'Page', :inverse_of => :parent_page
belongs_to :parent_page, :class_name => 'Page', :inverse_of => :child_page

We can now add a page to the form as a parent.
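The idea behind a self-referential association can be illustrated outside Rails as well. The following plain-JavaScript sketch (hypothetical data and helper names, not the app's API) shows how storing only a parent reference still lets us span the tree both ways, which is exactly what the has_many/belongs_to pair above does:

```javascript
// Hypothetical pages keyed by id; each stores only its parent's id.
var pages = {
  1: { id: 1, title: 'About Us', parentId: null },
  2: { id: 2, title: 'Careers', parentId: 1 },
  3: { id: 3, title: 'Work Culture', parentId: 2 }
};

// Like `belongs_to :parent_page`: follow the stored reference upwards.
function parentOf(page) {
  return page.parentId === null ? null : pages[page.parentId];
}

// Like `has_many :child_page` (the inverse): scan for pages pointing at us.
function childrenOf(page) {
  return Object.keys(pages)
    .map(function (id) { return pages[id]; })
    .filter(function (p) { return p.parentId === page.id; });
}

console.log(parentOf(pages[3]).title);      // → Careers
console.log(childrenOf(pages[1])[0].title); // → Careers
```

The inverse_of option in the Rails code tells Mongoid that these two directions are the same relationship, so it does not have to guess which association mirrors which.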
Also, this method will create a tree structure and a parent-child relationship between the two pages:

app/views/pages/_form.html.haml
.field
  = f.label "Parent"
  %br/
  = f.collection_select(:parent_page_id, Page.all, :id, :title, :class => "form-control")
.field
  = f.label :body
  %br/
  = f.cktext_area :body, :rows => 20, :ckeditor => {:uiColor => "#AADC6E", :toolbar => "mini"}
%br/
.actions
  = f.submit :class => "btn btn-default"
  = link_to 'Cancel', pages_path, :class => "btn btn-danger"

We can see the drop-down list with the names of existing pages, as shown in the following screenshot:

Finally, we will display the parent page:

views/pages/_form.html.haml
.field
  = f.label "Parent"
  %br/
  = f.collection_select(:parent_page_id, Page.all, :id, :title, :class => "form-control")

In order to display the parent, we will call it using the association we created:

app/views/pages/index.html.haml
- @pages.each do |page|
  %tr
    %td= page.title
    %td= page.body.html_safe
    %td= page.parent_page.title if page.parent_page

Objective complete – mini debriefing

Mongoid is an ODM that provides an ActiveRecord-type interface to access and use MongoDB. MongoDB is a document-oriented database, which follows a no-schema and dynamic-querying approach. In order to include Mongoid, we need to make sure we have the following module included in our model:

include Mongoid::Document

Mongoid does not rely on migrations the way ActiveRecord does, because we do not need to create tables but documents. It also comes with a very different set of datatypes. It does not have a datatype called text; it relies on the string datatype for all such interactions.
Some of the different datatypes are as follows:

Regular expressions: This can be used as a query string, and matching strings are returned as a result
Numbers: This includes integer, big integer, and float
Arrays: MongoDB allows the storage of arrays and hashes in a document field
Embedded documents: This has the same datatype as the parent document

We also used Haml as our markup language for our views. The main goal of Haml is to provide clean and readable markup. Not only that, Haml significantly reduces the effort of templating due to its approach.

In this task, we created a page model and a controller. We added a field called page_type to our page. In order to set a home page, we created a scope to find the documents with the page type Home:

scope :home, ->{ where(page_type: "Home") }

We then called this scope in our controller, and we also set a specific layout for our show page and home page. This is to separate the layouts of our admin and pages.

The website structure can contain multiple levels of nesting, which means we could have a page structure like the following:

About Us | Team | Careers | Work Culture | Job Openings

In the preceding structure, we were dealing with a page model to generate different pages. However, our CMS should know that About Us has a child page called Careers, which in turn has another child page called Work Culture. In order to create a parent-child structure, we need to create a self-referential association. To achieve this, we created a new model that holds a reference on the same model, page. We first created an association in the page model with itself. The inverse_of option allows us to trace back in case we need to span our tree according to the parent or child:

has_many :child_page, :class_name => 'Page', :inverse_of => :parent_page
belongs_to :parent_page, :class_name => 'Page', :inverse_of => :child_page

We created a page relationship to handle this relationship in order to map the parent ID and child ID.
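To make the regular-expression datatype concrete: a regex used as a query returns the matching documents. The following is an illustrative in-memory sketch of that behavior in plain JavaScript (the docs array, its fields, and the helper name are hypothetical, not Mongoid's API); it also shows a document carrying an embedded array field, which MongoDB allows:

```javascript
// Illustrative documents; `tags` is an embedded array field.
var docs = [
  { title: 'Team', tags: ['people', 'about'] },
  { title: 'Careers', tags: ['jobs'] },
  { title: 'Teamwork tips', tags: ['blog'] }
];

// Querying a string field with a regular expression returns the
// documents whose field matches the pattern.
function whereTitleMatches(collection, pattern) {
  return collection.filter(function (doc) {
    return pattern.test(doc.title);
  });
}

var matches = whereTitleMatches(docs, /^Team/);
console.log(matches.length); // → 2
```

In a real Mongoid query, the regex is simply passed as the value of the field being searched; the filter above only mimics the matching rule.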
Again, we mapped it to the class Page:

belongs_to :parent, :class_name => "Page"
belongs_to :child, :class_name => "Page"

This allowed us to directly find parent and child pages using associations. In order to manage the content of the page, we added CKEditor, which provides a feature-rich toolbar to format the content of the page. We used the CKEditor gem and generated the configuration, including carrierwave. For carrierwave to work with Mongoid, we need to add dependencies to Gemfile:

gem 'carrierwave', :github => "jnicklas/carrierwave"
gem 'carrierwave-mongoid', :require => 'carrierwave/mongoid'
gem 'mongoid-grid_fs', github: 'ahoward/mongoid-grid_fs'

MongoDB comes with its own filesystem called GridFS. When we extend carrierwave, we have the option of using the regular filesystem or GridFS, but the gem is required nonetheless. carrierwave and CKEditor are used to insert and manage pictures in the content wherever required. We then added a route to mount the CKEditor as an engine in our routes file. Finally, we called it in a form:

= f.cktext_area :body, :rows => 20, :ckeditor => {:uiColor => "#AADC6E", :toolbar => "mini"}

CKEditor generates and saves the content as HTML. Rails sanitizes HTML by default, and hence our HTML is safe to be saved. The admin page to manage the content of pages looks like the following screenshot:
Packt
24 Mar 2014
9 min read

Organizing Jade Projects

(For more resources related to this topic, see here.)

Now that you know how to use all the things that Jade can do, here's when you should use them. Jade is pretty flexible when it comes to organizing projects; the language itself doesn't impose much structure on your project. However, there are some conventions you should follow, as they will typically make your code easier to manage. This article will cover those conventions and best practices.

General best practices

Most of the good practices that are used when writing HTML carry over to Jade. Some of these include the following:

Using a consistent naming convention for IDs, class names, and (in this case) mixin names and variables
Adding alt text to images
Choosing appropriate tags to describe content and page structure

The list goes on, but these are all things you should already be familiar with. So now we're going to discuss some practices that are more Jade-specific.

Keeping logic out of templates

When working with a templating language, like Jade, that allows you to use advanced logical operations, separation of concerns (SoC) becomes an important practice. In this context, SoC is the separation of business and presentational logic, allowing each part to be developed and updated independently. An easy place to draw the border between business and presentation is where data is passed to the template. Business logic is kept in the main code of your application and passes the data to be presented (as well-formed JSON objects) to your template engine. From there, the presentation layer takes the data and performs whatever logic is needed to make that data into a readable web page. An additional advantage of this separation is that the JSON data can be passed to a template over stdio (to the server-side Jade compiler), or it can be passed over TCP/IP (to be evaluated client side). Since the template only formats the given data, it doesn't matter where it is rendered, and it can be used on both server and client.
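The separation described above can be sketched in a few lines of JavaScript. The business layer only produces JSON; the presentation layer is a pure function from that JSON to markup, so the same function works on the server or in the browser. The function and field names here are illustrative, not a real Jade API:

```javascript
// Business layer: produces well-formed JSON and knows nothing about HTML.
function getPostData() {
  return { title: 'Hello', author: 'Ann' };
}

// Presentation layer: a pure function from data to markup. Because it
// only formats its input, it can run server-side or client-side.
function renderPost(post) {
  return '<article><h1>' + post.title + '</h1>' +
         '<p>by ' + post.author + '</p></article>';
}

console.log(renderPost(getPostData()));
// → <article><h1>Hello</h1><p>by Ann</p></article>
```

A compiled Jade template is exactly this kind of function: it takes a locals object and returns an HTML string, which is why the rendering location doesn't matter.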
For documenting the format of the JSON data, try JSON Schema (http://json-schema.org/). In addition to describing the interface that your presentation layer uses, it can be used in tests to validate the structure of the JSON that your business layer produces.

Inlining

When writing HTML, it is commonly advised that you don't use inline styles or scripts because they are harder to maintain. This advice still applies to the way you write your Jade. For everything but the smallest one-page projects, tests, and mockups, you should separate your styles and scripts into different files. These files may then be compiled separately and linked to your HTML with style or link tags. Or, you could include them directly into the Jade. Either way, the important part is that you keep them separated from your markup in your source code.

However, in your compiled HTML you don't need to worry about keeping inlined styles out. The advice about avoiding inline styles applies only to your source code and is purely for making your codebase easier to manage. In fact, according to Best Practices for Speeding Up Your Web Site (http://developer.yahoo.com/performance/rules.html), it is much better to combine your files to minimize HTTP requests, so inlining at compile time is a really good idea.

It's also worth noting that, even though Jade can help you inline scripts and styles during compilation, there are better ways to perform these compile-time optimizations. For example, build tools like AssetGraph (https://github.com/assetgraph/assetgraph) can do all the inlining, minifying, and combining you need, without you needing to put code to do so in your templates.

Minification

We can pass arguments through filters to compilers for things like minifying. This feature is useful for small projects for which you might not want to set up a full build tool. Also, minification does reduce the size of your assets, making it a very easy way to speed up your site.
However, your markup shouldn't really concern itself with details like how the site is minified, so filter arguments aren't the best solution for minifying. Just like inlining, it is much better to do this with a tool like AssetGraph. That way your markup is free of "build instructions".

Removing style-induced redundancy

A lot of redundant markup is added just to make styling easier: we have wrappers for every conceivable part of the page, empty divs and spans, and plenty of other forms of useless markup. The best way to deal with this stuff is to improve your CSS so it isn't reliant on wrappers and the like. Failing that, we can still use mixins to take that redundancy out of the main part of our code and hide it away until we have better CSS to deal with it. For example, consider the following example that uses a repetitive navigation bar:

input#home_nav(type='radio', name='nav', value='home', checked)
label(for='home_nav')
  a(href='#home') home
input#blog_nav(type='radio', name='nav', value='blog')
label(for='blog_nav')
  a(href='#blog') blog
input#portfolio_nav(type='radio', name='nav', value='portfolio')
label(for='portfolio_nav')
  a(href='#portfolio') portfolio
//- ...and so on

Instead of using the preceding code, it can be refactored into a reusable mixin as shown in the following code snippet:

mixin navbar(pages)
  - checked = true
  for page in pages
    input(
      type='radio',
      name='nav',
      value=page,
      id="#{page}_nav",
      checked=checked)
    label(for="#{page}_nav")
      a(href="##{page}") #{page}
    - checked = false

The preceding mixin can then be called later in your markup using the following code:

+navbar(['home', 'blog', 'portfolio'])

Semantic divisions

Sometimes, even though there is no redundancy present, dividing templates into separate mixins and blocks can be a good idea. Not only does it provide encapsulation (which makes debugging easier), but the division represents a logical separation of the different parts of a page.
A common example of this would be dividing a page between the header, footer, sidebar, and main content. These could be combined into one monolithic file, but putting each in a separate block represents their separation, can make the project easier to navigate, and allows each to be extended individually.

Server-side versus client-side rendering

Since Jade can be used on both the client side and the server side, we can choose to do the rendering of the templates off the server. However, there are costs and benefits associated with each approach, so the decision must be made depending on the project.

Client-side rendering

Using the Single Page Application (SPA) design, we can do everything but the compilation of the basic HTML structure on the client side. This allows for a static page that loads content from a dynamic backend and passes that content to Jade templates compiled for client-side usage. For example, we could have a simple webapp that, once loaded, fires off an AJAX request to a server running WordPress with a simple JSON API, and displays the posts it gets by passing the JSON to templates.

The benefits of this design are that the page itself is static (and therefore easily cacheable); with the SPA design, navigation is much faster (especially if content is preloaded); and significantly less data is transferred because of the terse JSON format that the content is formatted in (rather than it being already wrapped in HTML). Also, we get a very clean separation of content and presentation by actually forcing content to be moved into a CMS and out of the codebase. Finally, we avoid the risk of coupling the rendering too tightly with the CMS by forcing all content to be passed over HTTP in JSON—in fact, they are so separated that they don't even need to be on the same server.
But there are some issues too—the reliance on JavaScript for loading content means that users who don't have JS enabled will not be able to load content normally, and search engines will not be able to see your content without implementing _escaped_fragment_ URLs. Thus, some fallback is needed; whether it is a full site that is able to function without JS or just simple HTML snapshots rendered using a headless browser, it is a source of additional work.

Server-side rendering

We can, of course, render everything on the server side and just send regular HTML to the browser. This is the most backwards-compatible option, since the site will behave just as any static HTML site would, but we don't get any of the benefits of client-side rendering either. We could still use some client-side Jade for enhancements, but the idea is the same: the majority gets rendered on the server side and full HTML pages need to be sent when the user navigates to a new page.

Build systems

Although the Jade compiler is fully capable of compiling projects on its own, in practice, it is often better to use a build system because it can make interfacing with the compiler easier. In addition, build systems often help automate other tasks such as minification, compiling other languages, and even deployment. Some examples of these build systems are Roots (http://roots.cx/), Grunt (http://gruntjs.com/), and even GNU Make (http://www.gnu.org/software/make/).

For example, Roots can recompile Jade automatically each time you save it and even refresh an in-browser preview of that page. Continuous recompilation helps you notice errors sooner, and Roots helps you avoid the hassle of manually running a command to recompile.

Summary

In this article, we took a look at some of the best practices to follow when organizing Jade projects. Also, we looked at the use of third-party tools to automate tasks.

Resources for Article:

Further resources on this subject: So, what is Node.js?
[Article] RSS Web Widget [Article] Cross-browser-distributed testing [Article]
Packt
21 Mar 2014
9 min read

Make phone calls and send SMS messages from your website using Twilio

(For more resources related to this topic, see here.)

Sending a message from a website

Sending messages from a website has many uses; sending notifications to users is one good example. In this example, we're going to present you with a form where you can enter a phone number and message and send it to your user. This can be quickly adapted for other uses.

Getting ready

The complete source code for this recipe can be found in the Chapter6/Recipe1/ folder.

How to do it...

Ok, let's learn how to send an SMS message from a website. The user will be prompted to fill out a form that will send the SMS message to the phone number entered in the form.

Download the Twilio Helper Library from https://github.com/twilio/twilio-php/zipball/master and unzip it.

Upload the Services/ folder to your website.

Upload config.php to your website and make sure the following variables are set:

<?php
$accountsid = ''; // YOUR TWILIO ACCOUNT SID
$authtoken = ''; // YOUR TWILIO AUTH TOKEN
$fromNumber = ''; // PHONE NUMBER CALLS WILL COME FROM
?>

Upload a file called sms.php and add the following code to it:

<!DOCTYPE html>
<html>
<head>
  <title>Recipe 1 – Chapter 6</title>
</head>
<body>
<?php
include('Services/Twilio.php');
include("config.php");
include("functions.php");
$client = new Services_Twilio($accountsid, $authtoken);
if( isset($_POST['number']) && isset($_POST['message']) ){
  $sid = send_sms($_POST['number'], $_POST['message']);
  echo "Message sent to {$_POST['number']}";
}
?>
<form method="post">
  <input type="text" name="number" placeholder="Phone Number...." /><br />
  <input type="text" name="message" placeholder="Message...." /><br />
  <button type="submit">Send Message</button>
</form>
</body>
</html>

Create a file called functions.php and add the following code to it:

<?php
function send_sms($number, $message){
  global $client, $fromNumber;
  $sms = $client->account->sms_messages->create(
    $fromNumber,
    $number,
    $message
  );
  return $sms->sid;
}

How it works...
In steps 1 and 2, we downloaded and installed the Twilio Helper Library for PHP. This library is the heart of your Twilio-powered apps. In step 3, we uploaded config.php that contains our authentication information to talk to Twilio's API. In steps 4 and 5, we created sms.php and functions.php, which will send a message to the phone number we enter. The send_sms function is handy for initiating SMS conversations; we'll be building on this function heavily in the rest of the article.

Allowing users to make calls from their call logs

We're going to give your user a place to view their call log. We will display a list of incoming calls and give them the option to call back on these numbers.

Getting ready

The complete source code for this recipe can be found in the Chapter9/Recipe4 folder.

How to do it...

Now, let's build a section for our users to log in to using the following steps:

Update a file called index.php with the following content:

<?php
session_start();
include 'Services/Twilio.php';
require("system/jolt.php");
require("system/pdo.class.php");
require("system/functions.php");
$_GET['route'] = isset($_GET['route']) ? '/'.$_GET['route'] : '/';
$app = new Jolt('site', false);
$app->option('source', 'config.ini');
#$pdo = Db::singleton();
$mysiteURL = $app->option('site.url');
$app->condition('signed_in', function () use ($app) {
  $app->redirect($app->getBaseUri().'/login', !$app->store('user'));
});
$app->get('/login', function() use ($app){
  $app->render('login', array(), 'layout');
});
$app->post('/login', function() use ($app){
  $sql = "SELECT * FROM `user` WHERE `email`='{$_POST['user']}' AND `password`='{$_POST['pass']}'";
  $pdo = Db::singleton();
  $res = $pdo->query($sql);
  $user = $res->fetch();
  if( isset($user['ID']) ){
    $_SESSION['uid'] = $user['ID'];
    $app->store('user', $user['ID']);
    $app->redirect($app->getBaseUri().'/home');
  }else{
    $app->redirect($app->getBaseUri().'/login');
  }
});
$app->get('/signup', function() use ($app){
  $app->render('register', array(), 'layout');
});
$app->post('/signup', function() use ($app){
  $client = new Services_Twilio($app->store('twilio.accountsid'), $app->store('twilio.authtoken'));
  extract($_POST);
  $timestamp = strtotime($timestamp);
  $subaccount = $client->accounts->create(array(
    "FriendlyName" => $email
  ));
  $sid = $subaccount->sid;
  $token = $subaccount->auth_token;
  $sql = "INSERT INTO `user` SET `name`='{$name}',`email`='{$email}',`password`='{$password}',`phone_number`='{$phone_number}',`sid`='{$sid}',`token`='{$token}',`status`=1";
  $pdo = Db::singleton();
  $pdo->exec($sql);
  $uid = $pdo->lastInsertId();
  $app->store('user', $uid); // log user in
  $app->redirect($app->getBaseUri().'/phone-number');
});
$app->get('/phone-number', function() use ($app){
  $app->condition('signed_in');
  $user = $app->store('user');
  $client = new Services_Twilio($user['sid'], $user['token']);
  $app->render('phone-number');
});
$app->post("search", function() use ($app){
  $app->condition('signed_in');
  $user = get_user($app->store('user'));
  $client = new Services_Twilio($user['sid'], $user['token']);
  $SearchParams = array();
  $SearchParams['InPostalCode'] = !empty($_POST['postal_code']) ? trim($_POST['postal_code']) : '';
  $SearchParams['NearNumber'] = !empty($_POST['near_number']) ? trim($_POST['near_number']) : '';
  $SearchParams['Contains'] = !empty($_POST['contains']) ? trim($_POST['contains']) : '';
  try {
    $numbers = $client->account->available_phone_numbers->getList('US', 'Local', $SearchParams);
    if(empty($numbers)) {
      $err = urlencode("We didn't find any phone numbers by that search");
      $app->redirect($app->getBaseUri().'/phone-number?msg='.$err);
      exit(0);
    }
  } catch (Exception $e) {
    $err = urlencode("Error processing search: {$e->getMessage()}");
    $app->redirect($app->getBaseUri().'/phone-number?msg='.$err);
    exit(0);
  }
  $app->render('search', array('numbers'=>$numbers));
});
$app->post("buy", function() use ($app){
  $app->condition('signed_in');
  $user = get_user($app->store('user'));
  $client = new Services_Twilio($user['sid'], $user['token']);
  $PhoneNumber = $_POST['PhoneNumber'];
  try {
    $number = $client->account->incoming_phone_numbers->create(array(
      'PhoneNumber' => $PhoneNumber
    ));
    $phsid = $number->sid;
    if( !empty($phsid) ){
      $sql = "INSERT INTO numbers (user_id,number,sid) VALUES('{$user['ID']}','{$PhoneNumber}','{$phsid}');";
      $pdo = Db::singleton();
      $pdo->exec($sql);
      $fid = $pdo->lastInsertId();
      $ret = editNumber($phsid, array(
        "FriendlyName" => $PhoneNumber,
        "VoiceUrl" => $mysiteURL."/voice?id=".$fid,
        "VoiceMethod" => "POST",
      ), $user['sid'], $user['token']);
    }
  } catch (Exception $e) {
    $err = urlencode("Error purchasing number: {$e->getMessage()}");
    $app->redirect($app->getBaseUri().'/phone-number?msg='.$err);
    exit(0);
  }
  $msg = urlencode("Thank you for purchasing $PhoneNumber");
  header("Location: index.php?msg=$msg");
  $app->redirect($app->getBaseUri().'/home?msg='.$msg);
  exit(0);
});
$app->route('/voice', function() use ($app){
});
$app->get('/transcribe', function() use ($app){
});
$app->get('/logout', function() use ($app){
  $app->store('user', 0);
  $app->redirect($app->getBaseUri().'/login');
});
$app->get('/home', function() use ($app){
  $app->condition('signed_in');
  $uid = $app->store('user');
  $user = get_user($uid);
  $client = new Services_Twilio($user['sid'], $user['token']);
  $app->render('dashboard', array(
    'user'=>$user,
    'client'=>$client
  ));
});
$app->get('/delete', function() use ($app){
  $app->condition('signed_in');
});
$app->get('/', function() use ($app){
  $app->render('home');
});
$app->listen();

Upload a file called dashboard.php with the following content to your views folder:

<h2>My Number</h2>
<?php
$pdo = Db::singleton();
$sql = "SELECT * FROM `numbers` WHERE `user_id`='{$user['ID']}'";
$res = $pdo->query($sql);
while( $row = $res->fetch() ){
  echo preg_replace("/[^0-9]/", "", $row['number']);
}
try {
?>
<h2>My Call History</h2>
<p>Here are a list of recent calls, you can click any number to call them back, we will call your registered phone number and then the caller</p>
<table width=100% class="table table-hover tabled-striped">
  <thead>
    <tr>
      <th>From</th>
      <th>To</th>
      <th>Start Date</th>
      <th>End Date</th>
      <th>Duration</th>
    </tr>
  </thead>
  <tbody>
<?php
foreach ($client->account->calls as $call) {
  # echo "<p>Call from $call->from to $call->to at $call->start_time of length $call->duration</p>";
  if( !stristr($call->direction, 'inbound') ) continue;
  $type = find_in_list($call->from);
?>
    <tr>
      <td><a href="<?=$uri?>/call?number=<?=urlencode($call->from)?>"><?=$call->from?></a></td>
      <td><?=$call->to?></td>
      <td><?=$call->start_time?></td>
      <td><?=$call->end_time?></td>
      <td><?=$call->duration?></td>
    </tr>
<?php } ?>
  </tbody>
</table>
<?php
} catch (Exception $e) {
  echo 'Error: ' . $e->getMessage();
}
?>
<hr />
<a href="<?=$uri?>/delete" onclick="return confirm('Are you sure you wish to close your account?');">Delete My Account</a>

How it works...

In step 1, we updated the index.php file. In step 2, we uploaded dashboard.php to the views folder.
This file checks if we're logged in using the $app->condition('signed_in') method, which we discussed earlier, and if we are, it displays all the incoming calls we've had to our account. We can then push a button to call one of those numbers and whitelist or blacklist them.

Summary

Thus, in this article, we learned how to send messages and make phone calls from your website using Twilio.

Resources for Article:

Further resources on this subject: Make phone calls, send SMS from your website using Twilio [article] Trunks in FreePBX 2.5 [article] Trunks using 3CX: Part 1 [article]
Packt
17 Feb 2014
11 min read

How to Expand your Knowledge

(For more resources related to this topic, see here.)

One of the most frequently asked questions on help forums probably is, "How can I learn Google Apps Script?" The answer is almost always the same: learn JavaScript and follow the numerous tutorials available on the Internet. No doubt, it is one of the possible ways to learn, but it is also one of the most difficult ways. I shall express my opinion on that subject at the end of this article, but let us first summarize what we really need to be able to use Google Apps Script efficiently.

The first and most important thing we must have is a clear idea of what we want to achieve. This seems a bit silly because we think, "Oh well, of course I know what I want; I just don't know how to do it!" As a matter of fact, this is often not the case.

Let us have an example: a colleague asked me recently how he could count the time he was spending at school for meetings and other administrative tasks, not taking into account his hours as a teacher. This was supposed to be a simple problem, as everyone in our school has a personal calendar in which all the events that we are invited to are recorded. So, he began to search for a way to collect every possible event from his calendar to a spreadsheet and, from there—since he can definitely use a spreadsheet—he intended to do some data filtering to get the result he wanted.

I told him to have a look at the Google Apps Script documentation and see what tools he had to pick up data from calendars and import them into a spreadsheet. A few days later, he came back to me complaining that he didn't find any appropriate methods to do what he needs to. And, in a way, he was right; nowhere is such a workflow explained, and it is actually not surprising. One can't imagine compiling all the possible workflows into a single help resource; there are definitely too many different use cases, each of them needing a particular approach.
We had a discussion where I told him to think about his research as a series of simple and accurate parts and steps before trying to get the whole process in one stroke. The following is what he told me another few days later: "I knew nothing about this macro language, so I discovered that it is based on JavaScript with the addition of Google's own services that use a similar syntax and that the whole thing is composed of functions calling each other and having parameters. Then, I examined the calendar service and saw that it needs so-called date objects to choose a start and end date. Date object methods are pretty well explained on Mozilla's page, so once I got that I had an array of events, I thought what the heck is an array of objects? You gave me the link to this w3schools site, so I took a look at their definition; that was enough for me to go further and discover that I could use a loop to handle each event separately. Google documentation shows all the methods to get events details; that part was easy and now I have all my calendar events with dates, hours, description, title. All of it! I tell you." I'm not going to transcribe all of our conversation—it finally took a couple of hours—but towards the end, he was describing the process so well that the actual writing of his script was almost just a formality. With the help of the Content assist (autocomplete) feature of the script editor and a couple of browser tabs left open on JavaScript and Google documentation, he managed to write his script in one day. Of course, the script was not perfect and by no way optimized his speed or gave  nice-looking results, but it worked and he had the data he was looking for. At that point, he could post his script on a help forum if something went wrong or try to improve another version if he's a perfectionist, but that depends only on his will to go further or not. I would simply say one thing: you will learn what you need. 
If you don't need it, don't try to learn it as you will forget it faster than you learned it. If you do, then be prepared to need something else right after; it is an endless journey!

JavaScript versus Google Apps Script

The following is stated on the overview of the Google Apps Script documentation page:

Google Apps Script is a scripting language based on JavaScript that lets you do new and cool things with Google Apps like Docs, Sheets, and Forms.

They should use a bigger typeface to make it more visible! The keyword here is based on JavaScript, because it does indeed use most of JavaScript Version 1.6 (with some portions of Version 1.7 and Version 1.8) and its general syntax. But, it has so many other methods that knowing only JavaScript is clearly not sufficient to use it adequately. I would even say that you can learn it step-by-step when you need it, looking for information on a specific item each time you use a new type of object.

The following is the code that was used to get the integer part of the result that uses the getTime method:

function myAgeInHours(){
  var myBirthDate = new Date('1958/02/19 02:00:00').getTime();
  myBirthDate = parseInt(myBirthDate/3600000, 10);
  var today = parseInt(new Date().getTime()/3600000, 10);
  return today - myBirthDate;
}

We looked at the documentation about the Date object to find the getTime() method and then found parseInt to get the integer part of the result. Well, I'm convinced that this approach is more efficient than spending hours on a site or in a book that shows all JavaScript information from A to Z. We have the opportunity to have powerful search engines in our browsers, so let's use them; they always find the answer for us in less time than it takes to write the question.

Concerning methods specific to Google Apps Script, I think the approach should be different. The Google API documentation is pretty well organized and is full of code examples that clearly show us how to use almost every single method.
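The arithmetic in myAgeInHours is just a milliseconds-to-hours conversion: getTime() returns milliseconds since the epoch, and 1000 ms × 60 s × 60 min = 3,600,000 ms per hour. Since Google Apps Script shares JavaScript's Date object, the same conversion can be checked with fixed dates in any JS runtime (the helper name here is my own, not part of the original script):

```javascript
// Hours elapsed between two dates, truncated to an integer —
// the same getTime()/3600000 conversion used in myAgeInHours.
function hoursBetween(a, b) {
  return parseInt((b.getTime() - a.getTime()) / 3600000, 10);
}

// Date.UTC keeps the example independent of the machine's time zone.
var start = new Date(Date.UTC(2014, 0, 1, 0, 0, 0));
var end = new Date(Date.UTC(2014, 0, 2, 6, 30, 0));
console.log(hoursBetween(start, end)); // → 30
```

Note that parseInt truncates the fractional half hour (30.5 becomes 30), which is exactly the "integer part of the result" the original snippet was after.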
If we start a project in a spreadsheet, it is a good idea to carefully read the section about spreadsheets (https://developers.google.com/apps-script/reference/spreadsheet) at least once and just check if what it says makes any sense. For example, in the Sheet class, I found this description:

Returns the range with the top left cell at the given coordinates, and with the given number of rows.

The following screenshot displays the same description:

If I understand what a range and coordinates are, then I probably know enough to be able to use that method (getRange(row, column, numRows)) or a similar one. You want me to tell you the truth? I didn't know we could get a range this way, by simply defining the top-left cell and just the number of rows (only three parameters). I always use the next one in the list, which is shown as follows:

The description says:

Returns the range with the top left cell at the given coordinates with the given number of rows and columns.

So after all this time I spent on dozens of spreadsheet scripts, there still are methods that I can't even imagine exist! That's actually a nice confirmation of what I was suggesting: one doesn't need to know everything to be able to use it, but it's always a good idea to read the docs from time to time.

Infinite resources

JavaScript is a very popular language; there are thousands of websites that show us examples and explain methods and functions. To these websites we must add all the forums and Q&A sites that return many results when we search for something on Google (or any other search engine), and that is actually an unforeseen difficulty. It happens quite often that we find false information or code snippets that simply don't work, either because they have typos in them or because they are so badly written that they work only in a very specific and peculiar situation. My personal solution is to use only a couple of websites and perform a search on their search engines, avoiding all the sources I'm not sure of.
Maybe I miss something at times, but at least the information I get is trustworthy. Last but certainly not least, the help forum recommended by the Google Apps Script team, http://stackoverflow.com/questions/tagged/google-apps-script (with the google-apps-script tag), is certainly the best resource available. With more than 5000 questions (as of January 2014), the forum probably has threads about every possible use case, and an important part of it has answers as well. There are of course other interesting tags: JavaScript, Google docs, Google spreadsheets, and a few even more specific ones. I have rarely seen really bad answers—although it does happen sometimes—simply because so many people read these posts that they generally flag or comment on answers that show wrong information. There are also people from Google who regularly keep an eye on it and clarify any ambiguous response.

Being a newbie is, by definition, temporary

When I began to use Google spreadsheets and scripts, I found the Google Group Help forum (which does not exist anymore) an invaluable source of information and help, so I asked dozens of questions—some of them very basic and naive—and always got answers. After a while, since I was spending hours on this forum reading every post I found, I began to answer too. I was so proud of being able to answer a question! It was almost like passing an examination; I knew that one of the experts there was going to read what I wrote and evaluate my knowledge; quite stressful but also satisfying when you don't fail! So after a couple of months I gained my first level point (on the Google Group forum, there are no reputation points but levels, starting from 1 for newly arrived members up to TC (Top Contributors), whose level is unknown but is generally more than 15 or 20; anyway, that's not important).
That little story is just a way to encourage any beginner to spend some time on this forum, consider every question as a challenge, and try to answer it. Of course, there is no need to publish your answer every time, as there are chances that you may get it all wrong, but just use this as an exercise that will give you more and more expertise. From time to time, you'll be able to be the first or best answerer and gain a few reputation points; consider it a game, just a funny game where all you can finally win is knowledge and all you can lose is your newbie status—not a bad deal after all!

Try to find your own best learning method

I'm certainly not pretending that I know the best learning method for anyone. All the tips I presented in the previous section did work for me—and for a few other people I know—but there is no magic formula that would suit everyone. I know that each of us has a different background and follows a different path, but I wanted to say loud and clear that you don't need to be a graduate in IT to begin with Google Apps Script, nor do you have to spend hours learning rules and conventions. Practice will make it easier every day, and motivation will give you enough energy to complete your projects, from simple ones to more ambitious ones.

Summary

This article has given an overview of the many resources available to improve your learning experience. There are certainly more that I don't know of but, as I already mentioned a few times before, we have powerful search engines in our browsers to help us. We also have to keep in mind that Google Apps Script will probably be quite different in a couple of years from what it is today.

Resources for Article:

Further resources on this subject:

Google Apps: Surfing the Web [Article]

Developing apps with the Google Speech APIs [Article]

Data Modeling and Scalability in Google App [Article]
Packt
14 Feb 2014
6 min read

CreateJS – Performing Animation and Transforming Function

(For more resources related to this topic, see here.)

Creating animations with CreateJS

As you may already know, creating animations in web browsers during web development is a difficult job, because you have to write code that has to work in all browsers; this is called browser compatibility. The good news is that CreateJS provides modules to write and develop animations in web browsers without thinking about browser compatibility. CreateJS modules can do this job very well, and all you need to do is work with the CreateJS API.

Understanding TweenJS

TweenJS is one of the modules of CreateJS that helps you develop animations in web browsers. We will now introduce TweenJS.

The TweenJS JavaScript library provides a simple but powerful tweening interface. It supports tweening of both numeric object properties and CSS style properties, and allows you to chain tweens and actions together to create complex sequences.—TweenJS API Documentation

What is tweening?

Let us understand precisely what tweening means:

Inbetweening or tweening is the process of generating intermediate frames between two images to give the appearance that the first image evolves smoothly into the second image.—Wikipedia

As with other CreateJS subsets, TweenJS contains many functions and methods; however, we are going to work with, and create examples for, specific basic methods, based on which you can read the rest of the TweenJS documentation to create more complex animations.

Understanding API and methods of TweenJS

In order to create animations in TweenJS, you don't have to work with a lot of methods. There are a few functions that help you to create animations. Following are all the methods with a brief description:

get: It returns a new tween instance.

to: It queues a tween from the current values to the target properties.

set: It queues an action to set the specified properties on the specified target.

wait: It queues a wait (essentially an empty tween).
call: It queues an action to call the specified function.

play: It queues an action to play (un-pause) the specified tween.

pause: It queues an action to pause the specified tween.

The following is an example of using the tweening API:

var tween = createjs.Tween.get(myTarget).to({x:300},400)
  .set({label:"hello!"}).wait(500)
  .to({alpha:0,visible:false},1000)
  .call(onComplete);

The previous example will create a tween, which:

Tweens the target to an x value of 300 with a duration of 400 ms and sets its label to hello!.

Waits 500 ms.

Tweens the target's alpha property to 0 with a duration of 1 s and sets the visible property to false.

Finally, calls the onComplete function.

Creating a simple animation

Now, it's time to create our simplest animation with TweenJS. It is a simple but powerful API, which gives you the ability to develop animations with method chaining.

Scenario

The animation has a red ball that comes from the top of the Canvas element and then drops down. In the preceding screenshot, you can see all the steps of our simple animation; consequently, you can predict what we need to do to prepare this animation. In our animation, we are going to use two methods: get and to. The following is the complete source code for our animation:

var canvas = document.getElementById("canvas");
var stage = new createjs.Stage(canvas);
var ball = new createjs.Shape();
ball.graphics.beginFill("#FF0000").drawCircle(0, 0, 50);
ball.x = 200;
ball.y = -50;
var tween = createjs.Tween.get(ball)
  .to({ y: 300 }, 1500, createjs.Ease.bounceOut);
stage.addChild(ball);
createjs.Ticker.addEventListener("tick", stage);

In the first and second lines of the JavaScript code snippet, two variables are declared, namely, the canvas and stage objects. In the next line, the ball variable is declared, which contains our shape object. In the following line, we drew a red circle with the drawCircle method. Then, in order to set the coordinates of our shape object outside the viewport, we set x to 200 px and y to -50 px, just above the top of the Canvas.
After this, we created a tween variable, which holds the Tween object; then, using TweenJS method chaining, the to method is called with a duration of 1500 ms and the y property set to 300 px. The third parameter of the to method represents the ease function of the tween, which we set to bounceOut in this example. In the following lines, the ball variable is added to the Stage, and the tick event is added to the Ticker class to keep the Stage updated while the animation is playing. The Canvas element itself sits in the page markup; all animations and shapes are rendered in this element.

Transforming shapes

CreateJS provides some functions to transform shapes easily on the Stage. Each DisplayObject has a setTransform method that allows the transforming of a DisplayObject (like a circle). This shortcut method is used to quickly set the transform properties on the display object. All its parameters are optional; omitted parameters will have the default value set:

setTransform([x=0], [y=0], [scaleX=1], [scaleY=1], [rotation=0], [skewX=0], [skewY=0], [regX=0], [regY=0])

The following is an example call:

displayObject.setTransform(100, 100, 2, 2);

Furthermore, you can change all the properties via the DisplayObject directly (like scaleY and scaleX).

An example of the transforming function

As an instance of using the shape transforming feature with CreateJS, we are going to extend our previous example:

var angle = 0;
window.ball;
var canvas = document.getElementById("canvas");
var stage = new createjs.Stage(canvas);
ball = new createjs.Shape();
ball.graphics.beginFill("#FF0000").drawCircle(0, 0, 50);
ball.x = 200;
ball.y = 300;
stage.addChild(ball);

function tick(event) {
  angle += 0.025;
  var scale = Math.cos(angle);
  ball.setTransform(ball.x, ball.y, scale, scale);
  stage.update(event);
}

createjs.Ticker.addEventListener("tick", tick);

In this example, we have a red circle, similar to the previous example of tweening.
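The only math in this effect is sampling a cosine once per tick. The values the tick handler feeds to setTransform can be checked numerically; a quick sketch in Python (purely for verification, using the same 0.025 step) shows how they sweep:

```python
import math

def scales(steps, delta=0.025):
    # Scale value at each tick, as computed in the tick() handler above.
    return [math.cos(delta * i) for i in range(1, steps + 1)]

values = scales(steps=300)
# The samples stay within [-1, 1] and sweep smoothly between the extremes.
print(round(min(values), 3), round(max(values), 3))
```

Negative scale values mirror the shape rather than merely shrinking it, which is why the animation appears to pulse through zero size.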
We set the coordinates of the circle to 200 and 300 and added the circle to the stage object. In the next lines, we have a tick function that transforms the shape of the circle. Inside this function, we have an angle variable that increases with each call. We then set the ball's scale, on both the x and y axes, to the cosine of the angle variable. The transforming done is similar to the following screenshot:

This is a basic example of transforming shapes in CreateJS, but obviously, you can develop better transformations by playing with a shape's properties and values.

Summary

In this article, we covered how to animate and transform objects on the page using CreateJS.

Resources for Article:

Further resources on this subject:

Introducing a feature of IntroJs [Article]

So, what is Node.js? [Article]

So, what is Ext JS? [Article]
Packt
14 Feb 2014
3 min read

Adding health checks

(For more resources related to this topic, see here.)

A health check is a runtime test for our application. We are going to create a health check that tests the creation of new contacts using the Jersey client. The health check results are accessible through the admin port of our application, which by default is 8081.

How to do it…

To add a health check, perform the following steps:

Create a new package called com.dwbook.phonebook.health and a class named NewContactHealthCheck in it:

import javax.ws.rs.core.MediaType;
import com.codahale.metrics.health.HealthCheck;
import com.dwbook.phonebook.representations.Contact;
import com.sun.jersey.api.client.*;

public class NewContactHealthCheck extends HealthCheck {
  private final Client client;

  public NewContactHealthCheck(Client client) {
    super();
    this.client = client;
  }

  @Override
  protected Result check() throws Exception {
    WebResource contactResource = client
        .resource("http://localhost:8080/contact");
    ClientResponse response = contactResource.type(
        MediaType.APPLICATION_JSON).post(
            ClientResponse.class,
            new Contact(0, "Health Check First Name",
                "Health Check Last Name", "00000000"));
    if (response.getStatus() == 201) {
      return Result.healthy();
    } else {
      return Result.unhealthy("New Contact cannot be created!");
    }
  }
}

Register the health check with the Dropwizard environment by using the HealthCheckRegistry#register() method within the #run() method of the App class. You will first need to import com.dwbook.phonebook.health.NewContactHealthCheck. The HealthCheckRegistry can be accessed using the Environment#healthChecks() method:

// Add health checks
e.healthChecks().register("New Contact health check",
    new NewContactHealthCheck(client));

After building and starting your application, navigate with your browser to http://localhost:8081/healthcheck:

The results of the defined health checks are presented in the JSON format.
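Stripped of the Jersey plumbing, the decision the check makes is tiny. The following is a hedged Python sketch of the same pass/fail logic; the post_contact callable stands in for the real HTTP POST and is purely hypothetical:

```python
def check_new_contact(post_contact):
    # Mirrors HealthCheck#check(): HTTP 201 Created means healthy,
    # anything else is reported as unhealthy with a message.
    status = post_contact()  # stand-in for the Jersey POST to /contact
    if status == 201:
        return True, "healthy"
    return False, "New Contact cannot be created!"

# Simulate both outcomes without a running server.
print(check_new_contact(lambda: 201))  # (True, 'healthy')
print(check_new_contact(lambda: 500))  # (False, 'New Contact cannot be created!')
```

The same structure applies to any health check: gather one observation, compare it to the expected state, and map the comparison to healthy or unhealthy.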
In case the custom health check we just created, or any other health check, fails, it will be flagged as "healthy": false, letting you know that your application faces runtime problems.

How it works…

We used exactly the same code used by our client class in order to create a health check; that is, a runtime test that confirms that new contacts can be created by performing HTTP POST requests to the appropriate endpoint of the ContactResource class. This health check gives us the required confidence that our web service is functional.

All we need for the creation of a health check is a class that extends HealthCheck and implements the #check() method. In the class's constructor, we call the parent class's constructor; the name that identifies our health check is the one specified when registering it. In the #check() method, we literally implement a check. We check that everything is as it should be. If so, we return Result.healthy(), else we return Result.unhealthy(), indicating that something is going wrong.

Summary

This article showed what a health check is and demonstrated how to add one. The health check we created tested the creation of new contacts using the Jersey client.

Resources for Article:

Further resources on this subject:

RESTful Web Services – Server-Sent Events (SSE) [Article]

Connecting to a web service (Should know) [Article]

Web Services and Forms [Article]
Packt
20 Jan 2014
6 min read

Search Using Beautiful Soup

(For more resources related to this topic, see here.)

Searching with find_all()

The find() method was used to find the first result matching a particular search criteria that we applied on a BeautifulSoup object. As the name implies, find_all() will give us all the items matching the search criteria we defined. The different filters that we see in find() can be used in the find_all() method. In fact, these filters can be used in any searching method, such as find_parents() and find_siblings(). Let us consider an example of using find_all().

Finding all tertiary consumers

We saw how to find the first and second primary consumer. If we need to find all the tertiary consumers, we can't use find(). In this case, find_all() comes in handy.

all_tertiaryconsumers = soup.find_all(class_="tertiaryconsumerslist")

The preceding code line finds all the tags with the tertiaryconsumerslist class. If we do a type check on this variable, we can see that it is nothing but a list of tag objects, as follows:

print(type(all_tertiaryconsumers))
#output <class 'list'>

We can iterate through this list to display all tertiary consumer names by using the following code:

for tertiaryconsumer in all_tertiaryconsumers:
  print(tertiaryconsumer.div.string)
#output
lion
tiger

Understanding parameters used with find_all()

Like find(), the find_all() method also has a similar set of parameters, with an extra parameter, limit, as shown in the following code line:

find_all(name,attrs,recursive,text,limit,**kwargs)

The limit parameter is used to specify a limit on the number of results that we get. For example, from the e-mail ID sample we saw, we can use find_all() to get all the e-mail IDs.
Refer to the following code:

email_ids = soup.find_all(text=emailid_regexp)
print(email_ids)
#output [u'[email protected]',u'[email protected]',u'[email protected]']

Here, if we pass limit, it will limit the result set to the limit we impose, as shown in the following example:

email_ids_limited = soup.find_all(text=emailid_regexp,limit=2)
print(email_ids_limited)
#output [u'[email protected]',u'[email protected]']

From the output, we can see that the result is limited to two. The find() method is find_all() with limit=1. We can pass True or False values to the find methods. If we pass True to find_all(), it will return all tags in the soup object. In the case of find(), it will be the first tag within the object. The print(soup.find_all(True)) line of code will print out all the tags associated with the soup object. In the case of searching for text, passing True will return all text within the document, as follows:

all_texts = soup.find_all(text=True)
print(all_texts)
#output [u'\n', u'\n', u'\n', u'\n', u'\n', u'plants', u'\n', u'100000', u'\n', u'\n', u'\n', u'algae', u'\n', u'100000', u'\n', u'\n', u'\n', u'\n', u'\n', u'deer', u'\n', u'1000', u'\n', u'\n', u'\n', u'rabbit', u'\n', u'2000', u'\n', u'\n', u'\n', u'\n', u'\n', u'fox', u'\n', u'100', u'\n', u'\n', u'\n', u'bear', u'\n', u'100', u'\n', u'\n', u'\n', u'\n', u'\n', u'lion', u'\n', u'80', u'\n', u'\n', u'\n', u'tiger', u'\n', u'50', u'\n', u'\n', u'\n', u'\n', u'\n']

The preceding output prints every text content within the soup object, including the newline characters. Also, in the case of text, we can pass a list of strings, and find_all() will find every string defined in the list:

all_texts_in_list = soup.find_all(text=["plants","algae"])
print(all_texts_in_list)
#output [u'plants', u'algae']

This is the same in the case of searching for tags, attribute values of tags, custom attributes, and the CSS class.
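To see why find() is just find_all() with limit=1, the limit semantics are easy to model without Beautiful Soup at all. The following toy Python version (the names are mine, not part of the bs4 API) collects matches in order and stops early:

```python
def find_all(items, predicate, limit=None):
    # Collect matches in document order, stopping once `limit` is reached.
    results = []
    for item in items:
        if predicate(item):
            results.append(item)
            if limit is not None and len(results) == limit:
                break
    return results

def find(items, predicate):
    # find() behaves like find_all(..., limit=1), returning None on no match.
    matches = find_all(items, predicate, limit=1)
    return matches[0] if matches else None

emails = ["a@example.com", "plants", "b@example.com"]
is_email = lambda s: "@" in s
print(find_all(emails, is_email))           # ['a@example.com', 'b@example.com']
print(find_all(emails, is_email, limit=1))  # ['a@example.com']
print(find(emails, lambda s: False))        # None
```

This also explains the different no-match behavior: the list-returning form naturally yields an empty list, while the single-result form yields None.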
For finding all the div and li tags, we can use the following code line:

div_li_tags = soup.find_all(["div","li"])

Similarly, for finding tags with the producerlist and primaryconsumerlist classes, we can use the following code line:

all_css_class = soup.find_all(class_=["producerlist","primaryconsumerlist"])

Both find() and find_all() search an object's descendants (that is, all children coming after it in the tree), their children, and so on. We can control this behavior by using the recursive parameter. If recursive = False, the search happens only on an object's direct children. For example, in the following code, the search for div and li tags happens only at the direct children. Since the direct child of the soup object is html, the following code will give an empty list:

div_li_tags = soup.find_all(["div","li"],recursive=False)
print(div_li_tags)
#output []

If find_all() can't find results, it will return an empty list, whereas find() returns None.

Navigation using Beautiful Soup

Navigation in Beautiful Soup is almost the same as the searching methods. In navigating, instead of methods, there are certain attributes that facilitate the navigation. So each Tag or NavigableString object will be a member of the resulting tree, with the Beautiful Soup object placed at the top and other objects as the nodes of the tree. The following code snippet is an example for an HTML tree:

html_markup = """<div class="ecopyramid">
<ul id="producers">
<li class="producerlist">
<div class="name">plants</div>
<div class="number">100000</div>
</li>
<li class="producerlist">
<div class="name">algae</div>
<div class="number">100000</div>
</li>
</ul>
</div>"""

For the previous code snippet, the following HTML tree is formed:

In the previous figure, we can see that Beautiful Soup is the root of the tree, the Tag objects make up the different nodes of the tree, while NavigableString objects make up the leaves of the tree.
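The recursive parameter is easiest to understand on a toy tree. The sketch below is a hypothetical Python model (not the bs4 implementation) in which each node is a dict holding a name and its children:

```python
def search(node, name, recursive=True):
    # Walk children the way find_all() walks descendants; with
    # recursive=False, only the node's direct children are inspected.
    matches = []
    for child in node.get("children", []):
        if child["name"] == name:
            matches.append(child)
        if recursive:
            matches.extend(search(child, name))
    return matches

html_tree = {"name": "html", "children": [
    {"name": "div", "children": [
        {"name": "li", "children": []},
        {"name": "li", "children": []},
    ]},
]}

print(len(search(html_tree, "li")))                   # 2: full descendant search
print(len(search(html_tree, "li", recursive=False)))  # 0: html's only child is div
```

The second call returns nothing for exactly the reason described above: the root's only direct child is the html-level element, so a non-recursive search never reaches the li tags.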
Navigation in Beautiful Soup is intended to help us visit the nodes of this HTML/XML tree. From a particular node, it is possible to:

Navigate down to the children

Navigate up to the parent

Navigate sideways to the siblings

Navigate to the next and previous objects parsed

We will be using the previous html_markup as an example to discuss the different navigations using Beautiful Soup.

Summary

In this article, we discussed in detail the different search methods in Beautiful Soup, namely, find(), find_all(), find_next(), and find_parents(); code examples for a scraper using search methods to get information from a website; and understanding the application of search methods in combination. We also discussed in detail the different navigation methods provided by Beautiful Soup: methods specific to navigating downwards, upwards, and sideways, and to the previous and next elements of the HTML tree.

Resources for Article:

Further resources on this subject:

Web Services Testing and soapUI [article]

Web Scraping with Python [article]

Plotting data using Matplotlib: Part 1 [article]
Packt
19 Dec 2013
18 min read

CRUD Applications using Laravel 4

(For more resources related to this topic, see here.)

Getting familiar with Laravel 4

Let's begin the journey and install Laravel 4. Now, if everything is installed correctly, you will be greeted by this beautiful screen, as shown in the following screenshot, when you hit your browser with http://localhost/laravel/public or http://localhost/<installeddirectory>/public:

Now that we have installed Laravel correctly, you may be wondering: how can I use Laravel? How do I create apps with Laravel? Or why and how is this screen shown to us? What's behind the scenes? How does Laravel 4 set this screen for us? So let's review that.

When you visit http://localhost/laravel/public, Laravel 4 detects that you are requesting the default route, which is "/". You might be wondering what a route is if you are not familiar with the MVC world. Let me explain that. In traditional web applications, we use a URL with a page name, for example:

http://www.shop.com/products.php

The preceding URL will be bound to the page products.php on the web server hosting shop.com. We can assume that it displays all the products from the database. Now say, for example, we want to display the category of books from all the products. You will say, "Hey, it's easy!" Just add the category ID into the URL as follows:

http://www.shop.com/products.php?cat=1

Then put a filter in the page products.php that will check whether the category ID is passed. This sounds perfect, but what about pagination and other categories? Soon your client will ask you to change one of your category page layouts, and you will hack your code even more.
And your application URLs will look like the following:

http://www.shop.com/products.php?cat=2
http://www.shop.com/products.php?cat=3&page=1&total=20
http://www.shop.com/products.php?cat=3&page=1&total=20&layout=1

If you look at your code after six months, you will be looking at one huge products.php page with all of your business and view code mixed in one large file. You wouldn't remember those easy hacks you did in order to manage client requests. On top of that, a client or the client's SEO executive might ask you why all the URLs are so badly formatted and why they are not human friendly. In a way, they are right. Your URLs are not as pretty as the following:

http://www.shop.com/products
http://www.shop.com/products/books
http://www.shop.com/products/cloths

The preceding URLs are human friendly. Users can easily change categories themselves. In addition to that, your client's SEO executives will love those URLs, just as a search engine likes them. You might be puzzled now; how do you do that? Here my friend MVC (Model View Controller) comes into the picture. MVC frameworks are meant specifically for doing this. It's one of the core goals of using an MVC framework in web development.

So let's go back to our topic "routing"; routing means decoupling your URL request and assigning it to some specific action via your controller/route. In the Laravel MVC world, you register all your routes in a route file and assign an action to them. All your routes are generally found at /app/routes.php. If you open your newly downloaded Laravel installation's routes.php file, you will notice the following code:

Route::get('/', function()
{
    return View::make('hello');
});

The preceding code registers the route / (the default URL) with the view /app/views/hello.php. Here the view is just an HTML file. Generally, view files are used for managing your presentation logic. So check /app/views/hello.php, or better, let's create an about page for our application ourselves.
Let's register a route about by adding the following code to app/routes.php:

Route::get('about', function()
{
    return View::make('about');
});

We would need to create a view at app/views/about.php. So create the file and insert the following code into it:

<!doctype html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>About my little app</title>
</head>
<body>
  <h1>Hello Laravel 4!</h1>
  <p> Welcome to the Awesomeness! </p>
</body>
</html>

Now head over to your browser and run http://localhost/laravel/public/about. You will be greeted with the following output:

Hello Laravel 4!
Welcome to the Awesomeness!

Isn't it easy? You can define your route and a separate view for each type of request. Now you might be thinking, what about Controllers, as the term MVC has C for Controllers? And isn't it difficult to create routes and views for each action? What advantage will we have if we use the preceding pattern? Well, we found that mapping URLs to particular actions has clear advantages over the traditional one-file-based method. First, you are organizing your code way better, as you will have actions responding to specific URLs mapped in the route file. Any developer can recognize routes and see what's going on with your code. Developers do not have to check many files to see which files are using which code. Your presentation logic is separated, so if a designer wants to change something, he will know he needs to look at the view folder of your application.

Now about Controllers; they allow us to group related actions into a single class. So in a typical MVC project, there will be one user Controller that will be responsible for all user-related actions, such as registering, logging in, editing a profile, and changing the password. Generally, routes are used for small applications or for creating static pages quickly. Controllers provide more in-depth options to create a group of methods that belong to a specific class related to the application.
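The mapping that routing gives you, URL plus HTTP verb in and handler out, is worth seeing in isolation. Here is a hypothetical Python sketch of the dispatch idea (none of these names come from Laravel's actual implementation):

```python
routes = {}

def register(method, path, handler):
    # Analogous to Route::get()/Route::post(): one handler per verb+path pair.
    routes[(method, path)] = handler

register("GET", "/about", lambda: "About my little app")
register("POST", "/register", lambda: "saving user...")

def dispatch(method, path):
    handler = routes.get((method, path))
    return handler() if handler else "404 Not Found"

print(dispatch("GET", "/about"))   # About my little app
print(dispatch("POST", "/about"))  # 404 Not Found: same path, different verb
```

The key property is that the verb is part of the lookup key, which is exactly what lets the same URL behave differently for GET and POST requests.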
Here is how we can create Controllers in Laravel 4. Open your app/routes.php file and add the following code:

Route::get('contact', 'PagesController@contact');

The preceding code will register the http://yourapp.com/contact URL with the contact method of the Pages Controller. So let's write a Pages Controller. Create a file PagesController.php at /app/controllers/ in your Laravel 4 installation directory. The following are the contents of the PagesController.php file:

<?php

class PagesController extends BaseController {

  public function contact()
  {
    return View::make('hello');
  }

}

Here BaseController is a class provided by Laravel so we can place our Controllers' shared logic in a common class. It extends the framework's Controller class and provides the Controller functionality. You can check BaseController.php in the Controllers directory to add shared logic.

Controllers versus routes

So you are wondering now, "What's the difference between Controllers and routes? Which one should I use: Controllers or routes?" Here are the differences between Controllers and routes:

A disadvantage of routes is that you can't share code between routes, as routes work via Closure functions, and the scope of a function is bound within that function.

Controllers give a structure to your code. You can define your system in well-grouped classes, which are divided in such a way that it makes sense, for example, users, dashboard, products, and so on.

Compared to routes, Controllers have only one disadvantage, and it's that you have to create a file for each Controller; however, if you think in terms of organizing the code in a large application, it makes more sense to use Controllers.

Creating a simple CRUD application with Laravel 4

Now that we have a basic understanding of how we can create pages, let's create a simple CRUD application with Laravel 4. The application we want to create will manage the users of our application.
We will create the following list of features for our application:

List users (read users from the database)

Create new users

Edit user information

Delete user information

Adding pagination to the list of users

Now, to start off with things, we need to set up a database. So, if you have phpMyAdmin installed with your local web server setup, head over to http://localhost/phpmyadmin; if you don't have phpMyAdmin installed, use a MySQL admin tool such as MySQL Workbench to connect to your database server and create a new database.

Now we need to configure Laravel 4 to connect with our database. So head over to your Laravel 4 application folder, open /app/config/database.php, change the MySQL array, and match your current database settings. Here is the MySQL database array from the database.php file:

'mysql' => array(
    'driver'    => 'mysql',
    'host'      => 'localhost',
    'database'  => '<yourdbname>',
    'username'  => 'root',
    'password'  => '<yourmysqlpassword>',
    'charset'   => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix'    => '',
),

Now we are ready to work with the database in our application. Let's first create the database table Users via the following SQL queries from phpMyAdmin or any MySQL database admin tool:

CREATE TABLE IF NOT EXISTS `users` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `username` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `password` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `email` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `phone` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=3 ;

Now let's seed some data into the Users table so that when we fetch the users we won't get empty results.
Run the following queries in your database admin tool:

INSERT INTO `users` (`id`, `username`, `password`, `email`, `phone`, `name`, `created_at`, `updated_at`) VALUES
(1, 'john', 'johndoe', '[email protected]', '123456', 'John', '2013-06-07 08:13:28', '2013-06-07 08:13:28'),
(2, 'amy', 'amy.deg', '[email protected]', '1234567', 'amy', '2013-06-07 08:14:49', '2013-06-07 08:14:49');

Listing the users – read users from the database

Let's read users from the database. We will need the following pieces to do so:

A route that will lead to our page
A Controller that will handle our method
An Eloquent Model that will connect to the database
A View that will display our records in the template

So let's create our route at /app/routes.php. Add the following line to the routes.php file:

Route::resource('users', 'UserController');

If you noticed previously, we used Route::get for displaying our page Controller, but now we are using resource. So what's the difference?

In general we face two types of requests during web projects: GET and POST. We generally use these HTTP request types to manipulate our pages; that is, you will check whether the page has any POST variables set, and if not, you will display the user form to enter data. As the user submits the form, it sends a POST request, since we generally define the <form method="post"> tag in our pages. Based on the request type, we then write code to perform actions such as inserting user data into our database or filtering records. What Laravel provides is the ability to tap into either a GET or POST request directly via routes and send it to the appropriate method. Here is an example:

Route::get('/register', 'UserController@showUserRegistration');
Route::post('/register', 'UserController@saveUser');

See the difference: here we are registering the same URL, /register, but we define its GET method so Laravel can call the UserController class' showUserRegistration method.
If it's a POST request, Laravel will call the saveUser method of the UserController class. You might be wondering what the benefit of this is. Well, six months later, if you want to know how something happens in your app, you can just check out the routes.php file and see which Controller, and which method of that Controller, handles the part you are interested in, whether you are developing it further or fixing a bug. Even a developer who is new to your project will be able to understand how things work and can easily help move your project along, because he can grasp the structure of your application just by reading routes.php.

Now imagine the routes you would need for editing, deleting, or displaying a user. A resource Controller saves you from this trouble: a single line of route will map multiple RESTful actions to our resource Controller. It automatically maps the following actions to HTTP verbs:

HTTP VERB    ACTION
GET          READ
POST         CREATE
PUT          UPDATE
DELETE       DELETE

On top of that, you can generate your Controller via a simple artisan command line, using the following command:

$ php artisan controller:make UserController

This will generate UserController.php with all the RESTful methods left empty, so you will have a skeleton structure to play with. Here is what we will have after running the preceding command:

class UserController extends BaseController {

    /**
     * Display a listing of the resource.
     *
     * @return Response
     */
    public function index()
    {
        //
    }

    /**
     * Show the form for creating a new resource.
     *
     * @return Response
     */
    public function create()
    {
        //
    }

    /**
     * Store a newly created resource in storage.
     *
     * @return Response
     */
    public function store()
    {
        //
    }

    /**
     * Display the specified resource.
     *
     * @param  int  $id
     * @return Response
     */
    public function show($id)
    {
        //
    }

    /**
     * Show the form for editing the specified resource.
     *
     * @param  int  $id
     * @return Response
     */
    public function edit($id)
    {
        //
    }

    /**
     * Update the specified resource in storage.
     *
     * @param  int  $id
     * @return Response
     */
    public function update($id)
    {
        //
    }

    /**
     * Remove the specified resource from storage.
     *
     * @param  int  $id
     * @return Response
     */
    public function destroy($id)
    {
        //
    }

}

Now let's see the relationship that our single line of route declaration created with our generated Controller:

HTTP VERB    Path                Controller Action/method
GET          /users              index
GET          /users/create       create
POST         /users              store
GET          /users/{id}         show (individual record)
GET          /users/{id}/edit    edit
PUT          /users/{id}         update
DELETE       /users/{id}         destroy

As you can see, a resource Controller really makes your work easy. You don't have to create lots of routes, and because Laravel 4's artisan command-line generator can generate resourceful Controllers, you will write much less boilerplate code. You can also view the list of all the routes in your project by launching the command line from the root of your project and running:

$ php artisan routes

Now let's get back to our basic task, that is, reading users. We now know that we have UserController.php at /app/controllers with the index method, which will be executed when somebody visits http://localhost/laravel/public/users. So let's edit the Controller file to fetch data from the database.

As you might remember, we will need a Model for that. But how do we define one, and what is the use of Models? You might be wondering, can't we just run queries directly? Well, Laravel does support plain queries through the DB class, but Laravel also has Eloquent, which gives us our table as a database object; and what's great about objects is that we can play around with their methods. So let's create a Model. If you check the path /app/models/User.php, you will see that a User Model is already defined. It's there because Laravel ships with some basic user authentication.
Generally, you can create your Model using the following code:

class User extends Eloquent {}

Now in your Controller you can fetch the user object using the following code:

$users = User::all();
$users->toArray();

Yeah! It's that simple. No database connection! No queries! Isn't it magic? It's the simplicity of Eloquent objects that many people like in Laravel. But you have the following questions, right?

How does the Model know which table to fetch from?
How does the Controller know what a User is?
How does the fetching of user records work? We don't have all those methods in the User class, so how did it work?

Models in Laravel use the lowercase, plural name of the class as the table name unless another name is explicitly specified. So in our case, User was converted to the lowercase plural users and bound to the users table. Models are loaded automatically by Laravel, so you don't have to include a reference to the Model file. Each Model inherits from an Eloquent instance that resolves the methods defined in the Model.php file at vendor/laravel/framework/src/Illuminate/Database/Eloquent/, such as all, insert, update, and delete. Our User class inherits those methods, and as a result we can fetch records via User::all().

So now let's try to fetch users from our database via the Eloquent object. I am updating the index method in our app/controllers/UserController.php, as that is the method responsible for the listing as per the REST convention we are using via the resource Controller:

public function index()
{
    $users = User::all();
    return View::make('users.index', compact('users'));
}

Now let's look at the View part. Before that, we need to know about Blade. Blade is a templating engine provided by Laravel. Blade has a very simple syntax, and you can identify most Blade expressions within your view files as they begin with @. To print anything with Blade, you can use the {{ $var }} syntax.
Its PHP-equivalent syntax would be:

<?php echo $var; ?>

Now back to our View. First of all, we need to create a view file at /app/views/users/index.blade.php, as our statement returns the view named users.index. We are passing a compacted users array to this view. So here is our index.blade.php file:

@extends('layouts.user')

@section('main')
<h1>All Users</h1>
<p>{{ link_to_route('users.create', 'Add new user') }}</p>
@if ($users->count())
<table class="table table-striped table-bordered">
    <thead>
        <tr>
            <th>Username</th>
            <th>Password</th>
            <th>Email</th>
            <th>Phone</th>
            <th>Name</th>
        </tr>
    </thead>
    <tbody>
        @foreach ($users as $user)
        <tr>
            <td>{{ $user->username }}</td>
            <td>{{ $user->password }}</td>
            <td>{{ $user->email }}</td>
            <td>{{ $user->phone }}</td>
            <td>{{ $user->name }}</td>
            <td>{{ link_to_route('users.edit', 'Edit', array($user->id), array('class' => 'btn btn-info')) }}</td>
            <td>
                {{ Form::open(array('method' => 'DELETE', 'route' => array('users.destroy', $user->id))) }}
                {{ Form::submit('Delete', array('class' => 'btn btn-danger')) }}
                {{ Form::close() }}
            </td>
        </tr>
        @endforeach
    </tbody>
</table>
@else
There are no users
@endif
@stop

Let's go through the code line by line. In the first line we are extending the user layout via the Blade template syntax @extends. What actually happens here is that Laravel will load the layout file at /app/views/layouts/user.blade.php first.
Here is our user.blade.php file's code:

<!doctype html>
<html>
<head>
    <meta charset="utf-8">
    <link href="//netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-combined.min.css" rel="stylesheet">
    <style>
        table form { margin-bottom: 0; }
        form ul { margin-left: 0; list-style: none; }
        .error { color: red; font-style: italic; }
        body { padding-top: 20px; }
    </style>
</head>
<body>
    <div class="container">
        @if (Session::has('message'))
        <div class="flash alert">
            <p>{{ Session::get('message') }}</p>
        </div>
        @endif
        @yield('main')
    </div>
</body>
</html>

In this file we load the Twitter Bootstrap framework for styling our page, and via @yield('main') we can load the main section from the view being rendered. So when we visit http://localhost/laravel/public/users, Laravel will first load the user.blade.php layout view, and then the main section will be loaded from index.blade.php.

Going back to our index.blade.php, we have the main section defined with @section('main'), which Laravel uses to load it into our layout file. This section is merged into the layout file where we have placed the @yield('main') directive. We use Laravel's link_to_route method to link to our route, that is, /users/create. This helper generates an HTML link with the correct URL. In the next step, we loop through all the user records and display them in a simple tabular format. Now, if you have followed everything, you will be greeted by the following screen:
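The verb-to-action mapping that Route::resource sets up is a general RESTful convention rather than something Laravel-specific. The following simplified, hypothetical Python sketch (not Laravel's implementation; it ignores the /create and /edit form routes) shows the dispatch idea behind the resource table shown earlier:

```python
# Hypothetical sketch of RESTful dispatch: an (HTTP verb, URL shape) pair
# selects a controller action, as Route::resource does in Laravel.
ROUTES = {
    ("GET", "collection"): "index",      # GET /users
    ("POST", "collection"): "store",     # POST /users
    ("GET", "member"): "show",           # GET /users/{id}
    ("PUT", "member"): "update",         # PUT /users/{id}
    ("DELETE", "member"): "destroy",     # DELETE /users/{id}
}

def resolve(verb, path):
    # /users -> collection URL, /users/5 -> member URL
    shape = "member" if len(path.strip("/").split("/")) > 1 else "collection"
    return ROUTES.get((verb, shape))

print(resolve("GET", "/users"))       # index
print(resolve("DELETE", "/users/3"))  # destroy
```

A real router also matches the controller segment and binds parameters, but the verb-plus-shape lookup is the core of what a single resource route buys you.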
Code Editing

Packt
18 Dec 2013
9 min read
(For more resources related to this topic, see here.)

Discovering Search and Replace

Search and Replace is one of the most common actions we use in every editor. Sublime Text has two main search features:

Single file
Multiple files

Before covering these topics, let's talk about the best tool available for searching text, and especially patterns, namely Regular Expressions.

Regular Expressions

Regular Expressions can find complex patterns in text. To take full advantage of the Search and Replace features of Sublime, you should know at least the basics of Regular Expressions, also known as regex or regexp. Regular Expressions can be really annoying, painful, and joyful at the same time! We won't cover Regular Expressions in this article because it's an endless topic. We will only note that Sublime Text uses Boost's Perl syntax for Regular Expressions; this can be found at http://www.boost.org/doc/libs/1_47_0/libs/regex/doc/html/boost_regex/syntax/perl_syntax.html

I recommend going to http://www.regular-expressions.info/quickstart.html if you are not familiar with Regular Expressions.

Search and Replace – a single file

Let's open the Search panel by pressing Ctrl + F on Windows and Linux or command + F on OS X. The search panel options can be controlled using keyboard shortcuts:

Search panel option          Windows/Linux    OS X
Toggle Regular Expressions   Alt + R          command + Option + R
Toggle Case Sensitivity      Alt + C          command + Option + C
Toggle Exact Match           Alt + W          command + Option + W
Find Next                    Enter            Enter
Find Previous                Shift + Enter    Shift + Enter
Find All                     Alt + Enter      Option + Enter

As we can see in the following screenshot, we have the Regular Expression option turned on:

Let's try Search and Replace now by pressing Ctrl + H on Windows and Linux or Option + command + F on OS X and examining the following screenshot:

We can see that this time both the Regular Expression option and the Case Sensitivity option are turned on.
Because the Case Sensitivity option is on, line 8 isn't selected; the pattern messages/(\d) doesn't match line 2 because \d only matches digits, and the \1 in the Replace With field will be replaced by match group number 1, indicated by the parentheses around \d. We can also refer to the group by using $1 instead of \1. Let's see what happens after we press Ctrl + Alt + Enter for Replace All:

We can see that lines 2 and 8 still say messages and not message; that's exactly what we expected!

The incremental search

Incremental search is another cool feature that is here to save us keystrokes. We can bring up the incremental search panel by pressing Ctrl + I on Windows and Linux or command + I on OS X. The only difference between an incremental search and a regular search is the behavior of the Enter key; in incremental searches, the Enter key selects the next match and dismisses the search panel. This saves us from pressing Esc to dismiss the regular search panel.

Search and Replace – multiple files

Sublime Text also allows searching across multiple files by pressing Ctrl + Shift + F, or command + Shift + F on OS X. The same shortcuts from the single file search also apply here; the difference is that we have a Where field with a … button near it. The Where field determines where the files will be searched. We can define the scope of the search in several ways:

Adding individual directories (Unix-style paths, even on Windows)
Adding/excluding files based on a wildcard pattern
Adding Sublime symbolic locations such as <open folders> and <open files>

We can also combine all the filters by separating them with commas, in the following manner:

/C/Users/Dan/Cool Project,*.rb,<open files>

This will look in all files under C:\Users\Dan\Cool Project that end with .rb and are currently open in Sublime.
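The wildcard patterns accepted by the Where field behave like ordinary glob patterns. As a rough illustration of that kind of filtering (using Python's fnmatch module, not Sublime's implementation):

```python
from fnmatch import fnmatch

# Sketch of wildcard-style file filtering, similar in spirit to the
# *.rb pattern used in the Where field above.
files = ["app.rb", "docs/readme.md", "lib/util.rb", "notes.txt"]
ruby_files = [f for f in files if fnmatch(f, "*.rb")]
print(ruby_files)  # ['app.rb', 'lib/util.rb']
```

A glob pattern like *.rb simply keeps every path whose name matches the wildcard, which is exactly how the Where field narrows a multi-file search.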
Results will be opened in a new tab called Find Results, containing all found results separated by file paths; double-clicking on a result will take you to the exact location of that result in the original file.

Mastering Column and Multiple Selection

Multiple Selection is one of Sublime's coolest features; TextMate users might be familiar with it. So how can we select multiple lines? We select one line like we usually do, then select the second line while holding Ctrl, or command on OS X. We can also subtract a line from the selection by holding the Alt key, or command + Shift keys on OS X. This feature is really useful, so it is recommended to play with it. The following shortcuts can help us feel more comfortable with multiple selections:

Multiple Selection action                           Windows/Linux     OS X
Return to Single Selection mode                     Esc               Esc
Undo last selection motion                          Ctrl + U          command + U
Add next occurrence of selected text to selection   Ctrl + D          command + D
Add all occurrences of selected text to selection   Alt + F3          Control + command + G
Turn Single Linear Selection into Block Selection   Ctrl + Shift + L  Shift + command + L

Column Selection

The Column Selection feature is one of my favorites! We can select multiple lines by pressing Shift and dragging the right mouse button on Windows or Linux, or pressing Option and dragging the left mouse button on OS X. Here we want to remove the letter s from messages, as shown in the following screenshot:

We have selected every s using Column Selection; now we just need to hit backspace to delete them.

Navigating through everything

Sublime is known for its ability to quickly move between and around files and lines. Here, we are going to master how to navigate our code quickly and easily.

Going To Anything

We already learned how to use the Go To Anything feature, but it can do more than just search for filenames. We can conduct a fuzzy search inside a "fuzzily found" file. Really? Yeah, we can.
For example, we can type the following inside the Go To Anything window:

isl#wld

This will make Sublime perform a fuzzy search for wld inside the file that we found by fuzzy searching isl; it can thus find the word world inside a file named island. We can also perform a fuzzy search in the current file by pressing Ctrl + ; on Windows or Linux and command + P, # on OS X. It is very common to use fuzzy search inside HTML files because it will immediately show all the elements and classes, which accelerates navigation.

Symbol search

Sometimes we want to search for a specific function or a specific class inside the current file. With Sublime we can do it simply by pressing Ctrl + R on Windows or Linux and command + R on OS X.

Projects

A project is a group of files and folders. To save a project we just need to add folders and files to the sidebar, and then, from the menu, navigate to Project | Save Project As…. The saved file holds our project's data; it is stored in a JSON-formatted file with a .sublime-project extension. The following is a sample project file:

{
    "folders":
    [
        {
            "path": "src",
            "follow_symlinks": true
        },
        {
            "path": "docs",
            "name": "Documentation",
            "file_exclude_patterns": ["*.xml"]
        }
    ],
    "settings":
    {
        "tab_size": 6
    },
    "build_systems":
    [
        {
            "name": "List",
            "shell_cmd": "ls -l"
        }
    ]
}

As we can see in the preceding code, there are three elements written as JSON arrays.

Folders

Each folder must have a valid folder path that can be absolute or relative to the project directory, which is where the project file is.
A folder can also include the following keys:

name: This is the name that will be shown on the sidebar
file_exclude_patterns: This excludes all the files matching the given wildcard patterns
file_include_patterns: This includes only the files matching the given wildcard patterns
folder_exclude_patterns: This excludes all the subfolders matching the given wildcard patterns
folder_include_patterns: This includes only the subfolders matching the given wildcard patterns
follow_symlinks: This will include symlinks if set to true

Settings

The project-specific settings array contains all the settings that we want to apply only to this project. These settings override our user settings.

Build systems

In an array of build system definitions, we must specify a name for each definition; these build systems will then be listed under Tools | Build Systems. For more information about build systems, please visit http://sublimetext.info/docs/en/reference/build_systems.html.

Navigating between projects

To switch between projects quickly, we can press Ctrl + Alt + P on Windows or Linux and Control + command + P on OS X.

Summary

By now, we have covered everything from Sublime's basic features to the more advanced features and techniques used while editing code.

Resources for Article:

Further resources on this subject:

Top features you need to know about [Article]
Setting up environment for Cucumber BDD Rails [Article]
Implementation of SASS [Article]

Working with ASP.NET Web API

Packt
13 Dec 2013
5 min read
(For more resources related to this topic, see here.)

The ASP.NET Web API is a framework that you can use to build web services that use HTTP as the protocol. You can use the ASP.NET Web API to return data based on what the client requests; that is, you can return JSON or XML as the format of the data.

Layers of an application

The ASP.NET Framework runs on top of the managed environment of the .NET Framework. The Model-View-Controller (MVC) architectural pattern is used to separate the concerns of an application to facilitate testing, ease the maintenance of the application's code, and provide better support for change. The model represents the application's data and the business objects; the view is the presentation layer component; and the controller binds the model and the view together. The following figure illustrates the components of the MVC architecture:

The MVC Architecture

The ASP.NET Web API architecture

The ASP.NET Web API is a lightweight web-based architecture that uses HTTP as the application protocol. Routing in the ASP.NET Web API works a bit differently compared to the way it works in ASP.NET MVC. The basic difference between routing in MVC and routing in the Web API is that the Web API uses the HTTP method, and not the URI path, to select the action. The Web API framework uses a routing table to determine which action is to be invoked for a particular request. You need to specify the routing parameters in the WebApiConfig.cs file that resides in the App_Start directory.
Here's an example that shows how routing is configured:

routes.MapHttpRoute(
    name: "Packt API Default",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

The following code snippet illustrates how routing is configured by action names:

routes.MapHttpRoute(
    name: "PacktActionApi",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

The ASP.NET Web API generates structured data such as JSON and XML as responses. It can route incoming requests to actions based on HTTP verbs, not only action names. Also, the ASP.NET Web API can be hosted outside of the ASP.NET runtime environment and the IIS Web Server context.

Routing in ASP.NET Web API

Routing in the ASP.NET Web API is very much the same as in ASP.NET MVC. The ASP.NET Web API routes URLs to a controller; control is then handed over to the action that corresponds to the HTTP verb of the request message. Note that the default route template for an ASP.NET Web API project is {controller}/{id}, where the {id} parameter is optional. Also, ASP.NET Web API route templates may optionally include an {action} parameter. It should be noted that, unlike ASP.NET MVC, URLs in the ASP.NET Web API cannot contain complex types. Complex types must be present in the HTTP message body, and there can be one, and only one, complex type in the HTTP message body.

Note that ASP.NET MVC and the ASP.NET Web API are two distinctly separate frameworks which adhere to some common architectural patterns.

In the ASP.NET Web API framework, the controller handles all HTTP requests. The controller comprises a collection of action methods; when a request comes in to the Web API framework, the request is routed to the appropriate action. The framework uses a routing table to determine the action method to be invoked when a request is received.
Here is an example:

routes.MapHttpRoute(
    name: "Packt Web API",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

Refer to the following UserController class:

public class UserController<UserAuthentication> : BaseApiController<UserAuthentication>
{
    public void GetAllUsers() { }
    public IEnumerable<User> GetUserById(int id) { }
    public HttpResponseMessage DeleteUser(int id) { }
}

The following table illustrates the HTTP methods and the corresponding URIs, actions, and parameters:

HTTP Method    URI            Action         Parameter
GET            api/users      GetAllUsers    None
GET            api/users/1    GetUserById    1
POST           api/users
DELETE         api/users/3    DeleteUser     3

The Web API framework matches the segments in the URI path to the route template. The following steps are performed:

The URI is matched to a route template.
The respective controller is selected.
The respective action is selected.

The IHttpControllerSelector.SelectController method selects the controller; it takes an HttpRequestMessage instance and returns an HttpControllerDescriptor. After the controller has been selected, the Web API framework selects the action by invoking the IHttpActionSelector.SelectAction method. This method in turn accepts an HttpControllerContext and returns an HttpActionDescriptor. You can also explicitly specify the HTTP method for an action by decorating the action method with the HttpGet, HttpPut, HttpPost, or HttpDelete attributes. Here is an example:

public class UsersController : ApiController
{
    [HttpGet]
    public User FindUser(int id) { }
}

You can also use the AcceptVerbs attribute to enable HTTP methods other than GET, PUT, POST, and DELETE. Here is an example:

public class UsersController : ApiController
{
    [AcceptVerbs("GET", "HEAD")]
    public User FindUser(int id) { }
}

You can also define a route by action name.
Here is an example:

routes.MapHttpRoute(
    name: "PacktActionApi",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

You can also override the action name by using the ActionName attribute. The following code snippet illustrates two actions: one that supports GET and the other that supports POST:

public class UsersController : ApiController
{
    [HttpGet]
    [ActionName("Token")]
    public HttpResponseMessage GetToken(int userId);

    [HttpPost]
    [ActionName("Token")]
    public void AddNewToken(int userId);
}
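The route template idea ("api/{controller}/{id}" with an optional id segment) is easy to see outside the framework. Here is a simplified, framework-agnostic Python sketch (an illustration of the matching principle, not the Web API implementation):

```python
# Hypothetical sketch of Web-API-style route matching: URI segments are
# matched one-to-one against template segments; {placeholders} capture
# values, and a trailing optional placeholder such as {id} may be omitted.
def match_route(template, path, optional=("id",)):
    t_parts = template.strip("/").split("/")
    p_parts = path.strip("/").split("/")
    # Allow the path to be one segment short when the last placeholder is optional.
    if len(p_parts) == len(t_parts) - 1 and t_parts[-1].strip("{}") in optional:
        t_parts = t_parts[:-1]
    if len(p_parts) != len(t_parts):
        return None
    values = {}
    for t, p in zip(t_parts, p_parts):
        if t.startswith("{") and t.endswith("}"):
            values[t.strip("{}")] = p  # capture the placeholder value
        elif t != p:
            return None  # literal segment mismatch
    return values

print(match_route("api/{controller}/{id}", "api/users/1"))
# {'controller': 'users', 'id': '1'}
print(match_route("api/{controller}/{id}", "api/users"))
# {'controller': 'users'}
```

In the real framework, the captured controller value then drives IHttpControllerSelector, and the HTTP verb (rather than the path) selects the action.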

Building Queries

Packt
12 Dec 2013
10 min read
(For more resources related to this topic, see here.)

Understanding DQL

DQL is the acronym for Doctrine Query Language. It's a domain-specific language that is very similar to SQL, but it is not SQL. Instead of querying the database tables and rows, DQL is designed to query the object model's entities and mapped properties. DQL is inspired by and similar to HQL, the query language of Hibernate, a popular ORM for Java. For more details you can visit this website: http://www.hibernate.org/.

Learn more about domain-specific languages at: http://en.wikipedia.org/wiki/Domain-specific_language

To better understand what this means, let's run our first DQL query. Doctrine's command-line tools are as versatile as a Swiss Army knife. They include a command called orm:run-dql that runs a DQL query and displays its result. Use it to retrieve the title and all the comments of the post with 1 as its identifier:

php vendor/bin/doctrine.php orm:run-dql "SELECT p.title, c.body FROM Blog\Entity\Post p JOIN p.comments c WHERE p.id = 1"

It looks like a SQL query, but it's definitely not a SQL query. Examine the FROM and the JOIN clauses; they contain the following aspects:

A fully qualified entity class name is used in the FROM clause as the root of the query
All the Comment entities associated with the selected Post entities are joined, thanks to the presence of the comments property of the Post entity class in the JOIN clause

As you can see, data from the entities associated with the main entity can be requested in an object-oriented way. Properties holding the associations (on the owning or the inverse side) can be used in the JOIN clause. Despite some limitations (especially in the field of subqueries), DQL is a powerful and flexible language for retrieving object graphs. Internally, Doctrine parses the DQL queries, generates and executes the corresponding SQL queries through the Database Abstraction Layer (DBAL), and hydrates the data structures with the results.
Until now, we have only used Doctrine to retrieve PHP objects. Doctrine is able to hydrate other types of data structures, especially arrays and basic types. It's also possible to write custom hydrators to populate any data structure. If you look closely at the return of the previous call of orm:run-dql, you'll see that it's an array, and not an object graph, that has been hydrated.

As with all the topics covered in this book, more information about built-in hydration modes and custom hydrators is available in the Doctrine documentation on the following website: http://docs.doctrine-project.org/en/latest/reference/dql-doctrine-query-language.html#hydration-modes

Using the entity repositories

Entity repositories are classes responsible for accessing and managing entities. Just as entities are related to database rows, entity repositories are related to database tables. All the DQL queries should be written in the entity repository related to the entity type they retrieve. This hides the ORM from the other components of the application and makes it easier to reuse, refactor, and optimize the queries.

Doctrine entity repositories are an implementation of the Table Data Gateway design pattern. For more details, visit the following website: http://martinfowler.com/eaaCatalog/tableDataGateway.html

A base repository, available for every entity, provides useful methods for managing entities, in the following manner:

find($id): This returns the entity with $id as an identifier, or null if there is none. It is used internally by the find() method of the Entity Manager.
findAll(): This retrieves an array that contains all the entities in this repository
findBy(['property1' => 'value', 'property2' => 1], ['property3' => 'DESC', 'property4' => 'ASC']): This retrieves an array that contains the entities matching all the criteria passed in the first parameter, ordered by the second parameter
findOneBy(['property1' => 'value', 'property2' => 1]): This is similar to findBy(), but retrieves only the first matching entity, or null if no entity matches the criteria

Entity repositories also provide shortcut methods that allow a single property to filter entities. They follow the patterns findBy*() and findOneBy*(). For instance, calling findByTitle('My title') is equivalent to calling findBy(['title' => 'My title']).

This feature uses the magic __call() PHP method. For more details visit the following website: http://php.net/manual/en/language.oop5.overloading.php#object.call

In our blog app, we want to display comments in the detailed post view, but it is not necessary to fetch them for the list of posts. Eager loading through the fetch attribute is not a good choice for the list, and lazy loading slows down the detailed view. A solution to this is to create a custom repository with extra methods for executing our own queries. We will write a custom method that retrieves the comments along with the post for the detailed view.

Creating custom entity repositories

Custom entity repositories are classes extending the base entity repository class provided by Doctrine. They are designed to hold custom methods that run DQL queries. As usual, we will use the mapping information to tell Doctrine to use a custom repository class. This is the role of the repositoryClass attribute of the @Entity annotation.
Kindly perform the following steps to create a custom entity repository:

Reopen the Post.php file at the src/Blog/Entity/ location and add a repositoryClass attribute to the existing @Entity annotation, like the following line of code:

@Entity(repositoryClass="PostRepository")

Doctrine's command-line tools also provide an entity repository generator. Type the following command to use it:

php vendor/bin/doctrine.php orm:generate:repositories src/

Open the new, empty custom repository that we just generated in the PostRepository.php file at the src/Blog/Entity/ location. Add the following method for retrieving the posts and comments:

/**
 * Finds a post with its comments
 *
 * @param int $id
 * @return Post
 */
public function findWithComments($id)
{
    return $this
        ->createQueryBuilder('p')
        ->addSelect('c')
        ->leftJoin('p.comments', 'c')
        ->where('p.id = :id')
        ->orderBy('c.publicationDate', 'ASC')
        ->setParameter('id', $id)
        ->getQuery()
        ->getOneOrNullResult()
    ;
}

Our custom repository extends the default entity repository provided by Doctrine. The standard methods, described earlier in the article, are still available.

Getting started with Query Builder

QueryBuilder is an object designed to help build DQL queries through a PHP API with a fluent interface. It allows us to retrieve the generated DQL queries through the getDql() method (useful for debugging), or to directly use the Query object (provided by Doctrine). To increase performance, QueryBuilder caches the generated DQL queries and manages an internal state.

The full API and the states of the DQL query are documented on the following website: http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/query-builder.html

Let's take an in-depth look at the findWithComments() method that we created in the PostRepository class. Firstly, a QueryBuilder instance is created with the createQueryBuilder() method inherited from the base entity repository.
The QueryBuilder instance takes a string as a parameter. This string will be used as an alias of the main entity class. By default, all the fields of the main entity class are selected and no clauses other than SELECT and FROM are populated.

The leftJoin() call creates a JOIN clause that retrieves comments associated with the posts. Its first argument is the property to join and its second is the alias; these will be used in the query for the joined entity class (here, the letter c will be used as an alias for the Comment class).

Unlike the SQL JOIN clause, the DQL query automatically fetches the entities associated with the main entity. There is no need for keywords like ON or USING. Doctrine automatically knows whether a join table or a foreign-key column must be used.

The addSelect() call appends comment data to the SELECT clause. The alias of the entity class is used to retrieve all the fields (this is similar to the * operator in SQL). As in the first DQL query of this article, specific fields can be retrieved with the notation alias.propertyName.

You guessed it, the call to the where() method sets the WHERE part of the query. Under the hood, Doctrine uses prepared SQL statements. They are more efficient than standard SQL queries. The id parameter will be populated by the value set by the call to setParameter(). Thanks again to prepared statements and this setParameter() method, SQL injection attacks are automatically avoided.

SQL injection attacks are a way to execute malicious SQL queries using user inputs that have not been escaped. Let's take the following example of a bad DQL query that checks whether a user has a specific role:

$query = $entityManager->createQuery('SELECT ur FROM UserRole ur WHERE ur.username = "' . $username . '" AND ur.role = "' . $role . '"');
$hasRole = count($query->getResult());

This DQL query will be translated into SQL by Doctrine.
If someone types the following username: " OR "a"="a the SQL code contained in the string will be injected and the query will always return some results. The attacker has now gained access to a private area.

The proper way is to use the following code:

$query = $entityManager->createQuery("SELECT ur FROM UserRole ur WHERE ur.username = :username AND ur.role = :role");
$query->setParameters([
    'username' => $username,
    'role' => $role
]);
$hasRole = count($query->getResult());

Thanks to prepared statements, special characters (like quotes) contained in the username are not dangerous, and this snippet will work as expected.

The orderBy() call generates an ORDER BY clause that orders the results by the publication date of the comments, oldest first. Most SQL instructions also have an object-oriented equivalent in DQL. The most common join types can be made using DQL; they generally have the same name.

The getQuery() call tells the Query Builder to generate the DQL query (if needed, it will get the query from its cache if possible), to instantiate a Doctrine Query object, and to populate it with the generated DQL query. This generated DQL query will be as follows:

SELECT p, c FROM Blog\Entity\Post p LEFT JOIN p.comments c WHERE p.id = :id ORDER BY c.publicationDate ASC

The Query object exposes another useful method for the purpose of debugging: getSql(). As its name implies, getSql() returns the SQL query corresponding to the DQL query, which Doctrine will run on the DBMS. For our DQL query, the underlying SQL query is as follows:

SELECT p0_.id AS id0, p0_.title AS title1, p0_.body AS body2, p0_.publicationDate AS publicationDate3, c1_.id AS id4, c1_.body AS body5, c1_.publicationDate AS publicationDate6, c1_.post_id AS post_id7 FROM Post p0_ LEFT JOIN Comment c1_ ON p0_.id = c1_.post_id WHERE p0_.id = ?
ORDER BY c1_.publicationDate ASC

The getOneOrNullResult() method executes the query, retrieves the first result, and returns it as a Post entity instance (this method returns null if no result is found). Like the QueryBuilder object, the Query object manages an internal state to generate the underlying SQL query only when necessary.

Performance is something to be very careful about while using Doctrine. When set in production mode, the ORM is able to cache the generated queries (DQL through the QueryBuilder objects, SQL through the Query objects) and the results of the queries. The ORM must be configured to use one of the blazing fast supported systems (APC, Memcache, XCache, or Redis), as shown on the following website:

http://docs.doctrine-project.org/en/latest/reference/caching.html

We still need to update the view layer to take care of our new findWithComments() method. Open the view-post.php file at the web/ location, where you will find the following code snippet:

$post = $entityManager->getRepository('Blog\Entity\Post')->find($_GET['id']);

Replace the preceding line of code with the following code snippet:

$post = $entityManager->getRepository('Blog\Entity\Post')->findWithComments($_GET['id']);
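The two ideas explained above, a fluent builder that accumulates clauses by returning $this, and prepared statements that keep user input out of the SQL text, can be sketched together in plain PHP. The following ToyQueryBuilder class is a hypothetical illustration (it is not Doctrine's QueryBuilder) that runs against an in-memory SQLite database through PDO, assuming the pdo_sqlite extension is available:

```php
<?php

// Toy fluent builder: each method returns $this so calls can be chained,
// and user-supplied values only ever travel as bound parameters.
class ToyQueryBuilder
{
    private $pdo;
    private $table;
    private $wheres = [];
    private $params = [];

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function from($table)
    {
        $this->table = $table;
        return $this;
    }

    public function where($condition)
    {
        $this->wheres[] = $condition;
        return $this;
    }

    public function setParameter($name, $value)
    {
        $this->params[$name] = $value;
        return $this;
    }

    public function getSql()
    {
        return 'SELECT * FROM ' . $this->table
            . ' WHERE ' . implode(' AND ', $this->wheres);
    }

    public function getResult()
    {
        // Prepared statement: placeholders are bound, never concatenated
        $stmt = $this->pdo->prepare($this->getSql());
        $stmt->execute($this->params);
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }
}

$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE user_role (username TEXT, role TEXT)');
$pdo->exec("INSERT INTO user_role VALUES ('alice', 'admin')");

// The classic injection payload is harmless here: it is compared
// literally against the username column, not spliced into the SQL.
$malicious = '" OR "a"="a';
$rows = (new ToyQueryBuilder($pdo))
    ->from('user_role')
    ->where('username = :username')
    ->where('role = :role')
    ->setParameter('username', $malicious)
    ->setParameter('role', 'admin')
    ->getResult();
```

With the string concatenation approach shown earlier, the same payload would rewrite the WHERE clause and always match; with binding, $rows stays empty because no user is literally named " OR "a"="a.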
Logging Capabilities

Packt
10 Dec 2013
6 min read
(For more resources related to this topic, see here.)

Posting messages to the log

TestComplete allows posting various types of messages to the log: ordinary messages, warnings, errors, and so on. In this section, we will consider examples of how to use these messages.

Getting ready

Create a file with the name somefile.txt in the root directory of C:.

How to do it...

In order to see examples of all the message types in the log, the following steps should be performed:

Create and launch the following function:

function testMessages()
{
  Log.Event("An event", "Event additional Info");
  Log.Message("A message", "Message additional Info");
  Log.Warning("A warning", "Warning additional Info");
  Log.Error("An error", "Error additional Info");
  Log.File("C:\\somefile.txt", "A file posted to the log");
  Log.Link("C:\\somefile.txt", "A link to a file");
  Log.Link("http://smartbear.com/", "HTTP link");
  Log.Link("ftp://smartbear.com/", "FTP link");
}

As a result, we will get the log shown in the following screenshot.

How it works...

In the given example, we have used four different types of messages. They are as follows:

Log.Event: This message is an event, which occurs when TestComplete interacts with a tested application. Usually, messages of this type are placed into the log at the point of text input or mouse clicks; however, we can also place custom-made events into the log.

Log.Message: This message is an ordinary message that is usually used for informing the user about the current actions being executed by the script (usually of a higher level than that of the events; for example, creation of a user, searching for a record, and so on).

Log.Warning: This message is a non-critical error. It is used in case the results of a check are different from those expected, but execution of the script can still carry on.
Log.Error: This message is a critical error, used when the error is serious enough to make any further execution of the test futile.

These four types of messages accept several parameters. The first of them is the string that we observe in the log itself; the second one contains additional information, which can be seen in the Additional Info tab if the message is clicked on. The second parameter is optional and can be omitted, as can all the other parameters.

There are two more types of messages:

Log.File: This message copies the assigned file into the folder with the log, and places a reference-pointer to it. Meanwhile, TestComplete renames the file to avoid naming conflicts, leaving only the original extension intact.

Log.Link: This message places a link to a web page or a file, without making a copy of the file itself in the folder with the log. On clicking on the link, the file will be opened with the help of the associated program, or the link will be opened in the browser.

These two types of messages accept the link as the first parameter, then the message parameter, and then the one pertaining to the additional information (as with the previous four). Only the first parameter is mandatory.

Posting screenshots to the log

Sometimes, it is necessary to place an image into the log; often, it may be a window screenshot, an image of a control, or even one of the whole screen. To this end, we use the Log.Picture method. In this section, we will consider different ways to place an image into the log.

How to do it...
The following steps should be performed to place an image into the log:

First of all, we will create two image objects: one for the active window and one for the whole screen:

var picWindow = Sys.Desktop.ActiveWindow().Picture();
var picDesktop = Sys.Desktop.Picture();

The image of the active window, now stored in the picWindow variable, will be placed into the log unchanged:

Log.Picture(picWindow, "Active window");

The image of the desktop is reduced to half its width and height (a quarter of its area) via the Stretch method, and then saved to a file with the help of the SaveToFile method:

picDesktop.Stretch(picDesktop.Size.Width/2, picDesktop.Size.Height/2);
picDesktop.SaveToFile("c:\\desktop.png");

Now we create a new variable of the Picture type, load the image into it from the earlier saved file, and then place it into the log:

var pic = Utils.Picture;
pic.LoadFromFile("c:\\desktop.png");
Log.Picture(pic, "Resized Desktop");

As a result of the function's execution, the log will contain two images: that of the window active at the moment of test execution, and that of the reduced desktop copy.

How it works...

The Log.Picture method has one mandatory parameter, that is, the image itself; the other parameters are optional.

An image of any of the onscreen objects (a window, a single control, the desktop) can be obtained via the Picture method. In our example, with the help of this method, we get the image of the desktop and that of the active window. Instead of the active window, we could use any variable that corresponds to a window or a control.

Any image can be saved to disk with the help of the SaveToFile method. The format of the saved image is determined by its extension (in our case, PNG).
If it's necessary to obtain a variable containing an image from a file, we first create an empty placeholder variable with the help of the Utils.Picture property, and then upload the image into it with the help of the LoadFromFile method. From then on, this image can be handled like any other received with the help of the Picture method.

Large images can be reduced with the help of the Stretch method. The Stretch method takes two parameters: the new width and height of the image. With the help of the Size.Width and Size.Height properties, we can scale the image in relation to its original size, without setting the dimensions explicitly.

There's more...

With the help of the Picture method, we can obtain not only the image of a whole window or control, but also just a part of it. For example, the following code gets an image of the top-left square of the desktop with a size of 50 x 50 pixels:

var picDesktop = Sys.Desktop.Picture(0, 0, 50, 50);

The values of the parameters are as follows: the coordinates of the top-left corner, followed by the width and height.

There is one important project setting that allows automatic posting of images in case of an error. To enable this option, right-click on the project name, navigate to Edit | Properties, click on the Playback item in the list of options, and enable the Post image on error checkbox.

Apart from changing the dimensions of an image, TestComplete allows for the execution of several quite complicated imaging manipulations, for example, the comparison of two images (the Compare method) and searching for one image inside another (the Find method). Click on the following link to learn more about these possibilities:

http://support.smartbear.com/viewarticle/32131/
CodeIgniter MVC – The Power of Simplicity!

Packt
26 Nov 2013
6 min read
(For more resources related to this topic, see here.)

"Simplicity Wins Big!"

Back in the 80s there was a programming language, Ada, that many contracts required to be used. Ada was complex and, compared to C/C++, hard to maintain. Today Ada fades away, like Pascal, while C/C++ is the simplicity winner in the real-time systems arena.

In telecom, there were two competing standards for network device management protocols in the 90s: CMIP (Common Management Information Protocol) and SNMP (Simple Network Management Protocol). Initially, all telecom requirement papers demanded CMIP support. After several years, research found that developing and maintaining a CMIP-based system takes roughly ten times the effort of an equivalent SNMP-based one. SNMP is the simplicity winner in the network management systems arena!

In VoIP, or media over IP, H.323 and SIP (Session Initiation Protocol) were competing protocols in the early 2000s. H.323 encoded its messages in a cryptic binary way. SIP made everything textual, easy to understand in a text editor. Today almost all endpoint devices are powered by SIP, while H.323 has become a niche protocol for the VoIP backbone. SIP is the simplicity winner in the VoIP arena!

Back in 2010, I was looking for a good PHP platform on which to develop the web application for my startup's first product, Logodial Zappix (http://zappix.com). I got a recommendation to use Drupal for this. I tried the platform and found it very heavy to bend to the exact user interaction flow and experience I had in mind. Many times I had to compromise, and the overhead of the platform was indeed horrible. Make a Hello World app, and tons of irrelevant code get into the project. Try to write free-form JavaScript, and you find yourself struggling with the platform, which blocks the creativity of client-side JavaScript and its add-ons. I decided to look for a better platform for my needs. Later on, I heard about the Zend Framework (an MVC, that is, Model-View-Controller, typed framework).
I tried to work with it, as it is MVC based with a lot of OOP usage, but I found it heavy... The documentation seemed great at first sight, but the more I used it, looking for vivid examples and explanations, the more I found myself in endless closed loops of links, lacking clear explanations and vivid examples. The feeling was that for every matchbox-moving task, I needed a semi-trailer of declarations and calls to get it done... though I greatly liked that it was MVC typed.

Continuing my search, I was looking for a simple but powerful MVC-based PHP framework, PHP being my favorite language for the server side. One day in early 2011, I got a note from a friend that there was a light and cool platform named CodeIgniter (CI in brief). I checked the documentation link http://ellislab.com/codeigniter/user-guide/ and was amazed by the very clean, simple, well-organized, and well-explained browsing experience. Examples? Yes, lots of clear examples, with a great community. It was so great and simple. I felt like the platform designers had made their best effort to produce the simplest and most vivid code, reusable and in a clean OOP fashion, from the infrastructure down to the last function. I tried making a web app as a trial, loading helpers and libraries and using them, and greatly loved the experience.

Fast forward: today I see a matured CodeIgniter as a Lego-like playground that I know well. I've written tons of models, helpers, libraries, controllers, and views. CodeIgniter's simplicity enables me to do things fast, clear, well maintained, and expandable. Over time I've gathered the most useful helpers and libraries, Ajax server-side and browser-side solutions for reuse, and good links to useful add-ons, such as the free grid plugin for CI at http://www.grocerycrud.com/, which keeps improving day by day. Today I see CodeIgniter as a matured, scalable (see the AT&T and Sprint call center web apps based on CI) champion of reusability and simplicity.
The following is the high-level architecture of the CodeIgniter MVC, with the controller(s) as the hub of the application session.

The main use cases of a CI controller are:

Handling requests from the web browser as HTTP URI calls, based on submitted parameters (for example, submitting a login with credentials) or with no parameters (for example, home page navigation).

Handling asynchronous Ajax requests from the web client, mostly as JSON HTTP POST requests and responses.

Serving CRON job requests that create HTTP URI requests calling controller methods, similar to browser navigation, silently from the CRON PHP module.

The main features of CI views:

Rendered by a controller, optionally with a set of parameters (scalars, arrays, objects).

Full open access to all the helpers, libraries, and models that their rendering controller has.

The freedom to integrate any JavaScript or third-party web client-side plugins.

The main features and fashion of CI helpers:

Flat sets of functions, protected from duplication risks.

Can be loaded for use by any controller and accessed by any rendered view.

Can access any CI resource or library via the &get_instance() service.

The main features and fashion of CI libraries:

OOP classes that can extend other third-party classes (for example, see the Google Maps wrapper in the new book).

Can access any of the CI resources of other libraries and built-in services via &get_instance().

Can be used by the CI project controllers and all their rendered views.

The main features and fashion of CI models:

Similar to libraries, but with access to the default database, which can be expanded to multiple databases, and to any other CI resource via &get_instance().

OOP classes that can extend other third-party classes (for example, see the Google Maps wrapper in the new book).
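The &get_instance() service mentioned for helpers, libraries, and models can be sketched in plain PHP. The following ToyController class and toy_helper_describe_db() function are hypothetical names, not CodeIgniter's real source; the sketch only illustrates the pattern of a framework keeping a reference to the single controller created for the request so that flat helper functions can reach it.

```php
<?php

// Sketch of CodeIgniter's get_instance() idea: the framework keeps a
// reference to the one controller object created for the request, and
// any helper or library can reach it (and everything loaded on it).
class ToyController
{
    private static $instance;

    public $db = 'default-database-connection'; // stand-in for a loaded resource

    public function __construct()
    {
        self::$instance = $this;
    }

    public static function &get_instance()
    {
        return self::$instance;
    }
}

// A flat helper function, as in a CI helper file: it has no $this,
// so it borrows the controller through get_instance().
function toy_helper_describe_db()
{
    $ci =& ToyController::get_instance();
    return 'helper sees: ' . $ci->db;
}

$controller = new ToyController();  // the framework does this once per request
$seen = toy_helper_describe_db();   // the helper reaches the same instance
```

Because the helper and the controller share one instance, anything the controller loads (database, libraries, models) is immediately visible everywhere, which is what makes CI helpers feel like part of the controller.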
It seems that CodeIgniter is continuously increasing in popularity, as it has a simple yet high-quality OOP core that enables great creativity, reusability, and code clarity through naming conventions that are easy to extend (a user class extends a CI class), while more and more third-party application plugins (packages of views, models, libraries, and/or helpers) appear. I have found CodeIgniter flexible, a great enabler of reusability, with a light infrastructure that unlocks developer creativity, powered by an active global community. For day-to-day work, CI offers code clarity, high performance, and a minimal, controllable footprint (you decide which helpers, libraries, and models to load for each controller). Above all, CI is blessed with a very fast learning curve for PHP developers and many blogs and community sites for sharing knowledge and raising and resolving issues. CodeIgniter is the simplicity winner I've found for server-side web app MVC.

Summary

This article introduced the CodeIgniter framework as a starting point for web-based applications.

Resources for Article:

Further resources on this subject:

Database Interaction with Codeigniter 1.7 [Article]

User Authentication with Codeigniter 1.7 using Facebook Connect [Article]

CodeIgniter 1.7 and Objects [Article]