How-To Tutorials


Encode your password with Spring Security 3

Packt
24 May 2010
6 min read
This article by Peter Mularien is an excerpt from the book Spring Security 3. In this article, we will:

- Examine different methods of configuring password encoding
- Understand the password salting technique of providing additional security to stored passwords

In any secured system, password security is a critical aspect of trust and authoritativeness of an authenticated principal. Designers of a fully secured system must ensure that passwords are stored in a way in which malicious users would have an impractically difficult time compromising them. The following general rules should be applied to passwords stored in a database:

- Passwords must not be stored in cleartext (plain text)
- Passwords supplied by the user must be compared to recorded passwords in the database
- A user's password should not be supplied to the user upon demand (even if the user forgets it)

For the purposes of most applications, the best fit for these requirements involves one-way encoding or encryption of passwords, as well as some type of randomization of the encrypted passwords. One-way encoding provides the security and uniqueness properties that are important to properly authenticate users, with the added bonus that once encrypted, the password cannot be decrypted.

In most secure application designs, it is neither required nor desirable to ever retrieve the user's actual password upon request, as providing the user's password to them without proper additional credentials could present a major security risk. Most applications instead provide the user the ability to reset their password, either by presenting additional credentials (such as their social security number, date of birth, tax ID, or other personal information), or through an email-based system.

Storing other types of sensitive information

Many of the guidelines that apply to passwords apply equally to other types of sensitive information, including social security numbers and credit card information (although, depending on the application, some of these may require the ability to decrypt). It's quite common for databases storing this type of information to represent it in multiple ways; for example, a customer's full 16-digit credit card number would be stored in a highly encrypted form, but the last four digits might be stored in cleartext (for reference, think of any internet commerce site that displays XXXX XXXX XXXX 1234 to help you identify your stored credit cards).

You may already be thinking ahead and wondering, given our (admittedly unrealistic) approach of using SQL to populate our HSQL database with users, how do we encode the passwords? HSQL, and most other databases for that matter, don't offer encryption methods as built-in database functions. Typically, the bootstrap process (populating a system with initial users and data) is handled through some combination of SQL loads and Java code. Depending on the complexity of your application, this process can get very complicated. For the JBCP Pets application, we'll retain the embedded-database declaration and the corresponding SQL, and then add a small bit of Java to fire after the initial load to encrypt all the passwords in the database.

For password encryption to work properly, the two actors involved must use password encryption in synchronization, ensuring that passwords are treated and validated consistently. Password encryption in Spring Security is encapsulated and defined by implementations of the o.s.s.authentication.encoding.PasswordEncoder interface.
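Before moving on to configuration, it helps to see the shape of that contract. The following minimal sketch exercises the two operations the interface defines, encoding a raw password and validating a presented one, using the ShaPasswordEncoder implementation discussed below (the sample password is illustrative, not from the book):

import org.springframework.security.authentication.encoding.PasswordEncoder;
import org.springframework.security.authentication.encoding.ShaPasswordEncoder;

public class PasswordEncoderDemo {
    public static void main(String[] args) {
        // SHA-1 by default; a strength such as 256 may be passed to the constructor
        PasswordEncoder encoder = new ShaPasswordEncoder();

        // At storage time: hash the raw password (null salt here; salting is covered later)
        String stored = encoder.encodePassword("secret", null);

        // At login time: re-encode the presented password and compare
        boolean valid = encoder.isPasswordValid(stored, "secret", null);
        System.out.println(valid); // prints: true
    }
}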
Simple configuration of a password encoder is possible through the <password-encoder> declaration within the <authentication-provider> element, as follows:

<authentication-manager alias="authenticationManager">
  <authentication-provider user-service-ref="jdbcUserService">
    <password-encoder hash="sha"/>
  </authentication-provider>
</authentication-manager>

You'll be happy to learn that Spring Security ships with a number of implementations of PasswordEncoder, which are applicable for different needs and security requirements. The implementation used can be specified using the hash attribute of the <password-encoder> declaration. The following list covers the out-of-the-box implementation classes, what each one does, and the hash value used to select it. Note that all implementations reside in the o.s.s.authentication.encoding package.

- PlaintextPasswordEncoder: encodes the password as plaintext. This is the default DaoAuthenticationProvider password encoder. hash value: plaintext
- Md4PasswordEncoder: a PasswordEncoder utilizing the MD4 hash algorithm. MD4 is not a secure algorithm; use of this encoder is not recommended. hash value: md4
- Md5PasswordEncoder: a PasswordEncoder utilizing the MD5 one-way encoding algorithm. hash value: md5
- ShaPasswordEncoder: a PasswordEncoder utilizing the SHA one-way encoding algorithm. This encoder can support configurable levels of encoding strength. hash values: sha, sha-256
- LdapShaPasswordEncoder: an implementation of the LDAP SHA and LDAP SSHA algorithms, used in integration with LDAP authentication stores. hash values: {sha}, {ssha}

As with many other areas of Spring Security, it's also possible to reference a bean definition implementing PasswordEncoder to provide more precise configuration and allow the PasswordEncoder to be wired into other beans through dependency injection. For JBCP Pets, we'll need to use this bean reference method in order to encode the bootstrapped user data. Let's walk through the process of configuring basic password encoding for the JBCP Pets application.

Configuring password encoding

Configuring basic password encoding involves two pieces: encrypting the passwords we load into the database after the SQL script executes, and ensuring that the DaoAuthenticationProvider is configured to work with a PasswordEncoder.

Configuring the PasswordEncoder

First, we'll declare an instance of a PasswordEncoder as a normal Spring bean:

<bean class="org.springframework.security.authentication.encoding.ShaPasswordEncoder" id="passwordEncoder"/>

You'll note that we're using the SHA-1 PasswordEncoder implementation. This is an efficient one-way encoding algorithm, commonly used for password storage.

Configuring the AuthenticationProvider

We'll need to configure the DaoAuthenticationProvider to have a reference to the PasswordEncoder, so that it can encode and compare the presented password during user login. Simply add a <password-encoder> declaration and refer to the bean ID we defined in the previous step:

<authentication-manager alias="authenticationManager">
  <authentication-provider user-service-ref="jdbcUserService">
    <password-encoder ref="passwordEncoder"/>
  </authentication-provider>
</authentication-manager>

Try to start the application at this point, and then try to log in. You'll notice that what were previously valid login credentials are now being rejected. This is because the passwords stored in the database (loaded with the bootstrap test-users-groups-data.sql script) are not stored in an encrypted form that matches the password encoder.
We'll need to post-process the bootstrap data with some simple Java code.

Writing the database bootstrap password encoder

The approach we'll take for encoding the passwords loaded via SQL is to have a Spring bean that executes an init method after the embedded-database bean is instantiated. The code for this bean, com.packtpub.springsecurity.security.DatabasePasswordSecurerBean, is fairly simple.

public class DatabasePasswordSecurerBean extends JdbcDaoSupport {

    @Autowired
    private PasswordEncoder passwordEncoder;

    public void secureDatabase() {
        getJdbcTemplate().query("select username, password from users",
            new RowCallbackHandler() {
                @Override
                public void processRow(ResultSet rs) throws SQLException {
                    String username = rs.getString(1);
                    String password = rs.getString(2);
                    String encodedPassword =
                        passwordEncoder.encodePassword(password, null);
                    getJdbcTemplate().update(
                        "update users set password = ? where username = ?",
                        encodedPassword, username);
                    logger.debug("Updating password for username: "
                        + username + " to: " + encodedPassword);
                }
            });
    }
}

The code uses the Spring JdbcTemplate functionality to loop through all the users in the database and encode each password using the injected PasswordEncoder reference. Each password is updated individually.
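The excerpt doesn't show the bean declaration that triggers this post-processing, but the wiring might look roughly like the following sketch, which relies on Spring's standard init-method and depends-on bean attributes; the bean id and the dataSource bean name are assumptions for illustration:

<!-- Hypothetical wiring: invoke secureDatabase() once the embedded database bean exists -->
<bean id="databasePasswordSecurer"
      class="com.packtpub.springsecurity.security.DatabasePasswordSecurerBean"
      init-method="secureDatabase"
      depends-on="dataSource">
  <!-- JdbcDaoSupport needs a DataSource; "dataSource" is assumed to be the
       embedded-database bean's id -->
  <property name="dataSource" ref="dataSource"/>
</bean>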


Creating and Using Composer Packages

Packt
29 Oct 2013
7 min read
Using Bundles

One of the great features in Laravel is the ease with which we can include the class libraries that others have made using bundles. On the Laravel site, there are already many useful bundles, some of which automate certain tasks while others easily integrate with third-party APIs. A recent addition to the PHP world is Composer, which allows us to use libraries (or packages) that aren't specific to Laravel.

In this article, we'll get up and running with using bundles, and we'll even create our own bundle that others can download. We'll also see how to incorporate Composer into our Laravel installation to open up a wide range of PHP libraries that we can use in our application.

Downloading and installing packages

One of the best features of Laravel is how modular it is. Most of the framework is built using libraries, or packages, that are well tested and widely used in other projects. By using Composer for dependency management, we can easily include other packages and seamlessly integrate them into our Laravel app. For this recipe, we'll be installing two popular packages into our app: Jeffrey Way's Laravel 4 Generators and the Imagine image processing package.

Getting ready

For this recipe, we need a standard installation of Laravel using Composer.

How to do it...

For this recipe, we will follow these steps:

1. Go to https://packagist.org/.
2. In the search box, search for way generator.
3. Click on the link for way/generators.
4. View the details at https://packagist.org/packages/way/generators and take notice of the require line to get the package's version. For our purposes, we'll use "way/generators": "1.0.*".
5. In our application's root directory, open up the composer.json file and add the package to the require section so it looks like this:

   "require": {
       "laravel/framework": "4.0.*",
       "way/generators": "1.0.*"
   },

6. Go back to http://packagist.org and perform a search for imagine.
7. Click on the link to imagine/imagine and copy the require code for dev-master.
8. Go back to our composer.json file and update the require section to include the imagine package. It should now look similar to the following code:

   "require": {
       "laravel/framework": "4.0.*",
       "way/generators": "1.0.*",
       "imagine/imagine": "dev-master"
   },

9. Open the command line, and in the root of our application, run the Composer update as follows:

   php composer.phar update

10. Finally, we'll add the Generator Service Provider, so open the app/config/app.php file and in the providers array, add the following line:

   'Way\Generators\GeneratorsServiceProvider'

How it works...

To get our package, we first go to packagist.org and search for the package we want. We could also click on the Browse packages link, which displays a list of the most recent packages as well as the most popular. After clicking on the package we want, we'll be taken to the detail page, which lists various links including the package's repository and home page. We could also click on the package's maintainer link to see other packages they have released.

Underneath, we'll see the various versions of the package. If we open that version's detail page, we'll find the code we need to use for our composer.json file. We could either choose to use a strict version number, add a wildcard to the version, or use dev-master, which will install whatever is updated on the package's master branch.
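To make those three constraint styles concrete, here is a small sketch of a require block (the package names and version numbers are invented for illustration, not taken from the book):

"require": {
    "vendor/package-a": "2.1.3",
    "vendor/package-b": "2.1.*",
    "vendor/package-c": "dev-master"
}

The first entry pins an exact release, the second accepts any patch release in the 2.1 series, and the third tracks whatever currently sits on the package's master branch.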
For the Generators package, we'll only use Version 1.0, but allow any minor fixes to that version. For the imagine package, we'll use dev-master, so whatever is in their repository's master branch will be downloaded, regardless of version number. We then run update on Composer and it will automatically download and install all of the packages we chose. Finally, to use Generators in our app, we need to register the service provider in our app's config file.

Using the Generators package to set up an app

Generators is a popular Laravel package that automates quite a bit of file creation. In addition to controllers and models, it can also generate views, migrations, seeds, and more, all through a command-line interface.

Getting ready

For this recipe, we'll be using the Laravel 4 Generators package maintained by Jeffrey Way that was installed in the Downloading and installing packages recipe. We'll also need a properly configured MySQL database.

How to do it…

Follow these steps for this recipe:

1. Open the command line in the root of our app and, using the generator, create a scaffold for our cities as follows:

   php artisan generate:scaffold cities --fields="city:string"

2. In the command line, create a scaffold for our superheroes as follows:

   php artisan generate:scaffold superheroes --fields="name:string, city_id:integer:unsigned"

3. In our project, look in the app/database/seeds directory and find a file named CitiesTableSeeder.php. Open it and add some data to the $cities array as follows:

   <?php

   class CitiesTableSeeder extends Seeder {

       public function run()
       {
           DB::table('cities')->delete();

           $cities = array(
               array(
                   'id'         => 1,
                   'city'       => 'New York',
                   'created_at' => date('Y-m-d g:i:s', time())
               ),
               array(
                   'id'         => 2,
                   'city'       => 'Metropolis',
                   'created_at' => date('Y-m-d g:i:s', time())
               ),
               array(
                   'id'         => 3,
                   'city'       => 'Gotham',
                   'created_at' => date('Y-m-d g:i:s', time())
               )
           );

           DB::table('cities')->insert($cities);
       }
   }

4. In the app/database/seeds directory, open SuperheroesTableSeeder.php and add some data to it:

   <?php

   class SuperheroesTableSeeder extends Seeder {

       public function run()
       {
           DB::table('superheroes')->delete();

           $superheroes = array(
               array(
                   'name'       => 'Spiderman',
                   'city_id'    => 1,
                   'created_at' => date('Y-m-d g:i:s', time())
               ),
               array(
                   'name'       => 'Superman',
                   'city_id'    => 2,
                   'created_at' => date('Y-m-d g:i:s', time())
               ),
               array(
                   'name'       => 'Batman',
                   'city_id'    => 3,
                   'created_at' => date('Y-m-d g:i:s', time())
               ),
               array(
                   'name'       => 'The Thing',
                   'city_id'    => 1,
                   'created_at' => date('Y-m-d g:i:s', time())
               )
           );

           DB::table('superheroes')->insert($superheroes);
       }
   }

5. In the command line, run the migration, then seed the database as follows:

   php artisan migrate
   php artisan db:seed

6. Open up a web browser and go to http://{your-server}/cities to see the city data we added.
7. Now, navigate to http://{your-server}/superheroes to see the superhero data we added.

How it works...

We begin by running the scaffold generator for our cities and superheroes tables. Using the --fields tag, we can determine which columns we want in our table and also set options such as data type. For our cities table, we'll only need the name of the city. For our superheroes table, we'll want the name of the hero as well as the ID of the city where they live. When we run the generator, many files will automatically be created for us.
For example, with cities, we'll get City.php in our models, CitiesController.php in controllers, and a cities directory in our views with the index, show, create, and edit views. We then get a migration named Create_cities_table.php, a CitiesTableSeeder.php seed file, and CitiesTest.php in our tests directory. We'll also have our DatabaseSeeder.php file and our routes.php file updated to include everything we need.

To add some data to our tables, we opened the CitiesTableSeeder.php file and updated our $cities array with arrays that represent each row we want to add. We did the same thing for our SuperheroesTableSeeder.php file. Finally, we ran the migrations and seeder, and our database was created with all the data inserted.

The Generators package has already created the views and controllers we need to manipulate the data, so we can easily go to our browser and see all of our data. We can also create new rows, update existing rows, and delete rows.


Manipulating Images with JavaFX

Packt
25 Aug 2010
4 min read
One of the most celebrated features of JavaFX is its inherent support for media playback. As of version 1.2, JavaFX has the ability to seamlessly load images in different formats, play audio, and play video in several formats using its built-in components. To achieve platform independence and performance, the support for media playback in JavaFX is implemented as a two-tiered strategy:

- Platform-independent APIs: the JavaFX SDK comes with a media API designed to provide a uniform set of interfaces to media functionalities. Part of the platform-independence offerings include a portable codec (On2's VP6), which will play on all platforms where JavaFX media playback is supported.
- Platform-dependent implementations: to boost media playback performance, JavaFX also has the ability to use the native media engine supported by the underlying OS. For instance, playback on the Windows platform may be rendered by the Windows DirectShow media engine (see next recipe).

This two-part article shows you how to use the supported media rendering components, including ImageView, MediaPlayer, and MediaView. These components provide high-level APIs that let developers create applications with engaging and interactive media content.

Accessing media assets

You may have seen the use of the variable __DIR__ when accessing local resources, but may not fully know about its purpose and how it works. So, what does that special variable store? In this recipe, we will explore how to use the __DIR__ special variable and other means of loading resources locally or remotely.

Getting ready

The concepts presented in this recipe are used widely throughout the JavaFX application framework when pointing to resources. In general, classes that point to a local or remote resource use a string representation of a URL where the resource is stored. This is especially true for the ImageView and MediaPlayer classes discussed in this article.

How to do it...

This recipe shows you three ways of creating a URL to point to a local or remote resource used by a JavaFX application. The full listing of the code presented here can be found in ch05/source-code/src/UrlAccess.fx.

1. Using the __DIR__ pseudo-variable to access assets as packaged resources:

   var resImage = "{__DIR__}image.png";

2. Using a direct reference to a local file:

   var localImage = "file:/users/home/vladimir/javafx/ch005/source-code/src/image.png";

3. Using a URL to access a remote file:

   var remoteImage = "http://www.flickr.com/3201/2905493571_a6db13ce1b_d.jpg";

How it works...

Loading media assets in JavaFX requires the use of a well-formatted URL that points to the location of the resources. For instance, both the Image and the Media classes (covered later in this article series) require a URL string to locate and load the resource to be rendered. The URL must be an absolute path that specifies the fully-realized scheme, device, and resource location. The previous code snippets show the following three ways of accessing resources in JavaFX:

- __DIR__ pseudo-variable: often, you will see JavaFX's pseudo-variable __DIR__ used when specifying the location of a resource. It is a special variable that stores the String value of the directory where the executing class that referenced __DIR__ is located. This is valuable, especially when the resource is embedded in the application's JAR file. At runtime, __DIR__ stores the location of the resource in the JAR file, making it accessible for reading as a stream.
  In the previous code, for example, the expression {__DIR__}image.png explodes as jar:file:/users/home/vladimir/javafx/ch005/source-code/dist/source-code.jar!/image.png.

- Direct reference to local resources: when the application is deployed as a desktop application, you can specify the location of your resources using URLs that provide the absolute path to where the resources are located. In our code, we use file:/users/home/vladimir/javafx/ch005/source-code/src/image.png as the absolute, fully qualified path to the image file image.png.

- Direct reference to remote resources: finally, when loading media assets, you are able to specify the path of a fully qualified URL to a remote resource using HTTP. As long as there are no subsequent permissions required, classes such as Image and Media are able to pull down the resource with no problem. For our code, we use a URL to a Flickr image, http://www.flickr.com/3201/2905493571_a6db13ce1b_d.jpg.

There's more...

Besides __DIR__, JavaFX provides the __FILE__ pseudo-variable as well. As you may well guess, __FILE__ resolves to the fully qualified path of the JavaFX script file that contains the __FILE__ pseudo-variable. At runtime, when your application is compiled, this will be the script class that contains the __FILE__ reference.
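To tie the recipe together, here is a minimal sketch, not taken from the book's listing, of feeding one of these URL strings to the Image and ImageView classes mentioned above (the file name image.png is assumed):

// Load a packaged image via __DIR__ and display it in a window
import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;

def resImage = "{__DIR__}image.png";

Stage {
    title: "URL access"
    scene: Scene {
        content: [
            ImageView {
                image: Image { url: resImage }
            }
        ]
    }
}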


Python founder resigns - Guido van Rossum goes ‘on a permanent vacation from being BDFL’

Savia Lobo
13 Jul 2018
5 min read
Python is one of the most popular scripting languages, widely adopted and loved for its simplicity. Since its humble beginnings in the late 80s as an interpreter for a new, simple-to-read scripting language, it has come to dominate much of the tech world. Python has become a vital part of web development stacks, much as Perl and PHP have been, and has been core to domains like security. It is also used in currently popular technologies such as AI, ML, and DL.

After 28 years of successfully stewarding the Python community since inventing it back in Dec 1989, Guido van Rossum has decided to take himself out of the decision-making process of the community as its Benevolent Dictator For Life (BDFL). Guido still promises to be a part of the core development group. He also added that he will be available to mentor people, but most of the time the community will have to manage on its own.

Benevolent Dictator For Life (BDFL) is a term that Guido's fellow Python enthusiasts came up with for him, as a joke, when discussing minutes of a meeting over email regarding leading Python's development and adoption.

Who will look after the Python community now?

Guido van Rossum said, "I am not going to appoint a successor". True to his leadership style, he has thrown his team of core developers into the deep end by asking them to consider what the Python community's new governance model could be. In his memo, he asked, "So what are you all going to do? Create a democracy? Anarchy? A dictatorship? A federation?"

Guido's parting advice to the core dev team

Guido expressed confidence in his team to continue to manage the day-to-day tasks and operations just as they've been doing under his leadership. The two things he wants the core developers and the community to think deeply about are:

- How will PEPs be decided?
- How will new core developers be inducted?

He also emphasized the importance of militantly fostering the right community culture through Python's Community Code of Conduct (CoC). He said, "if you don't like that document your only option might be to leave this group voluntarily. Perhaps there are issues to decide like when should someone be kicked out (this could be banning people from python-dev or python-ideas too since those are also covered by the CoC)."

He assured the team that while he has stepped down as the BDFL and from all decision-making duties, he will continue to be an active member of the community and will now be more available as a mentor to those on the core development team. Guido's decision to quit seems to have stemmed partly from the physical, mental, and emotional toll that the role has taken on him over the years. He concluded his thread on Transfer of Power by saying, "I'm tired, and need a very long break".

How is the Python community taking this decision?

The development team hopes Guido will make a comeback after his well-deserved break. As BDFL, Guido has provided them with consistency in design and taste. By having Guido as a monitor, the team has had a very consistent view of how the community should behave, and this has been an asset for the whole team. Now they have four ways to explore to govern the Python community:

1. Find a new BDFL. This option seems highly unlikely, as Guido's legacy is irreplaceable. Besides, it is practically the least robust option to rely on one person to take all key decisions and to commit their full time to the community. That person also needs to be well respected and accepted as a de facto head.
2. Set up an N-virate leadership team (a group of 3 (triumvirate) or 5 (quintumvirate) experts). With such a model, the responsibilities and load will be equally distributed among the chosen members from the core development team. This appears to be the current favorite on the thread that opened yesterday.

3. Become a democracy. In this model, the community gets to vote on all key decisions. This seems like the short-term fix the team is gravitating towards, at least to decide on the immediate task at hand. But many on the team acknowledge that this is not a permanent answer, as it would pull the language in too many directions and is also time-consuming.

4. Explore the governance models of other open source communities. This option is also being seriously considered in the discussions.

Clearly, the community loves Guido, evident from the deluge of well wishes he's receiving from all over the globe. You know you've done your job well when you hear someone say 'You changed my life'. Guido has changed millions of lives for the better.

https://twitter.com/AndrewYNg/status/1017664116482162689
https://twitter.com/anthonypjshaw/status/1017610576640393216
https://twitter.com/generativist/status/1017547690228396032
https://twitter.com/bloodyquantum/status/1017558674024218624

Thank you, Guido, for Python, your heart, and your leadership. We know the community will thrive even in your absence because you've cultivated an excellent culture and a great set of minds.


Introduction to WordPress Plugin

Packt
22 Feb 2018
13 min read
In this article, Yannick Lefebvre, author of WordPress Plugin Development Cookbook, Second Edition, will cover the following recipes:

- Creating a new shortcode with parameters
- Managing multiple sets of user settings from a single admin page

WordPress shortcodes are a simple, yet powerful tool that can be used to automate the insertion of code into web pages. For example, a shortcode could be used to automate the insertion of videos from a third-party platform that is not supported natively by WordPress, or to embed content from a popular web site. By following the two code samples found in this article, you will learn how to create a WordPress plugin that defines your own shortcode to be able to quickly embed Twitter feeds on a web site. You will also learn how to create an administration configuration panel to be able to create a set of configurations that can be referenced when using your newly-created shortcode.

Creating a new shortcode with parameters

While simple shortcodes already provide a lot of potential to output complex content to a page by entering a few characters in the post editor, shortcodes become even more useful when they are coupled with parameters that will be passed to their associated processing function. Using this technique, it becomes very easy to create a shortcode that accelerates the insertion of external content in WordPress posts or pages by only needing to specify the shortcode and the unique identifier of the source element to be displayed. We will illustrate this concept in this recipe by creating a shortcode that will be used to quickly add Twitter feeds to posts or pages.

How to do it...

1. Navigate to the WordPress plugin directory of your development installation.
2. Create a new directory called ch3-twitter-embed.
3. Navigate to this directory and create a new text file called ch3-twitter-embed.php.
4. Open the new file in a code editor and add an appropriate header at the top of the plugin file, naming the plugin Chapter 2 - Twitter Embed.
5. Add the following line of code to declare a new shortcode and specify the name of the function that should be called when the shortcode is found in posts or pages:

   add_shortcode( 'twitterfeed', 'ch3te_twitter_embed_shortcode' );

6. Add the following code section to provide an implementation for the ch3te_twitter_embed_shortcode function:

   function ch3te_twitter_embed_shortcode( $atts ) {
       extract( shortcode_atts( array(
           'user_name' => 'ylefebvre'
       ), $atts ) );

       if ( !empty( $user_name ) ) {
           $output = '<a class="twitter-timeline" href="';
           $output .= esc_url( 'https://twitter.com/' . $user_name );
           $output .= '">Tweets by ' . esc_html( $user_name );
           $output .= '</a><script async ';
           $output .= 'src="//platform.twitter.com/widgets.js"';
           $output .= ' charset="utf-8"></script>';
       } else {
           $output = '';
       }

       return $output;
   }

7. Save and close the plugin file.
8. Log in to the administration page of your development WordPress installation.
9. Click on Plugins in the left-hand navigation menu.
10. Activate your new plugin.
11. Create a new page and use the shortcode [twitterfeed user_name='WordPress'] in the page editor, where WordPress is the Twitter username of the feed to display.
12. Save and view the page to see that the shortcode was replaced by an embedded Twitter feed on your site.
13. Edit the page and remove the user_name parameter and its associated value, only leaving the core [twitterfeed] shortcode in the post, and Save.
14. Refresh the page and see that the feed is still being displayed, but now shows tweets from another account.
How it works...

When shortcodes are used with parameters, these extra pieces of data are sent to the associated processing function in the $atts parameter variable. By using a combination of the standard PHP extract and WordPress-specific shortcode_atts functions, our plugin is able to parse the data sent to the shortcode and create an array of identifiers and values that are subsequently transformed into PHP variables that we can use in the rest of our shortcode implementation function. In this specific example, we expect a single variable to be used, called user_name, which will be stored in a PHP variable called $user_name. If the user enters the shortcode without any parameter, a default value of ylefebvre will be assigned to the username variable to ensure that the plugin still works.

Since we are going to accept user input in this code, we also verify that the user did not provide an empty string, and we use the esc_html and esc_url functions to remove any potentially harmful HTML characters from the input string and make sure that the link destination URL is valid. Once we have access to the Twitter username, we can put together the required HTML code that will embed a Twitter feed in our page and display the selected user's tweets. While this example only has one argument, it is possible to define multiple parameters for a shortcode.

Managing multiple sets of user settings from a single admin page

Throughout this article, you have learned how to create configuration pages to manage single sets of configuration options for our plugins. In some cases, only being able to specify a single set of options will not be enough. For example, looking back at the Twitter embed shortcode plugin that was created, a single configuration panel would only allow users to specify one set of options, such as the desired Twitter feed dimensions or the number of tweets to display. A more flexible solution would be to allow users to specify multiple sets of configuration options, which could then be called up by using an extra shortcode parameter (for example, [twitterfeed user_name="WordPress" option_id="2"]).

While the first thought that might cross your mind to configure such a plugin is to create a multi-level menu item with submenus to store a number of different settings, this method would produce a very awkward interface for users to navigate. A better way is to use a single panel but give the user a way to select between multiple sets of options to be modified. In this recipe, you will learn how to enhance the previously created Twitter feed shortcode plugin to be able to control the embedded feed size and number of tweets to display from the plugin configuration panel, and to give users the ability to specify multiple display sizes.

Getting ready

You should have already followed the Creating a new shortcode with parameters recipe in the article to have a starting point for this recipe. Alternatively, you can get the resulting code (Chapter 2/ch3-twitter-embed/ch3-twitter-embed.php) from the downloaded code bundle.

How to do it...

1. Navigate to the ch3-twitter-embed folder of the WordPress plugin directory of your development installation.
2. Open the ch3-twitter-embed.php file in a text editor.
3. Add the following lines of code to implement an activation callback to initialize plugin options when it is installed or upgraded:

   register_activation_hook( __FILE__, 'ch3te_set_default_options_array' );

   function ch3te_set_default_options_array() {
       ch3te_get_options();
   }

   function ch3te_get_options( $id = 1 ) {
       $options = get_option( 'ch3te_options_' . $id, array() );

       $new_options['setting_name'] = 'Default';
       $new_options['width'] = 560;
       $new_options['number_of_tweets'] = 3;

       $merged_options = wp_parse_args( $options, $new_options );
       $compare_options = array_diff_key( $new_options, $options );

       if ( empty( $options ) || !empty( $compare_options ) ) {
           update_option( 'ch3te_options_' . $id, $merged_options );
       }

       return $merged_options;
   }

4. Insert the following code segment to register a function to be called when the administration menu is put together. When this happens, the callback function adds an item to the Settings menu and specifies the function to be called to render the configuration page:

   // Assign function to be called when admin menu is constructed
   add_action( 'admin_menu', 'ch3te_settings_menu' );

   // Function to add item to Settings menu and
   // specify function to display options page content
   function ch3te_settings_menu() {
       add_options_page( 'Twitter Embed Configuration', 'Twitter Embed',
           'manage_options', 'ch3te-twitter-embed', 'ch3te_config_page' );
   }

5. Add the following code to implement the configuration page rendering function:

   // Function to display options page content
   function ch3te_config_page() {
       // Retrieve plugin configuration options from database
       if ( isset( $_GET['option_id'] ) ) {
           $option_id = intval( $_GET['option_id'] );
       } elseif ( isset( $_POST['option_id'] ) ) {
           $option_id = intval( $_POST['option_id'] );
       } else {
           $option_id = 1;
       }
       $options = ch3te_get_options( $option_id );
       ?>
       <div id="ch3te-general" class="wrap">
       <h3>Twitter Embed</h3>

       <!-- Display message when settings are saved -->
       <?php if ( isset( $_GET['message'] ) && $_GET['message'] == '1' ) { ?>
           <div id='message' class='updated fade'>
               <p><strong>Settings Saved</strong></p>
           </div>
       <?php } ?>

       <!-- Option selector -->
       <div id="icon-themes" class="icon32"><br></div>
       <h3 class="nav-tab-wrapper">
       <?php for ( $counter = 1; $counter <= 5; $counter++ ) {
           $temp_options = ch3te_get_options( $counter );
           $class = ( $counter == $option_id ) ? ' nav-tab-active' : '';
           ?>
           <a class="nav-tab<?php echo $class; ?>"
              href="<?php echo add_query_arg( array( 'page' => 'ch3te-twitter-embed',
                  'option_id' => $counter ), admin_url( 'options-general.php' ) ); ?>"><?php
               echo $counter;
               if ( $temp_options !== false ) echo ' (' . $temp_options['setting_name'] . ')';
               else echo ' (Empty)'; ?></a>
       <?php } ?>
       </h3><br />

       <!-- Main options form -->
       <form name="ch3te_options_form" method="post" action="admin-post.php">
           <input type="hidden" name="action" value="save_ch3te_options" />
           <input type="hidden" name="option_id" value="<?php echo $option_id; ?>" />
           <?php wp_nonce_field( 'ch3te' ); ?>
           <table>
               <tr><td>Setting name</td>
                   <td><input type="text" name="setting_name"
                       value="<?php echo esc_html( $options['setting_name'] ); ?>"/></td>
               </tr>
               <tr><td>Feed width</td>
                   <td><input type="text" name="width"
                       value="<?php echo esc_html( $options['width'] ); ?>"/></td>
               </tr>
               <tr><td>Number of Tweets to display</td>
                   <td><input type="text" name="number_of_tweets"
                       value="<?php echo esc_html( $options['number_of_tweets'] ); ?>"/></td>
               </tr>
           </table><br />
           <input type="submit" value="Submit" class="button-primary" />
       </form>
       </div>
       <?php
   }

6. Add the following block of code to register a function that will process user options when submitted to the site:

   add_action( 'admin_init', 'ch3te_admin_init' );

   function ch3te_admin_init() {
       add_action( 'admin_post_save_ch3te_options', 'process_ch3te_options' );
   }

7. Add the following code to implement the process_ch3te_options function, declared in the previous block of code, and to declare a utility function used to clean the redirection path:

   // Function to process user data submission
   function process_ch3te_options() {
       // Check that user has proper security level
       if ( !current_user_can( 'manage_options' ) ) {
           wp_die( 'Not allowed' );
       }

       // Check that nonce field is present
       check_admin_referer( 'ch3te' );

       // Check if option_id field was present
       if ( isset( $_POST['option_id'] ) ) {
           $option_id = intval( $_POST['option_id'] );
       } else {
           $option_id = 1;
       }

       // Build option name and retrieve options
       $options = ch3te_get_options( $option_id );

       // Cycle through all text fields and store their values
       foreach ( array( 'setting_name' ) as $param_name ) {
           if ( isset( $_POST[$param_name] ) ) {
               $options[$param_name] = sanitize_text_field( $_POST[$param_name] );
           }
       }

       // Cycle through all numeric fields, convert to int and store
       foreach ( array( 'width', 'number_of_tweets' ) as $param_name ) {
           if ( isset( $_POST[$param_name] ) ) {
               $options[$param_name] = intval( $_POST[$param_name] );
           }
       }

       // Store updated options array to database
       $options_name = 'ch3te_options_' . $option_id;
       update_option( $options_name, $options );

       // Redirect back to the configuration page
       $cleanaddress = add_query_arg( array( 'message' => 1,
           'option_id' => $option_id, 'page' => 'ch3te-twitter-embed' ),
           admin_url( 'options-general.php' ) );
       wp_redirect( $cleanaddress );
       exit;
   }

8. Find the ch3te_twitter_embed_shortcode function and modify it as follows to accept the new option_id parameter and load the plugin options to produce the desired output.
The changes are identified in bold within the recipe:

   function ch3te_twitter_embed_shortcode( $atts ) {
       extract( shortcode_atts( array(
           'user_name' => 'ylefebvre',
           'option_id' => '1'
       ), $atts ) );

       if ( intval( $option_id ) < 1 || intval( $option_id ) > 5 ) {
           $option_id = 1;
       }
       $options = ch3te_get_options( $option_id );

       if ( !empty( $user_name ) ) {
           $output = '<a class="twitter-timeline" href="';
           $output .= esc_url( 'https://twitter.com/' . $user_name );
           $output .= '" data-width="' . $options['width'] .

9. Save and close the plugin file.
10. Deactivate and then Activate the Chapter 2 - Twitter Embed plugin from the administration interface to execute its activation function and create default settings.
11. Navigate to the Settings menu and select the Twitter Embed submenu item to see the newly created configuration panel, with the first set of options being displayed and more sets of options accessible through the selector shown at the top of the page.
12. To select the set of options to be used, add the parameter option_id to the shortcode used to display a Twitter feed, as follows:

   [twitterfeed user_name="WordPress" option_id="1"]

How it works...

This recipe shows how we can leverage options arrays to create multiple sets of options simply by creating the name of the options array on the fly. Instead of having a specific option name in the first parameter of the get_option function call, we create a string with an option ID. This ID is sent through as a URL parameter on the configuration page and as a hidden text field when processing the form data. On initialization, the plugin only creates a single set of options, which is probably enough for most casual users of the plugin. Doing so avoids cluttering the site database with useless options. When the user requests to view one of the empty option sets, the plugin creates a new set of options right before rendering the options page. The rest of the code is very similar to the other examples that we saw in this article, since the way to access the array elements remains the same.

Summary

In this article, the author has explained the entire process of how to create a new shortcode with parameters and how to manage multiple sets of user settings from a single admin page.


Deploying Highly Available OpenStack

Packt
21 Sep 2015
17 min read
In this article by Arthur Berezin, the author of the book OpenStack Configuration Cookbook, we will cover the following topics:

- Installing Pacemaker
- Installing HAProxy
- Configuring Galera cluster for MariaDB
- Installing RabbitMQ with mirrored queues
- Configuring highly available OpenStack services

Many organizations choose OpenStack for its distributed architecture and ability to deliver the Infrastructure as a Service (IaaS) platform for mission-critical applications. In such environments, it is crucial to configure all OpenStack services in a highly available configuration to provide as much uptime as possible for the control plane services of the cloud.

Deploying a highly available control plane for OpenStack can be achieved in various configurations. Each of these configurations would serve a certain set of demands and introduce a growing set of prerequisites. Pacemaker is used to create active-active clusters to guarantee services' resilience to possible faults. Pacemaker is also used to create a virtual IP address for each of the services. HAProxy serves as a load balancer for incoming calls to service APIs. This article discusses neither high availability of virtual machine instances nor of the hypervisor's Nova-Compute service.

Most OpenStack services are stateless; OpenStack services store persistent data in a SQL database, which is potentially a single point of failure we should make highly available. In this article, we will deploy a highly available database using MariaDB and Galera, which implements multimaster replication. To ensure availability of the message bus, we will configure RabbitMQ with mirrored queues. This article discusses configuring each service separately on a three-controller layout that runs the OpenStack controller services, including Neutron, the database, and the RabbitMQ message bus. All services can be configured on several controller nodes, or each service could be implemented on its own separate set of hosts.

Installing Pacemaker

All OpenStack services consist of Linux system services. The first step of ensuring services' availability is to configure Pacemaker clusters for each service, so that Pacemaker monitors the services. In case of failure, Pacemaker restarts the failed service. In addition, we will use Pacemaker to create a virtual IP address for each of OpenStack's services to ensure services are accessible using the same IP address when failures occur and the actual service has relocated to another host. In this section, we will install Pacemaker and prepare it to configure highly available OpenStack services.

Getting ready

To ensure maximum availability, we will install and configure three hosts to serve as controller nodes. Prepare three controller hosts with identical hardware and network layout. We will base our configuration for most of the OpenStack services on the configuration used in a single-controller layout, and we will deploy Neutron network services on all three controller nodes.
How to do it…

Run the following steps on the three highly available controller nodes:

1. Install the Pacemaker packages:

   [root@controller1 ~]# yum install -y pcs pacemaker corosync fence-agents-all resource-agents

2. Enable and start the pcsd service:

   [root@controller1 ~]# systemctl enable pcsd
   [root@controller1 ~]# systemctl start pcsd

3. Set a password for the hacluster user; the password should be identical on all the nodes:

   [root@controller1 ~]# echo 'password' | passwd --stdin hacluster

   We will use the hacluster password throughout the HAProxy configuration.

4. Authenticate all controller nodes, using the -p option to give the password on the command line, and provide the same password you set in the previous step:

   [root@controller1 ~]# pcs cluster auth controller1 controller2 controller3 -u hacluster -p password --force

At this point, you may run pcs commands from a single controller node instead of running commands on each node separately.

There's more...

You may find the complete Pacemaker documentation, which includes installation documentation, a complete configuration reference, and examples, on the Cluster Labs website at http://clusterlabs.org/doc/.
Installing HAProxy

Addressing high availability for OpenStack includes avoiding high load on a single host and ensuring incoming TCP connections to all API endpoints are balanced across the controller hosts. We will use HAProxy, an open source load balancer, which is particularly suited for HTTP load balancing as it supports session persistence and layer 7 processing.

Getting ready

In this section, we will install HAProxy on all controller hosts, configure a Pacemaker cluster for the HAProxy services, and prepare for OpenStack services configuration.

How to do it...

Run the following steps on all controller nodes:

1. Install the HAProxy package:

   # yum install -y haproxy

2. Enable the nonlocal binding kernel parameter:

   # echo net.ipv4.ip_nonlocal_bind=1 >> /etc/sysctl.d/haproxy.conf
   # echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind

3. Configure HAProxy load balancer settings for the GaleraDB, RabbitMQ, and Keystone services. Edit /etc/haproxy/haproxy.cfg with the following configuration:

   global
       daemon

   defaults
       mode tcp
       maxconn 10000
       timeout connect 2s
       timeout client 10s
       timeout server 10s

   frontend vip-db
       bind 192.168.16.200:3306
       timeout client 90s
       default_backend db-vms-galera

   backend db-vms-galera
       option httpchk
       stick-table type ip size 2
       stick on dst
       timeout server 90s
       server rhos5-db1 192.168.16.58:3306 check inter 1s port 9200
       server rhos5-db2 192.168.16.59:3306 check inter 1s port 9200
       server rhos5-db3 192.168.16.60:3306 check inter 1s port 9200

   frontend vip-rabbitmq
       bind 192.168.16.213:5672
       timeout client 900m
       default_backend rabbitmq-vms

   backend rabbitmq-vms
       balance roundrobin
       timeout server 900m
       server rhos5-rabbitmq1 192.168.16.61:5672 check inter 1s
       server rhos5-rabbitmq2 192.168.16.62:5672 check inter 1s
       server rhos5-rabbitmq3 192.168.16.63:5672 check inter 1s

   frontend vip-keystone-admin
       bind 192.168.16.202:35357
       default_backend keystone-admin-vms

   backend keystone-admin-vms
       balance roundrobin
       server rhos5-keystone1 192.168.16.64:35357 check inter 1s
       server rhos5-keystone2 192.168.16.65:35357 check inter 1s
       server rhos5-keystone3 192.168.16.66:35357 check inter 1s

   frontend vip-keystone-public
       bind 192.168.16.202:5000
       default_backend keystone-public-vms

   backend keystone-public-vms
       balance roundrobin
       server rhos5-keystone1 192.168.16.64:5000 check inter 1s
       server rhos5-keystone2 192.168.16.65:5000 check inter 1s
       server rhos5-keystone3 192.168.16.66:5000 check inter 1s

   This configuration file is an example of configuring HAProxy as a load balancer for the MariaDB, RabbitMQ, and Keystone services.

4. We need to authenticate on all nodes before we are allowed to change the configuration, so that we can configure all nodes from one point. Use the previously configured hacluster user and password to do this:

   # pcs cluster auth controller1 controller2 controller3 -u hacluster -p password --force

5. Create a Pacemaker cluster for the HAProxy service as follows (note that you can now run pcs commands from a single controller node):

   # pcs cluster setup --name ha-controller controller1 controller2 controller3
   # pcs cluster enable --all
   # pcs cluster start --all

6. Finally, using the pcs resource create command, create a cloned systemd resource that will run a highly available, active-active HAProxy service on all controller hosts:

   # pcs resource create lb-haproxy systemd:haproxy op monitor start-delay=10s --clone

7. Create the virtual IP address for each of the services:

   # pcs resource create vip-db IPaddr2 ip=192.168.16.200
   # pcs resource create vip-rabbitmq IPaddr2 ip=192.168.16.213
   # pcs resource create vip-keystone IPaddr2 ip=192.168.16.202

8. You may use the pcs status command to verify whether all resources are successfully running:

   # pcs status

Configuring Galera cluster for MariaDB

Galera is a multimaster cluster for MariaDB, which is based on synchronous replication between all cluster nodes. Effectively, Galera treats a cluster of MariaDB nodes as one single master node that reads and writes to all nodes.
Galera replication happens at transaction commit time, by broadcasting the transaction write set to the cluster for application. The client connects directly to the DBMS and experiences close to native DBMS behavior. The wsrep API (write set replication API) defines the interface between Galera replication and the DBMS.

Getting ready

In this section, we will install the Galera cluster packages for MariaDB on our three controller nodes, then we will configure Pacemaker to monitor all Galera services. Pacemaker can be stopped on all cluster nodes, as shown, if it is running from previous steps:

# pcs cluster stop --all

How to do it...

Perform the following steps on all controller nodes:

1. Install the Galera packages for MariaDB:

   # yum install -y mariadb-galera-server xinetd resource-agents

2. Edit /etc/sysconfig/clustercheck and add the following lines:

   MYSQL_USERNAME="clustercheck"
   MYSQL_PASSWORD="password"
   MYSQL_HOST="localhost"

3. Edit the Galera configuration file /etc/my.cnf.d/galera.cnf with the following lines, making sure to enter the host's IP address in the bind-address parameter:

   [mysqld]
   skip-name-resolve=1
   binlog_format=ROW
   default-storage-engine=innodb
   innodb_autoinc_lock_mode=2
   innodb_locks_unsafe_for_binlog=1
   query_cache_size=0
   query_cache_type=0
   bind-address=[host-IP-address]
   wsrep_provider=/usr/lib64/galera/libgalera_smm.so
   wsrep_cluster_name="galera_cluster"
   wsrep_slave_threads=1
   wsrep_certify_nonPK=1
   wsrep_max_ws_rows=131072
   wsrep_max_ws_size=1073741824
   wsrep_debug=0
   wsrep_convert_LOCK_to_trx=0
   wsrep_retry_autocommit=1
   wsrep_auto_increment_control=1
   wsrep_drupal_282555_workaround=0
   wsrep_causal_reads=0
   wsrep_notify_cmd=
   wsrep_sst_method=rsync

   You can learn more about each of Galera's default options on the documentation page at http://galeracluster.com/documentation-webpages/configuration.html.

4. Add the following lines to the xinetd configuration file /etc/xinetd.d/galera-monitor:

   service galera-monitor
   {
       port            = 9200
       disable         = no
       socket_type     = stream
       protocol        = tcp
       wait            = no
       user            = root
       group           = root
       groups          = yes
       server          = /usr/bin/clustercheck
       type            = UNLISTED
       per_source      = UNLIMITED
       log_on_success  =
       log_on_failure  = HOST
       flags           = REUSE
   }

5. Start and enable the xinetd and pcsd services:

   # systemctl enable xinetd
   # systemctl start xinetd
   # systemctl enable pcsd
   # systemctl start pcsd

6. Authenticate on all nodes, using the previously configured hacluster user and password:

   # pcs cluster auth controller1 controller2 controller3 -u hacluster -p password --force

   Now commands can be run from a single controller node.

7. Create a Pacemaker cluster for the Galera service:

   # pcs cluster setup --name controller-db controller1 controller2 controller3
   # pcs cluster enable --all
   # pcs cluster start --all

8. Add the Galera service resource to the Galera Pacemaker cluster:

   # pcs resource create galera galera enable_creation=true wsrep_cluster_address="gcomm://controller1,controller2,controller3" meta master-max=3 ordered=true op promote timeout=300s on-fail=block --master

9. Create a user for the clustercheck xinetd service:

   mysql -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY 'password';"

See also

You can find the complete Galera documentation, which includes installation documentation and a complete configuration reference with examples, on the Galera cluster website at http://galeracluster.com/documentation-webpages/.
Installing RabbitMQ with mirrored queues

RabbitMQ is used as a message bus for services to inter-communicate. The queues are located on a single node, and that makes the RabbitMQ service a single point of failure. To avoid RabbitMQ being a single point of failure, we will configure RabbitMQ to use mirrored queues across multiple nodes. Each mirrored queue consists of one master and one or more slaves, with the oldest slave being promoted to the new master if the old master disappears for any reason. Messages published to the queue are replicated to all slaves.

Getting ready

In this section, we will install the RabbitMQ packages on our three controller nodes and configure RabbitMQ to mirror its queues across all controller nodes, then we will configure Pacemaker to monitor all RabbitMQ services.

How to do it...

Perform the following steps on all controller nodes:

1. Install the RabbitMQ packages on all controller nodes:

   # yum -y install rabbitmq-server

2. Start, and then stop, the rabbitmq-server service:

   # systemctl start rabbitmq-server
   # systemctl stop rabbitmq-server

3. RabbitMQ cluster nodes use a cookie to determine whether they are allowed to communicate with each other; for nodes to be able to communicate, they must have the same cookie. Copy erlang.cookie from controller1 to controller2 and controller3:

   [root@controller1 ~]# scp /var/lib/rabbitmq/.erlang.cookie root@controller2:/var/lib/rabbitmq/
   [root@controller1 ~]# scp /var/lib/rabbitmq/.erlang.cookie root@controller3:/var/lib/rabbitmq/

4. Start and enable Pacemaker on all nodes:

   # systemctl enable pcsd
   # systemctl start pcsd

5. Since we already authenticated all nodes of the cluster in the previous section, we can now run the following commands on controller1. Create a new Pacemaker cluster for the RabbitMQ service as follows:

   [root@controller1 ~]# pcs cluster setup --name rabbitmq controller1 controller2 controller3
   [root@controller1 ~]# pcs cluster enable --all
   [root@controller1 ~]# pcs cluster start --all

6. Add a systemd resource for the RabbitMQ service to the Pacemaker cluster:

   [root@controller1 ~]# pcs resource create rabbitmq-server systemd:rabbitmq-server op monitor start-delay=20s --clone

7. Since all RabbitMQ nodes must join the cluster one at a time, stop RabbitMQ on controller2 and controller3:

   [root@controller2 ~]# rabbitmqctl stop_app
   [root@controller3 ~]# rabbitmqctl stop_app

8. Join controller2 to the cluster and start RabbitMQ on it:

   [root@controller2 ~]# rabbitmqctl join_cluster rabbit@controller1
   [root@controller2 ~]# rabbitmqctl start_app

9. Now join controller3 to the cluster as well and start RabbitMQ on it:

   [root@controller3 ~]# rabbitmqctl join_cluster rabbit@controller1
   [root@controller3 ~]# rabbitmqctl start_app

10. At this point, the cluster should be configured, and we need to set RabbitMQ's HA policy to mirror the queues to all RabbitMQ cluster nodes as follows:

   [root@controller1 ~]# rabbitmqctl set_policy HA '^(?!amq.).*' '{"ha-mode": "all"}'

There's more...

The RabbitMQ cluster should now be configured with all the queues mirrored to all controller nodes.
To verify the cluster's state, you can use the rabbitmqctl cluster_status and rabbitmqctl list_policies commands from each of the controller nodes as follows:

[root@controller1 ~]# rabbitmqctl cluster_status
[root@controller1 ~]# rabbitmqctl list_policies

To verify Pacemaker's cluster status, you can use the pcs status command as follows:

[root@controller1 ~]# pcs status

See also

For complete documentation on how RabbitMQ implements the mirrored queues feature, and for additional configuration options, refer to the project's documentation pages at https://www.rabbitmq.com/clustering.html and https://www.rabbitmq.com/ha.html.

Configuring Highly Available OpenStack Services

Most OpenStack services are stateless web services that keep persistent data in a SQL database and use a message bus for inter-service communication. We will use Pacemaker and HAProxy to run OpenStack services in an active-active, highly available configuration, so that traffic for each of the services is load balanced across all controller nodes and the cloud can easily be scaled out to more controller nodes if needed. We will configure Pacemaker clusters for each of the services that will run on all controller nodes. We will also use Pacemaker to create a virtual IP address for each of OpenStack's services, so that rather than addressing a specific node, services are addressed by their corresponding virtual IP address. We will use HAProxy to load balance incoming requests to the services across all controller nodes.

Getting ready

In this section, we will use the virtual IP addresses we created for the services with Pacemaker and HAProxy in previous sections. We will also configure the OpenStack services to use the highly available Galera-clustered database and RabbitMQ with mirrored queues. This is an example for the Keystone service. Please refer to the Packt website URL here for the complete configuration of all OpenStack services.

How to do it...

Perform the following steps on all controller nodes:

Install the Keystone service on all controller nodes:

# yum install -y openstack-keystone openstack-utils openstack-selinux

Generate a Keystone service token on controller1 and copy it to controller2 and controller3 using scp:

[root@controller1 ~]# export SERVICE_TOKEN=$(openssl rand -hex 10)
[root@controller1 ~]# echo $SERVICE_TOKEN > ~/keystone_admin_token
[root@controller1 ~]# scp ~/keystone_admin_token root@controller2:~/keystone_admin_token
[root@controller1 ~]# scp ~/keystone_admin_token root@controller3:~/keystone_admin_token

Export the Keystone service token on controller2 and controller3 as well:

[root@controller2 ~]# export SERVICE_TOKEN=$(cat ~/keystone_admin_token)
[root@controller3 ~]# export SERVICE_TOKEN=$(cat ~/keystone_admin_token)

Note: Perform the following commands on all controller nodes.
Configure the Keystone service on all controller nodes to use vip-rabbitmq:

# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $SERVICE_TOKEN
# openstack-config --set /etc/keystone/keystone.conf DEFAULT rabbit_host vip-rabbitmq

Configure the Keystone service endpoints to point to the Keystone virtual IP:

# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_endpoint 'http://vip-keystone:%(admin_port)s/'
# openstack-config --set /etc/keystone/keystone.conf DEFAULT public_endpoint 'http://vip-keystone:%(public_port)s/'

Configure Keystone to connect to the SQL database using the Galera cluster virtual IP:

# openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:keystonetest@vip-mysql/keystone
# openstack-config --set /etc/keystone/keystone.conf database max_retries -1

On controller1, create the Keystone PKI and sync the database:

[root@controller1 ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
[root@controller1 ~]# chown -R keystone:keystone /var/log/keystone /etc/keystone/ssl/
[root@controller1 ~]# su keystone -s /bin/sh -c "keystone-manage db_sync"

Using rsync, copy the Keystone SSL certificates from controller1 to controller2 and controller3:

[root@controller1 ~]# rsync -av /etc/keystone/ssl/ controller2:/etc/keystone/ssl/
[root@controller1 ~]# rsync -av /etc/keystone/ssl/ controller3:/etc/keystone/ssl/

Make sure that the keystone user owns the newly copied files on controller2 and controller3:

[root@controller2 ~]# chown -R keystone:keystone /etc/keystone/ssl/
[root@controller3 ~]# chown -R keystone:keystone /etc/keystone/ssl/

Create a systemd resource for the Keystone service, using --clone to ensure it runs in an active-active configuration:

[root@controller1 ~]# pcs resource create keystone systemd:openstack-keystone op monitor start-delay=10s --clone

Create the endpoint and user account for Keystone with the Keystone VIP, as given:

[root@controller1 ~]# export SERVICE_ENDPOINT="http://vip-keystone:35357/v2.0"
[root@controller1 ~]# keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
[root@controller1 ~]# keystone endpoint-create --service keystone --publicurl 'http://vip-keystone:5000/v2.0' --adminurl 'http://vip-keystone:35357/v2.0' --internalurl 'http://vip-keystone:5000/v2.0'
[root@controller1 ~]# keystone user-create --name admin --pass keystonetest
[root@controller1 ~]# keystone role-create --name admin
[root@controller1 ~]# keystone tenant-create --name admin
[root@controller1 ~]# keystone user-role-add --user admin --role admin --tenant admin

On all controller nodes, create a keystonerc_admin file with the OpenStack admin credentials, using the Keystone VIP:

cat > ~/keystonerc_admin << EOF
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=keystonetest
export OS_AUTH_URL=http://vip-keystone:35357/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
EOF

Source the keystonerc_admin credentials file to be able to run authenticated OpenStack commands:

[root@controller1 ~]# source ~/keystonerc_admin

At this point, you should be able to execute Keystone commands and create the Services tenant:

[root@controller1 ~]# keystone tenant-create --name services --description "Services Tenant"

Summary

In this article, we have covered the installation of Pacemaker and HAProxy, the configuration of a Galera cluster for MariaDB, the installation of RabbitMQ with mirrored queues, and the configuration of highly available OpenStack services.
Resources for Article:

Further resources on this subject:
Using the OpenStack Dashboard [article]
Installing OpenStack Swift [article]
Architecture and Component Overview [article]
Building Queries
Packt
12 Dec 2013
10 min read
(For more resources related to this topic, see here.)

Understanding DQL

DQL stands for Doctrine Query Language. It's a domain-specific language that is very similar to SQL, but is not SQL. Instead of querying the database tables and rows, DQL is designed to query the object model's entities and mapped properties. DQL is inspired by and similar to HQL, the query language of Hibernate, a popular ORM for Java. For more details you can visit this website: http://www.hibernate.org/. Learn more about domain-specific languages at: http://en.wikipedia.org/wiki/Domain-specific_language

To better understand what this means, let's run our first DQL query. Doctrine command-line tools are as versatile as a Swiss Army knife. They include a command called orm:run-dql that runs a DQL query and displays its result. Use it to retrieve the title and all the comments of the post with 1 as an identifier:

php vendor/bin/doctrine.php orm:run-dql "SELECT p.title, c.body FROM Blog\Entity\Post p JOIN p.comments c WHERE p.id = 1"

It looks like a SQL query, but it's definitely not a SQL query. Examine the FROM and the JOIN clauses; they contain the following aspects:

A fully qualified entity class name is used in the FROM clause as the root of the query
All the Comment entities associated with the selected Post entities are joined, thanks to the presence of the comments property of the Post entity class in the JOIN clause

As you can see, data from the entities associated with the main entity can be requested in an object-oriented way. Properties holding the associations (on the owning or the inverse side) can be used in the JOIN clause. Despite some limitations (especially in the field of subqueries), DQL is a powerful and flexible language for retrieving object graphs. Internally, Doctrine parses the DQL queries, generates the corresponding SQL queries, executes them through the Database Abstraction Layer (DBAL), and hydrates the data structures with the results.

Until now, we have only used Doctrine to retrieve PHP objects. Doctrine is able to hydrate other types of data structures, especially arrays and basic types. It's also possible to write custom hydrators to populate any data structure. If you look closely at the return of the previous call of orm:run-dql, you'll see that it's an array, and not an object graph, that has been hydrated. As with all the topics covered in this book, more information about built-in hydration modes and custom hydrators is available in the Doctrine documentation on the following website: http://docs.doctrine-project.org/en/latest/reference/dql-doctrine-query-language.html#hydration-modes

Using the entity repositories

Entity repositories are classes responsible for accessing and managing entities. Just as entities are related to database rows, entity repositories are related to database tables. All the DQL queries should be written in the entity repository related to the entity type they retrieve. This hides the ORM from other components of the application and makes it easier to re-use, refactor, and optimize the queries. Doctrine entity repositories are an implementation of the Table Data Gateway design pattern. For more details, visit the following website: http://martinfowler.com/eaaCatalog/tableDataGateway.html

A base repository, available for every entity, provides useful methods for managing the entities in the following manner:

find($id): It returns the entity with $id as an identifier, or null if none is found. It is used internally by the find() method of the Entity Manager.
findAll(): It retrieves an array that contains all the entities in this repository
findBy(['property1' => 'value', 'property2' => 1], ['property3' => 'DESC', 'property4' => 'ASC']): It retrieves an array that contains entities matching all the criteria passed in the first parameter and ordered by the second parameter
findOneBy(['property1' => 'value', 'property2' => 1]): It is similar to findBy(), but retrieves only the first entity, or null if none of the entities match the criteria

Entity repositories also provide shortcut methods that allow a single property to filter entities. They follow this pattern: findBy*() and findOneBy*(). For instance, calling findByTitle('My title') is equivalent to calling findBy(['title' => 'My title']). This feature uses the magical __call() PHP method. For more details visit the following website: http://php.net/manual/en/language.oop5.overloading.php#object.call

In our blog app, we want to display comments in the detailed post view, but it is not necessary to fetch them for the list of posts. Eager loading through the fetch attribute is not a good choice for the list, and lazy loading slows down the detailed view. A solution to this would be to create a custom repository with extra methods for executing our own queries. We will write a custom method that retrieves a post together with its comments for the detailed view.

Creating custom entity repositories

Custom entity repositories are classes extending the base entity repository class provided by Doctrine. They are designed to receive custom methods that run the DQL queries. As usual, we will use the mapping information to tell Doctrine to use a custom repository class. This is the role of the repositoryClass attribute of the @Entity annotation. Kindly perform the following steps to create a custom entity repository:

Reopen the Post.php file at the src/Blog/Entity/ location and add a repositoryClass attribute to the existing @Entity annotation like the following line of code:

@Entity(repositoryClass="PostRepository")

Doctrine command-line tools also provide an entity repository generator. Type the following command to use it:

php vendor/bin/doctrine.php orm:generate:repositories src/

Open this new empty custom repository, which we just generated in the PostRepository.php file, at the src/Blog/Entity/ location. Add the following method for retrieving the posts and comments:

/**
 * Finds a post with its comments
 *
 * @param int $id
 * @return Post
 */
public function findWithComments($id)
{
    return $this
        ->createQueryBuilder('p')
        ->addSelect('c')
        ->leftJoin('p.comments', 'c')
        ->where('p.id = :id')
        ->orderBy('c.publicationDate', 'ASC')
        ->setParameter('id', $id)
        ->getQuery()
        ->getOneOrNullResult()
    ;
}

Our custom repository extends the default entity repository provided by Doctrine. The standard methods, described earlier in the article, are still available.

Getting started with Query Builder

QueryBuilder is an object designed to help build DQL queries through a PHP API with a fluent interface. It allows us to retrieve the generated DQL query through the getDQL() method (useful for debugging) or to directly use the Query object (provided by Doctrine). To increase performance, QueryBuilder caches the generated DQL queries and manages an internal state.
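To see what the builder produces, you can dump the generated DQL before executing it. The following is a small debugging sketch, not taken from the book, that rebuilds the same query as findWithComments() and prints its DQL; it assumes the entities of our blog app:

$qb = $entityManager
    ->getRepository('Blog\Entity\Post')
    ->createQueryBuilder('p')
    ->addSelect('c')
    ->leftJoin('p.comments', 'c')
    ->where('p.id = :id')
    ->setParameter('id', 1);

// Inspect the DQL string Doctrine will parse
echo $qb->getDQL();

// Then execute it as usual
$post = $qb->getQuery()->getOneOrNullResult();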
The full API and states of the DQL query are documented on the following website: http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/query-builder.html

We will give an in-depth explanation of the findWithComments() method that we created in the PostRepository class.

Firstly, a QueryBuilder instance is created with the createQueryBuilder() method inherited from the base entity repository. The createQueryBuilder() method takes a string as a parameter. This string will be used as an alias of the main entity class. By default, all the fields of the main entity class are selected and no other clauses except SELECT and FROM are populated.

The leftJoin() call creates a JOIN clause that retrieves comments associated with the posts. Its first argument is the property to join and its second is the alias; these will be used in the query for the joined entity class (here, the letter c will be used as an alias for the Comment class). Unlike the SQL JOIN clause, the DQL query automatically fetches the entities associated with the main entity. There is no need for keywords like ON or USING. Doctrine automatically knows whether a join table or a foreign-key column must be used.

The addSelect() call appends comment data to the SELECT clause. The alias of the entity class is used to retrieve all the fields (this is similar to the * operator in SQL). As in the first DQL query of this article, specific fields can be retrieved with the notation alias.propertyName.

You guessed it, the call to the where() method sets the WHERE part of the query. Under the hood, Doctrine uses prepared SQL statements. They are more efficient than standard SQL queries. The id parameter will be populated by the value set by the call to setParameter(). Thanks again to prepared statements and this setParameter() method, SQL Injection attacks are automatically avoided.

SQL Injection attacks are a way to execute malicious SQL queries using user inputs that have not been escaped. Let's take the following example of a bad DQL query to check if a user has a specific role:

$query = $entityManager->createQuery('SELECT ur FROM UserRole ur WHERE ur.username = "' . $username . '" AND ur.role = "' . $role . '"');
$hasRole = count($query->getResult());

This DQL query will be translated into SQL by Doctrine. If someone types the following username:

" OR "a"="a

the SQL code contained in the string will be injected and the query will always return some results. The attacker has now gained access to a private area. The proper way would be to use the following code:

$query = $entityManager->createQuery("SELECT ur FROM UserRole ur WHERE ur.username = :username AND ur.role = :role");
$query->setParameters([
    'username' => $username,
    'role' => $role
]);
$hasRole = count($query->getResult());

Thanks to prepared statements, special characters (like quotes) contained in the username are not dangerous, and this snippet will work as expected.

The orderBy() call generates an ORDER BY clause that orders the results by the publication date of the comments, oldest first. Most SQL instructions also have an object-oriented equivalent in DQL. The most common join types can be made using DQL; they generally have the same name.

The getQuery() call tells the Query Builder to generate the DQL query (retrieving it from its internal cache if possible), to instantiate a Doctrine Query object, and to populate it with the generated DQL query.
This generated DQL query will be as follows:

SELECT p, c FROM Blog\Entity\Post p LEFT JOIN p.comments c WHERE p.id = :id ORDER BY c.publicationDate ASC

The Query object exposes another useful method for the purpose of debugging: getSql(). As its name implies, getSql() returns the SQL query corresponding to the DQL query, which Doctrine will run on the DBMS. For our DQL query, the underlying SQL query is as follows:

SELECT p0_.id AS id0, p0_.title AS title1, p0_.body AS body2, p0_.publicationDate AS publicationDate3, c1_.id AS id4, c1_.body AS body5, c1_.publicationDate AS publicationDate6, c1_.post_id AS post_id7 FROM Post p0_ LEFT JOIN Comment c1_ ON p0_.id = c1_.post_id WHERE p0_.id = ? ORDER BY c1_.publicationDate ASC

The getOneOrNullResult() method executes it, retrieves the first result, and returns it as a Post entity instance (this method returns null if no result is found). Like the QueryBuilder object, the Query object manages an internal state to generate the underlying SQL query only when necessary.

Performance is something to be very careful about while using Doctrine. When set in production mode, the ORM is able to cache the generated queries (DQL through the QueryBuilder objects, SQL through the Query objects) and the results of the queries. The ORM must be configured to use one of the blazing-fast supported systems (APC, Memcache, XCache, or Redis), as shown on the following website: http://docs.doctrine-project.org/en/latest/reference/caching.html

We still need to update the view layer to take care of our new findWithComments() method. Open the view-post.php file at the web/ location, where you will find the following code snippet:

$post = $entityManager->getRepository('Blog\Entity\Post')->find($_GET['id']);

Replace the preceding line of code with the following code snippet:

$post = $entityManager->getRepository('Blog\Entity\Post')->findWithComments($_GET['id']);
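To make the caching advice above more concrete, here is a hedged sketch, not from the book, of how the caches can be wired up on the Doctrine\ORM\Configuration object used when creating the EntityManager; it assumes the APC extension is installed:

// Enable APC-backed caches for metadata, DQL parsing, and query results
$cache = new \Doctrine\Common\Cache\ApcCache();
$config->setMetadataCacheImpl($cache); // parsed mapping metadata
$config->setQueryCacheImpl($cache);    // DQL-to-SQL translation cache
$config->setResultCacheImpl($cache);   // optional cache for query results

// Individual queries must opt in to the result cache:
$query->useResultCache(true, 3600);    // cache this query's results for one hour

In development, it is better to keep these caches disabled (or to use ArrayCache) so that mapping changes are picked up immediately.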

Reconstructing 3D Scenes
Packt
13 Oct 2016
25 min read
In this article by Robert Laganiere, the author of the book OpenCV 3 Computer Vision Application Programming Cookbook Third Edition, we will cover the following recipes:

Calibrating a camera
Recovering camera pose

(For more resources related to this topic, see here.)

Digital image formation

Let us now redraw a new version of the figure describing the pin-hole camera model. More specifically, we want to demonstrate the relation between a point in 3D at position (X,Y,Z) and its image (x,y) on a camera, specified in pixel coordinates.

Note the changes that have been made to the original figure. First, a reference frame was positioned at the center of the projection; then, the Y-axis was aligned to point downwards to get a coordinate system that is compatible with the usual convention, which places the image origin at the upper-left corner of the image. Finally, we have identified a special point on the image plane, by considering the line coming from the focal point that is orthogonal to the image plane. The point (u0,v0) is the pixel position at which this line pierces the image plane and is called the principal point. It would be logical to assume that this principal point is at the center of the image plane, but in practice, this one might be off by a few pixels depending on the precision of the camera.

Since we are dealing with digital images, the number of pixels on the image plane (its resolution) is another important characteristic of a camera. We learned previously that a 3D point (X,Y,Z) will be projected onto the image plane at (fX/Z,fY/Z). Now, if we want to translate this coordinate into pixels, we need to divide the 2D image position by the pixel width (px) and then by the height (py). Note that by dividing the focal length given in world units (generally given in millimeters) by px, we obtain the focal length expressed in (horizontal) pixels. We will then define this term as fx. Similarly, fy = f/py is defined as the focal length expressed in vertical pixel units. The complete projective equation is therefore as shown:

x = fx*X/Z + u0
y = fy*Y/Z + v0

We know that (u0,v0) is the principal point that is added to the result in order to move the origin to the upper-left corner of the image. Also, the physical size of a pixel can be obtained by dividing the size of the image sensor (generally in millimeters) by the number of pixels (horizontally or vertically). In modern sensors, pixels are generally of square shape, that is, they have the same horizontal and vertical size.

The preceding equations can be rewritten in matrix form. Here is the complete projective equation in its most general form, using homogeneous coordinates, the 3x3 matrix of intrinsic parameters, and the rotation-translation matrix made of the entries r1 to r9 and t1 to t3:

s [x]   [fx  0  u0] [r1 r2 r3 t1] [X]
  [y] = [ 0 fy  v0] [r4 r5 r6 t2] [Y]
  [1]   [ 0  0   1] [r7 r8 r9 t3] [Z]
                                  [1]

Calibrating a camera

Camera calibration is the process by which the different camera parameters (that is, the ones appearing in the projective equation) are obtained. One can obviously use the specifications provided by the camera manufacturer, but for some tasks, such as 3D reconstruction, these specifications are not accurate enough. However, accurate calibration information can be obtained by undertaking an appropriate camera calibration step. An active camera calibration procedure will proceed by showing known patterns to the camera and analyzing the obtained images. An optimization process will then determine the optimal parameter values that explain the observations. This is a complex process that has been made easy by the availability of OpenCV calibration functions.

How to do it...

To calibrate a camera, the idea is to show it a set of scene points for which the 3D positions are known.
Then, you need to observe where these points project on the image. With the knowledge of a sufficient number of 3D points and associated 2D image points, the exact camera parameters can be inferred from the projective equation. Obviously, for accurate results, we need to observe as many points as possible. One way to achieve this would be to take a picture of a scene with known 3D points, but in practice, this is rarely feasible. A more convenient way is to take several images of a set of 3D points from different viewpoints. This approach is simpler, but it requires you to compute the position of each camera view in addition to the computation of the internal camera parameters, which is fortunately feasible.

OpenCV proposes that you use a chessboard pattern to generate the set of 3D scene points required for calibration. This pattern creates points at the corners of each square, and since this pattern is flat, we can freely assume that the board is located at Z=0, with the X and Y axes well aligned with the grid. In this case, the calibration process simply consists of showing the chessboard pattern to the camera from different viewpoints. The following is an example of a calibration pattern image made of 7x5 inner corners as captured during the calibration step:

The good thing is that OpenCV has a function that automatically detects the corners of this chessboard pattern. You simply provide an image and the size of the chessboard used (the number of horizontal and vertical inner corner points). The function will return the position of these chessboard corners on the image. If the function fails to find the pattern, then it simply returns false, as shown:

//output vectors of image points
std::vector<cv::Point2f> imageCorners;
//number of inner corners on the chessboard
cv::Size boardSize(7,5);
//Get the chessboard corners
bool found = cv::findChessboardCorners(image,         // image of chessboard pattern
                                       boardSize,     // size of pattern
                                       imageCorners); // list of detected corners

The output parameter, imageCorners, will simply contain the pixel coordinates of the detected inner corners of the shown pattern. Note that this function accepts additional parameters if you need to tune the algorithm, which are not discussed here. There is also a special function that draws the detected corners on the chessboard image, with lines connecting them in a sequence:

//Draw the corners
cv::drawChessboardCorners(image, boardSize, imageCorners, found); // corners have been found

The following image is obtained:

The lines that connect the points show the order in which the points are listed in the vector of detected image points. To perform a calibration, we now need to specify the corresponding 3D points. You can specify these points in the units of your choice (for example, in centimeters or in inches); however, the simplest approach is to assume that each square represents one unit. In that case, the coordinates of the first point would be (0,0,0) (assuming that the board is located at a depth of Z=0), the coordinates of the second point would be (1,0,0), and so on, the last point being located at (6,4,0). There are a total of 35 points in this pattern, which is too few to obtain an accurate calibration. To get more points, you need to show more images of the same calibration pattern from various points of view. To do so, you can either move the pattern in front of the camera or move the camera around the board; from a mathematical point of view, this is completely equivalent.
The OpenCV calibration function assumes that the reference frame is fixed on the calibration pattern and will calculate the rotation and translation of the camera with respect to the reference frame. Let's now encapsulate the calibration process in a CameraCalibrator class. The attributes of this class are as follows:

// input points:
// the points in world coordinates
// (each square is one unit)
std::vector<std::vector<cv::Point3f>> objectPoints;
// the image point positions in pixels
std::vector<std::vector<cv::Point2f>> imagePoints;
// output Matrices
cv::Mat cameraMatrix;
cv::Mat distCoeffs;
// flag to specify how calibration is done
int flag;

Note that the input vectors of the scene and image points are in fact made of std::vector of point instances; each vector element is a vector of the points from one view. Here, we decided to add the calibration points by specifying a vector of the chessboard image filenames as input; the method will take care of extracting the point coordinates from the images:

// Open chessboard images and extract corner points
int CameraCalibrator::addChessboardPoints(
    const std::vector<std::string>& filelist, // list of filenames
    cv::Size & boardSize) {                   // calibration board size

  // the points on the chessboard
  std::vector<cv::Point2f> imageCorners;
  std::vector<cv::Point3f> objectCorners;

  // 3D Scene Points:
  // Initialize the chessboard corners
  // in the chessboard reference frame
  // The corners are at 3D location (X,Y,Z)= (i,j,0)
  for (int i=0; i<boardSize.height; i++) {
    for (int j=0; j<boardSize.width; j++) {
      objectCorners.push_back(cv::Point3f(i, j, 0.0f));
    }
  }

  // 2D Image points:
  cv::Mat image; // to contain chessboard image
  int successes = 0;
  // for all viewpoints
  for (int i=0; i<filelist.size(); i++) {
    // Open the image
    image = cv::imread(filelist[i],0);
    // Get the chessboard corners
    bool found = cv::findChessboardCorners(image,         // image of chessboard pattern
                                           boardSize,     // size of pattern
                                           imageCorners); // list of detected corners
    // Get subpixel accuracy on the corners
    if (found) {
      cv::cornerSubPix(image, imageCorners,
        cv::Size(5, 5),   // half size of search window
        cv::Size(-1, -1),
        cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS,
          30,     // max number of iterations
          0.1));  // min accuracy
      // If we have a good board, add it to our data
      if (imageCorners.size() == boardSize.area()) {
        // Add image and scene points from one view
        addPoints(imageCorners, objectCorners);
        successes++;
      }
    }
  }
  return successes;
}

The first loop inputs the 3D coordinates of the chessboard, and the corresponding image points are the ones provided by the cv::findChessboardCorners function; this is done for all the available viewpoints. Moreover, in order to obtain a more accurate image point location, the cv::cornerSubPix function can be used, and as the name suggests, the image points will then be localized at subpixel accuracy. The termination criterion that is specified by the cv::TermCriteria object defines the maximum number of iterations and the minimum accuracy in subpixel coordinates. The first of these two conditions to be reached will stop the corner refinement process. When a set of chessboard corners has been successfully detected, these points are added to the vectors of the image and scene points using our addPoints method.
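The addPoints method referenced above is not shown in this extract; a plausible minimal implementation (an assumption, as the book's version may differ slightly) simply appends one view's detected corners and their 3D counterparts to the class attributes:

// Add one view's scene points and corresponding image points
void CameraCalibrator::addPoints(const std::vector<cv::Point2f> &imageCorners,
                                 const std::vector<cv::Point3f> &objectCorners) {
  // 2D image points from one view
  imagePoints.push_back(imageCorners);
  // corresponding 3D scene points
  objectPoints.push_back(objectCorners);
}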
Once a sufficient number of chessboard images have been processed (and consequently, a large number of 3D scene point / 2D image point correspondences are available), we can initiate the computation of the calibration parameters as shown:

// Calibrate the camera
// returns the re-projection error
double CameraCalibrator::calibrate(cv::Size &imageSize)
{
  // Output rotations and translations
  std::vector<cv::Mat> rvecs, tvecs;
  // start calibration
  return calibrateCamera(objectPoints, // the 3D points
                         imagePoints,  // the image points
                         imageSize,    // image size
                         cameraMatrix, // output camera matrix
                         distCoeffs,   // output distortion matrix
                         rvecs, tvecs, // Rs, Ts
                         flag);        // set options
}

In practice, 10 to 20 chessboard images are sufficient, but these must be taken from different viewpoints at different depths. The two important outputs of this function are the camera matrix and the distortion parameters. These will be described in the next section.

How it works...

In order to explain the result of the calibration, we need to go back to the projective equation presented in the introduction of this article. This equation describes the transformation of a 3D point into a 2D point through the successive application of two matrices. The first matrix includes all of the camera parameters, which are called the intrinsic parameters of the camera. This 3x3 matrix is one of the output matrices returned by the cv::calibrateCamera function. There is also a function called cv::calibrationMatrixValues that explicitly returns the value of the intrinsic parameters given by a calibration matrix.

The second matrix is there to have the input points expressed in camera-centric coordinates. It is composed of a rotation component (a 3x3 matrix) and a translation vector (a 3x1 matrix). Remember that in our calibration example, the reference frame was placed on the chessboard. Therefore, there is a rigid transformation (made of a rotation component represented by the matrix entries r1 to r9 and a translation represented by t1, t2, and t3) that must be computed for each view. These are in the output parameter list of the cv::calibrateCamera function. The rotation and translation components are often called the extrinsic parameters of the calibration and they are different for each view. The intrinsic parameters remain constant for a given camera/lens system.

The calibration results provided by cv::calibrateCamera are obtained through an optimization process. This process aims to find the intrinsic and extrinsic parameters that minimize the difference between the predicted image point positions, as computed from the projection of the 3D scene points, and the actual image point positions, as observed on the image. The sum of this difference for all the points specified during the calibration is called the re-projection error.

The intrinsic parameters of our test camera obtained from a calibration based on the 27 chessboard images are fx=409 pixels; fy=408 pixels; u0=237; and v0=171. Our calibration images have a size of 536x356 pixels. From the calibration results, you can see that, as expected, the principal point is close to the center of the image, but yet off by a few pixels. The calibration images were taken using a Nikon D500 camera with an 18mm lens. Looking at the manufacturer's specifications, we find that the sensor size of this camera is 23.5mm x 15.7mm, which gives us a pixel size of 0.0438mm.
The estimated focal length is expressed in pixels, so multiplying the result by the pixel size gives us an estimated focal length of 17.8mm, which is consistent with the actual lens we used.

Let us now turn our attention to the distortion parameters. So far, we have mentioned that under the pin-hole camera model, we can neglect the effect of the lens. However, this is only possible if the lens that is used to capture an image does not introduce important optical distortions. Unfortunately, this is not the case with lower quality lenses or with lenses that have a very short focal length. Even the lens we used in this experiment introduced some distortion, that is, the edges of the rectangular board are curved in the image. Note that this distortion becomes more important as we move away from the center of the image. This is a typical distortion observed with a fish-eye lens and is called radial distortion.

It is possible to compensate for these deformations by introducing an appropriate distortion model. The idea is to represent the distortions induced by a lens by a set of mathematical equations. Once established, these equations can then be reverted in order to undo the distortions visible on the image. Fortunately, the exact parameters of the transformation, which will correct the distortions, can be obtained together with the other camera parameters during the calibration phase. Once this is done, any image from the newly calibrated camera will be undistorted. Therefore, we have added an additional method to our calibration class:

//remove distortion in an image (after calibration)
cv::Mat CameraCalibrator::remap(const cv::Mat &image)
{
  cv::Mat undistorted;
  if (mustInitUndistort) { //called once per calibration
    cv::initUndistortRectifyMap(
      cameraMatrix, // computed camera matrix
      distCoeffs,   // computed distortion matrix
      cv::Mat(),    // optional rectification (none)
      cv::Mat(),    // camera matrix to generate undistorted
      image.size(), // size of undistorted
      CV_32FC1,     // type of output map
      map1, map2);  // the x and y mapping functions
    mustInitUndistort = false;
  }
  // Apply mapping functions
  cv::remap(image, undistorted, map1, map2,
            cv::INTER_LINEAR); // interpolation type
  return undistorted;
}

Running this code on one of our calibration images results in the following undistorted image:

To correct the distortion, OpenCV uses a polynomial function that is applied to the image points in order to move them to their undistorted position. By default, five coefficients are used; a model made of eight coefficients is also available. Once these coefficients are obtained, it is possible to compute two cv::Mat mapping functions (one for the x coordinate and one for the y coordinate) that will give the new undistorted position of an image point on a distorted image. This is computed by the cv::initUndistortRectifyMap function, and the cv::remap function remaps all the points of an input image to a new image. Note that because of the nonlinear transformation, some pixels of the input image now fall outside the boundary of the output image. You can expand the size of the output image to compensate for this loss of pixels, but you now obtain output pixels that have no values in the input image (they will then be displayed as black pixels).

There's more...

More options are available when it comes to camera calibration.

Calibration with known intrinsic parameters

When a good estimate of the camera's intrinsic parameters is known, it could be advantageous to input them in the cv::calibrateCamera function.
They will then be used as initial values in the optimization process. To do so, you just need to add the cv::CALIB_USE_INTRINSIC_GUESS flag and input these values in the calibration matrix parameter. It is also possible to impose a fixed value for the principal point (cv::CALIB_FIX_PRINCIPAL_POINT), which can often be assumed to be the central pixel. You can also impose a fixed ratio for the focal lengths fx and fy (cv::CALIB_FIX_ASPECT_RATIO); in this case, you assume that the pixels have a square shape.

Using a grid of circles for calibration

Instead of the usual chessboard pattern, OpenCV also offers the possibility to calibrate a camera by using a grid of circles. In this case, the centers of the circles are used as calibration points. The corresponding function is very similar to the function we used to locate the chessboard corners, for example:

cv::Size boardSize(7,7);
std::vector<cv::Point2f> centers;
bool found = cv::findCirclesGrid(image, boardSize, centers);

See also

The A flexible new technique for camera calibration article by Z. Zhang, in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, 2000, is a classic paper on the problem of camera calibration.

Recovering camera pose

When a camera is calibrated, it becomes possible to relate the captured image with the outside world. If the 3D structure of an object is known, then one can predict how the object will be imaged on the sensor of the camera. The process of image formation is indeed completely described by the projective equation that was presented at the beginning of this article. When most of the terms of this equation are known, it becomes possible to infer the value of the other elements (2D or 3D) through the observation of some images. In this recipe, we will look at the camera pose recovery problem when a known 3D structure is observed.

How to do it...

Let's consider a simple object here, a bench in a park. We took an image of it using the camera/lens system calibrated in the previous recipe. We have manually identified 8 distinct image points on the bench that we will use for our camera pose estimation. Having access to this object makes it possible to make some physical measurements. The bench is composed of a seat of size 242.5cm x 53.5cm x 9cm and a back of size 242.5cm x 24cm x 9cm that is fixed 12cm over the seat. Using this information, we can then easily derive the 3D coordinates of the eight identified points in an object-centric reference frame (here, we fixed the origin at the left extremity of the intersection between the two planes). We can then create a vector of cv::Point3f containing these coordinates:

//Input object points
std::vector<cv::Point3f> objectPoints;
objectPoints.push_back(cv::Point3f(0, 45, 0));
objectPoints.push_back(cv::Point3f(242.5, 45, 0));
objectPoints.push_back(cv::Point3f(242.5, 21, 0));
objectPoints.push_back(cv::Point3f(0, 21, 0));
objectPoints.push_back(cv::Point3f(0, 9, -9));
objectPoints.push_back(cv::Point3f(242.5, 9, -9));
objectPoints.push_back(cv::Point3f(242.5, 9, 44.5));
objectPoints.push_back(cv::Point3f(0, 9, 44.5));

The question now is where the camera was with respect to these points when the shown picture was taken. Since the coordinates of the image of these known points on the 2D image plane are also known, it becomes easy to answer this question using the cv::solvePnP function.
Here, the correspondence between the 3D and the 2D points has been established manually, but as a reader of this book, you should be able to come up with methods that would allow you to obtain this information automatically:

//Input image points
std::vector<cv::Point2f> imagePoints;
imagePoints.push_back(cv::Point2f(136, 113));
imagePoints.push_back(cv::Point2f(379, 114));
imagePoints.push_back(cv::Point2f(379, 150));
imagePoints.push_back(cv::Point2f(138, 135));
imagePoints.push_back(cv::Point2f(143, 146));
imagePoints.push_back(cv::Point2f(381, 166));
imagePoints.push_back(cv::Point2f(345, 194));
imagePoints.push_back(cv::Point2f(103, 161));

// Get the camera pose from 3D/2D points
cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints,      // corresponding 3D/2D pts
             cameraMatrix, cameraDistCoeffs, // calibration
             rvec, tvec);                    // output pose

// Convert to 3D rotation matrix
cv::Mat rotation;
cv::Rodrigues(rvec, rotation);

This function computes the rigid transformation (rotation and translation) that brings the object coordinates into the camera-centric reference frame (that is, the one that has its origin at the focal point). It is also important to note that the rotation computed by this function is given in the form of a 3D vector. This is a compact representation in which the rotation to apply is described by a unit vector (an axis of rotation) around which the object is rotated by a certain angle. This axis-angle representation is also called the Rodrigues' rotation formula. In OpenCV, the angle of rotation corresponds to the norm of the output rotation vector, which is aligned with the axis of rotation. This is why the cv::Rodrigues function is used to obtain the 3D matrix of rotation that appears in our projective equation.

The pose recovery procedure described here is simple, but how do we know we obtained the right camera/object pose information? We can visually assess the quality of the results using the cv::viz module, which gives us the ability to visualize 3D information. The use of this module is explained in the last section of this recipe. For now, let's display a simple 3D representation of our object and the camera that captured it:

It might be difficult to judge the quality of the pose recovery just by looking at this image, but if you test the example of this recipe on your computer, you will have the possibility to move this representation in 3D using your mouse, which should give you a better sense of the solution obtained.

How it works...

In this recipe, we assumed that the 3D structure of the object was known, as well as the correspondence between sets of object points and image points. The camera's intrinsic parameters were also known through calibration. If you look at our projective equation presented at the end of the Digital image formation section of the introduction of this article, this means that we have points for which the coordinates (X,Y,Z) and (x,y) are known. We also have the elements of the first matrix known (the intrinsic parameters). Only the second matrix is unknown; this is the one that contains the extrinsic parameters of the camera, that is, the camera/object pose information. Our objective is to recover these unknown parameters from the observation of 3D scene points. This problem is known as the Perspective-n-Point problem or PnP problem.

Rotation has three degrees of freedom (for example, the angle of rotation around the three axes) and translation also has three degrees of freedom. We therefore have a total of 6 unknowns.
For each object point/image point correspondence, the projective equation gives us three algebraic equations, but since the projective equation is up to a scale factor, we only have 2 independent equations. A minimum of three points is therefore required to solve this system of equations. Obviously, more points provide a more reliable estimate.

In practice, many different algorithms have been proposed to solve this problem, and OpenCV proposes a number of different implementations in its cv::solvePnP function. The default method consists of optimizing what is called the reprojection error. Minimizing this type of error is considered to be the best strategy to get accurate 3D information from camera images. In our problem, it corresponds to finding the optimal camera position that minimizes the 2D distance between the projected 3D points (as obtained by applying the projective equation) and the observed image points given as input.

Note that OpenCV also has a cv::solvePnPRansac function. As the name suggests, this function uses the RANSAC algorithm in order to solve the PnP problem. This means that some of the object point/image point correspondences may be wrong, and the function will return which ones have been identified as outliers. This is very useful when these correspondences have been obtained through an automatic process that can fail for some points.

There's more...

When working with 3D information, it is often difficult to validate the solutions obtained. To this end, OpenCV offers a simple yet powerful visualization module that facilitates the development and debugging of 3D vision algorithms. It allows inserting points, lines, cameras, and other objects in a virtual 3D environment that you can interactively visualize from various points of view.

cv::Viz, a 3D Visualizer module

cv::viz is an extra module of the OpenCV library that is built on top of the VTK open source library. The Visualization Toolkit (VTK) is a powerful framework used for 3D computer graphics. With cv::viz, you create a 3D virtual environment to which you can add a variety of objects. A visualization window is created that displays the environment from a given point of view. You saw in this recipe an example of what can be displayed in a cv::viz window. This window responds to mouse events that are used to navigate inside the environment (through rotations and translations). This section describes the basic use of the cv::viz module.

The first thing to do is to create the visualization window. Here, we use a white background:

// Create a viz window
cv::viz::Viz3d visualizer("Viz window");
visualizer.setBackgroundColor(cv::viz::Color::white());

Next, you create your virtual objects and insert them into the scene. There is a variety of predefined objects. One of them is particularly useful for us; it is the one that creates a virtual pin-hole camera:

// Create a virtual camera
cv::viz::WCameraPosition cam(cMatrix,  // matrix of intrinsics
                             image,    // image displayed on the plane
                             30.0,     // scale factor
                             cv::viz::Color::black());
// Add the virtual camera to the environment
visualizer.showWidget("Camera", cam);

The cMatrix variable is a cv::Matx33d (that is, a cv::Matx<double,3,3>) instance containing the intrinsic camera parameters as obtained from calibration. By default, this camera is inserted at the origin of the coordinate system. To represent the bench, we used two rectangular cuboid objects.
// Create a virtual bench from cuboids
cv::viz::WCube plane1(cv::Point3f(0.0, 45.0, 0.0),
                      cv::Point3f(242.5, 21.0, -9.0),
                      true,  // show wire frame
                      cv::viz::Color::blue());
plane1.setRenderingProperty(cv::viz::LINE_WIDTH, 4.0);
cv::viz::WCube plane2(cv::Point3f(0.0, 9.0, -9.0),
                      cv::Point3f(242.5, 0.0, 44.5),
                      true,  // show wire frame
                      cv::viz::Color::blue());
plane2.setRenderingProperty(cv::viz::LINE_WIDTH, 4.0);
// Add the virtual objects to the environment
visualizer.showWidget("top", plane1);
visualizer.showWidget("bottom", plane2);

This virtual bench is also added at the origin; it then needs to be moved to its camera-centric position as found by our cv::solvePnP function. It is the responsibility of the setWidgetPose method to perform this operation. This method simply applies the rotation and translation components of the estimated motion:

cv::Mat rotation;
// convert vector-3 rotation
// to a 3x3 rotation matrix
cv::Rodrigues(rvec, rotation);

// Move the bench
cv::Affine3d pose(rotation, tvec);
visualizer.setWidgetPose("top", pose);
visualizer.setWidgetPose("bottom", pose);

The final step is to create a loop that keeps displaying the visualization window. The 1ms pause is there to listen to mouse events:

// visualization loop
while(cv::waitKey(100)==-1 && !visualizer.wasStopped()) {
  visualizer.spinOnce(1,     // pause 1ms
                      true); // redraw
}

This loop will stop when the visualization window is closed or when a key is pressed over an OpenCV image window. Try to apply, inside this loop, some motion on an object (using setWidgetPose); this is how animation can be created.

See also

Model-based object pose in 25 lines of code by D. DeMenthon and L. S. Davis, in European Conference on Computer Vision, 1992, pp. 335-343, is a famous method for recovering camera pose from scene points.

Summary

This article teaches us how, under specific conditions, the 3D structure of the scene and the 3D pose of the cameras that captured it can be recovered. We have seen how a good understanding of projective geometry concepts allows us to devise methods enabling 3D reconstruction.

Resources for Article:

Further resources on this subject:
OpenCV: Image Processing using Morphological Filters [article]
Learn computer vision applications in Open CV [article]
Cardboard is Virtual Reality for Everyone [article]

Measures and Measure Groups in Microsoft Analysis Services: Part 1
Packt
15 Oct 2009
12 min read
In this two-part article by Chris Webb, we will look at measures and measure groups, ways to control how measures aggregate up, and how dimensions can be related to measure groups. In this part, we will cover useful properties of measures, along with built-in measure aggregation types and dimension calculations.

Measures and aggregation

Measures are the numeric values that our users want to aggregate, slice, dice, and otherwise analyze, and as a result, it's important to make sure they behave the way we want them to. One of the fundamental reasons for using Analysis Services is that, unlike a relational database, it allows us to build into our cube design business rules about measures: how they should be formatted, how they should aggregate up, how they interact with specific dimensions, and so on. It's therefore no surprise that we'll spend a lot of our cube development time thinking about measures.

Useful properties of measures

Apart from the AggregateFunction property of a measure, which we'll come to next, there are two other important properties we'll want to set on a measure once we've created it.

Format string

The Format String property of a measure specifies how the raw value of the measure gets formatted when it's displayed in query results. Almost all client tools will display the formatted value of a measure, and this allows us to ensure consistent formatting of a measure across all applications that display data from our cube.

A notable exception is Excel 2003 and earlier versions, which can only display raw measure values and not formatted values. Excel 2007 will display properly formatted measure values in most cases, but not all. For instance, it ignores the fourth section of the Format String, which controls formatting for nulls. Reporting Services can display formatted values in reports, but doesn't by default; this blog entry describes how you can make it do so: http://tinyurl.com/gregformatstring

There are a number of built-in formats that you can choose from, and you can also build your own by using syntax very similar to the one used by Visual Basic for Applications (VBA) for number formatting. The Books Online topic FORMAT_STRING Contents gives a complete description of the syntax used. Here are some points to bear in mind when setting the Format String property:

If you're working with percentage values, using the % symbol will display your values multiplied by one hundred and add a percentage sign to the end. Note that only the displayed value gets multiplied by one hundred; the real value of the measure will not be, so although your user might see a value of 98%, the actual value of the cell would be 0.98.

If you have a measure that returns null values in some circumstances and you want your users to see something other than null, don't try to use an MDX calculation to replace the nulls; this will cause severe query performance problems. You can use the fourth section of the Format String property to do this instead. For example, the Format String #,#.00;#,#.00;0;NA will display the string NA for null values, while keeping the actual cell value as null, without affecting performance.

Be careful while using the Currency built-in format: it will format values with the currency symbol for the locale specified in the Language property of the cube.
This combination of the Currency format and the Language property is frequently recommended for formatting measures that contain monetary values, but setting this property will also affect the way number formats are displayed: for example, in the UK and the USA, the comma is used as a thousands separator, but in continental Europe it is used as a decimal separator. As a result, if you wanted to display a currency value to a user in a locale that didn't use that currency, then you could end up with confusing results. The value €100,101 would be interpreted as a value just over one hundred Euros by a user in France, but in the UK, it would be interpreted as a value of just over one hundred thousand Euros. You can use the desired currency symbol in a Format String instead, for example '$#,#.00', but this will not have an effect on the thousands and decimal separators used, which will always correspond to the Language setting. You can find an example of how to change the Language property using a scoped assignment in the MDX Script here: http://tinyurl.com/gregformatstring

Similarly, while Analysis Services 2008 supports the translation of captions and member names for users in different locales, unlike in previous versions, it will not translate the number formats used. As a result, if your cube might be used by users in different locales, you need to ensure they understand whichever number format the cube is using.

Display folders

Many cubes have a lot of measures on them, and as with dimension hierarchies, it's possible to group measures together into folders to make it easier for your users to find the one they want. Most, but not all, client tools support display folders, so it may be worth checking whether the one you intend to use does.

By default, each measure group in a cube will have its own folder containing all of the measures on the measure group; these top-level measure group folders cannot be removed and it's not possible to make a measure from one measure group appear in a folder under another measure group. By entering a folder name in a measure's Display Folder property, you'll make the measure appear in a folder underneath its measure group with that name; if there isn't already a folder with that name, then one will be created, and folder names are case-sensitive. You can make a measure appear under multiple folders by entering a semicolon-delimited list of names as follows: Folder One; Folder Two. You can also create a folder hierarchy by entering either a forward-slash / or back-slash \ delimited list (the documentation contradicts itself on which is meant to be used; most client tools that support display folders support both) of folder names as follows: Folder One; Folder Two\Folder Three.

Calculated measures defined in the MDX Script can also be associated with a measure group, through the Associated_Measure_Group property, and with a display folder, through the Display_Folder property. These properties can be set either in code or in Form View in the Calculations tab in the Cube Editor.

If you don't associate a calculated measure with a measure group, but do put it in a folder, the folder will appear at the same level as the folders created for each measure group.

Built-in measure aggregation types

The most important property of a measure is AggregateFunction; it controls how the measure aggregates up through each hierarchy in the cube.
When you run an MDX query, you can think of it as being similar to a SQL SELECT statement with a GROUP BY clause; but whereas in SQL you have to specify an aggregate function to control how each column's values get aggregated, in MDX you specify this for each measure when the cube is designed.

Basic aggregation types

Anyone with a passing knowledge of SQL will understand the four basic aggregation types available when setting the AggregateFunction property:

Sum is the commonest aggregation type to use, probably the one you'll use for 90% of all the measures. It means that the values for this measure will be summed up.

Count is another commonly used property value and aggregates either by counting the overall number of rows from the fact table that the measure group is built from (when the Binding Type property, found on the Measure Source dialog that appears when you click on the ellipses button next to the Source property of a measure, is set to Row Binding), or by counting non-null values from a specific measure column (when the Binding Type property is set to Column Binding).

Min and Max return the minimum and maximum measure values.

There isn't a built-in Average aggregation type (as we'll soon see, AverageOfChildren does not do a simple average), but it's very easy to create a calculated measure that returns an average by dividing a measure with AggregateFunction Sum by one with AggregateFunction Count, for example:

CREATE MEMBER CURRENTCUBE.[Measures].[Average Measure Example] AS
IIF([Measures].[Count Measure]=0, NULL,
[Measures].[Sum Measure]/[Measures].[Count Measure]);

Distinct Count

The DistinctCount aggregation type counts the number of distinct values in a column in your fact table, similar to a Count(Distinct) in SQL. It's generally used in scenarios where you're counting some kind of key, for example, finding the number of unique customers who bought a particular product in a given time period. This is, by its very nature, an expensive operation for Analysis Services, and queries that use DistinctCount measures can perform worse than those which use additive measures. It is possible to get distinct count values using MDX calculations, but this almost always performs worse; it is also possible to use many-to-many dimensions to get the same results, and this may perform better in some circumstances; see the section on "Distinct Count" in the "Many to Many Revolution" white paper, available at http://tinyurl.com/m2mrev

When you create a new distinct count measure, BIDS will create a new measure group to hold it automatically. Each distinct count measure needs to be put into its own measure group for query performance reasons, and although it is possible to override BIDS and create a distinct count measure in an existing measure group with measures that have other aggregation types, we strongly recommend that you do not do this.

None

The None aggregation type simply means that no aggregation takes place on the measure at all. Although it might seem that a measure with this aggregation type displays no values at all, that's not true: it only contains values at the lowest possible granularity in the cube, at the intersection of the key attributes of all the dimensions. It's very rarely used, and only makes sense for values such as prices that should never be aggregated.

If you ever find that your cube seems to contain no data even though it has processed successfully, check to see if you have accidentally deleted the Calculate statement from the beginning of your MDX Script.
Without this statement, no aggregation will take place within the cube and you'll only see data at the intersection of the leaves of every dimension, as if every measure had AggregateFunction None.

Semi-additive aggregation types

The semi-additive aggregation types are:

AverageOfChildren
FirstChild
LastChild
FirstNonEmpty
LastNonEmpty

They behave the same as measures with aggregation type Sum on all dimensions except Time dimensions. In order to get Analysis Services to recognize a Time dimension, you need to have set the dimension's Type property to Time in the Dimension Editor. Sometimes you'll have multiple, role-playing Time dimensions in a cube, and if you have semi-additive measures, they'll be semi-additive for just one of these Time dimensions. In this situation, Analysis Services 2008 RTM uses the first Time dimension in the cube that has a relationship with the measure group containing the semi-additive measure. You can control the order of dimensions in the cube by dragging and dropping them in the Dimensions pane in the bottom left-hand corner of the Cube Structure tab of the Cube Editor; the following blog entry describes how to do this in more detail: http://tinyurl.com/gregsemiadd. However, this behavior has changed between versions in the past and may change again in the future.

Semi-additive aggregation is extremely useful when you have a fact table that contains snapshot data. For example, if you had a fact table containing information on the number of items in stock in a warehouse, it would never make sense to aggregate these measures over time: if you had ten widgets in stock on January 1, eleven in stock on January 2, eight on January 3 and so on, the value you would want to display for the whole of January would never be the sum of the number of items in stock on each day of January. The value you do display depends on your organization's business rules.

Let's take a look at what each of the semi-additive aggregation types actually does:

AverageOfChildren displays the average of all the values at the lowest level of granularity on the Time dimension. For example, if Date was the lowest level of granularity, then when looking at a Year value, Analysis Services would display the average value for all days in the year.

FirstChild displays the value of the first time period at the lowest level of granularity, for example, the first day of the year.

LastChild displays the value of the last time period at the lowest level of granularity, for example, the last day of the year.

FirstNonEmpty displays the value of the first time period at the lowest level of granularity that is not empty, for example, the first day of the year that has a value.

LastNonEmpty displays the value of the last time period at the lowest level of granularity that is not empty, for example, the last day of the year that has a value. This is the most commonly used semi-additive type; a good example of its use is a measure group containing data about stock levels in a warehouse, where, when you aggregate along the Time dimension, what you want to see is the amount of stock you had at the end of the current time period.

The following screenshot of an Excel pivot table illustrates how each of these semi-additive aggregation types works. Note that the semi-additive measures only have an effect above the lowest level of granularity on a Time dimension.
For dates like July 17th in the screenshot above, where there is no data for the Sum measure, the LastNonEmpty measure still returns null and not the value of the last non-empty date.
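Since the screenshot itself is not reproduced here, the following small table makes the behavior concrete; the numbers are invented for illustration and are not taken from the original screenshot:

Date       Stock Level with Sum    Stock Level with LastNonEmpty
Jan 1      10                      10
Jan 2      11                      11
Jan 3      8                       8
January    29                      8

With Sum, the January total of 29 is meaningless for snapshot data; with LastNonEmpty, January shows 8, the stock level on the last date that has a value.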
Simple ToDo list web application with node.js, Express, and Riot

Pedro Narciso García Revington
07 Nov 2016
10 min read
The frontend space is indeed crowded, but none of the more popular solutions are really convincing to me. I feel Angular is bloated, and the double binding is not for me. I also do not like React and its syntax. Riot is, as stated by its creators, "A React-like user interface micro-library" with a simpler syntax that is five times smaller than React.

What we are going to learn

We are going to build a simple Riot application backed by Express, using Jade as our template language. The backend will expose a simple REST API, which we will consume from the UI. We are not going to use any other dependency like jQuery, so this is also a good chance to try XMLHttpRequest2. I deliberately omitted the inclusion of a client package manager like webpack or jspm because I want to focus on Express.js + Riot.js. For the same reason, the application data is persisted in memory.

Requirements

You just need any recent version of node.js (4 or newer), a text editor of your choice, and some JS, Express, and website development knowledge.

Project layout

Under my project directory we are going to have three directories:

public: for assets like the riot.js library itself
views: common in most Express setups, this is where we put the markup
client: this directory will host the Riot tags (we will see more of that later)

We will also have the package.json file, our project manifest, and an app.js file containing the Express application. Our Express server exposes a REST API; its code can be found in api.js. Here is how the layout of the final project looks:

├── api.js
├── app.js
├── client
│   ├── tag-todo.jade
│   └── tag-todo.js
├── package.json
├── node_modules
├── public
│   └── js
│       ├── client.js
│       └── riot.js
└── views
    └── index.jade

Project setup

Create your project directory and, from there, run the following to install the node.js dependencies:

$ npm init -y
$ npm install --save body-parser express jade

And create the application directories:

$ mkdir -p views public/js client

Start!

Let's start by creating the Express application file, app.js:

'use strict';
const express = require('express'),
    app = express(),
    bodyParser = require('body-parser');

// Set the views directory and template engine
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');

// Set our static directory for public assets like client scripts
app.use(express.static('public'));

// Parses the body on incoming requests
app.use(bodyParser.json());

// Pretty prints HTML output
app.locals.pretty = true;

// Define our main route, HTTP "GET /" which will print "hello"
app.get('/', function (req, res) {
    res.send('hello');
});

// Start listening for connections
app.listen(3000, function (err) {
    if (err) {
        console.error('Cannot listen at port 3000', err);
    }
    console.log('Todo app listening at port 3000');
});

The app object we just created is a plain Express application. After setting up the application, we call the listen function, which creates an HTTP server listening at port 3000.

To test our application setup, open another terminal, cd to your project directory, and run $ node app.js. Open a web browser and load http://localhost:3000; can you read "hello"?

Node.js will not reload the site if you change the files, so I recommend you install nodemon. Nodemon monitors your code and reloads the site on every change you make to the JS source code. The command $ npm install -g nodemon installs the program on your computer globally, so you can run it from any directory.
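If you prefer not to install nodemon globally, one common alternative (not part of the original article) is to save it as a development dependency with $ npm install --save-dev nodemon and add npm scripts for it. A minimal sketch of the relevant part of package.json:

{
  "scripts": {
    "start": "node app.js",
    "dev": "nodemon app.js"
  }
}

You would then run $ npm run dev instead of calling nodemon directly.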
Okay, kill our previously created server and start a new one with $ nodemon app.js.

Our first Riot tag

Riot allows you to encapsulate your UI logic in "custom tags". Tag syntax is pretty straightforward; judge for yourself:

<employee>
  <span>{ name }</span>
</employee>

Custom tags can contain code and can be nested, as shown in the next code snippet:

<employeeList>
  <employee each="{ items }" onclick={ gotoEmployee } />
  <script>
    gotoEmployee (e) {
      var item = e.item;
      // do something
    }
  </script>
</employeeList>

This mechanism enables you to build complex functionality from simple units. Of course, you can find more information in the Riot documentation. In the next steps we will create our first tag: ./client/tag-todo.jade.

Oh, we have not yet downloaded Riot! Download the non-minified Riot + compiler bundle (linked from the Riot site) to ./public/js/riot.js.

The next step is to create our index view and tell our app to serve it. Locate the / route handler, remove the res.send('hello') line, and update the handler to:

// Define our main route, HTTP "GET /"
app.get('/', function (req, res) {
    res.render('index');
});

Now, create the ./views/index.jade file:

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    todo

Go to your browser and reload the page. You can read the big "ToDo App", but nothing else. There is a <todo></todo> tag there, but since the browser does not understand it, this tag is not rendered. Let's tell Riot to mount the tag. Mount means Riot will use <todo></todo> as a placeholder for our—not yet there—todo tag:

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    script(type="riot/tag" src="/tags/todo.tag")
    todo
    script.
      riot.mount('todo');

Open your browser's dev console and reload the page. riot.mount failed because there was no todo.tag. Tags can be served in many ways, but I chose to serve them as regular Express templates; of course, you could serve them as static assets or bundled. Just below the / route handler, add the /tags/:name.tag handler:

// "/" route handler
app.get('/', function (req, res) {
    res.render('index');
});

// tag route handler
app.get('/tags/:name.tag', function (req, res) {
    var name = 'tag-' + req.params.name;
    res.render('../client/' + name);
});

Now create the tag in ./client/tag-todo.jade:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")

And reload the browser again. The errors are gone, and there is a new form in your browser. onsubmit="{ add }" is part of Riot's syntax and means "on submit, call the add function". You can mix implementation with the markup, but I prefer to split markup from code. In Jade (and any other template language), it is trivial to include other files, which is exactly what we are going to do. Update the file as:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")
  script
    include tag-todo.js

And create ./client/tag-todo.js with this snippet:

'use strict';
var self = this;
var api = self.opts;

When the tag gets mounted by Riot, it gets a context; that is the reason for var self = this;. That context can include the opts object. opts can be anything of your choice, defined at the time you mount the tag. Let's say we have an API object and we pass it to riot.mount as the second argument when we mount the tag, that is, riot.mount('todo', api). Then, at the time the tag is rendered, this.opts will point to the api object. This is the mechanism we are going to use to expose our client API to the todo tag.
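To make the opts mechanism concrete, here is a minimal sketch (the api object here is a stand-in, not the article's final client):

// somewhere in the page, after loading riot.js and the tag:
var api = {
    create: function (title, cb) { /* ... */ },
    update: function (todo, cb) { /* ... */ }
};
riot.mount('todo', api);  // inside the tag, this.opts === api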
Our form is still waiting for the add function, so edit tag-todo.js again and append the following:

self.add = function (e) {
    var title = self.todo.value;
    console.log('New ToDo', title);
};

Reload the page, type something in the text field, and hit Enter. The expected message should appear in your browser's dev console.

Implementing our REST API

We are ready to implement our REST API on the Express side. Create the ./api.js file and add:

'use strict';
const express = require('express');
var app = module.exports = express();

// simple in memory DB
var db = [];

// handle ToDo creation
app.post('/', function (req, res) {
    db.push({
        title: req.body.title,
        done: false
    });
    let todoID = db.length - 1;
    // mountpath = /api/todos/
    res.location(app.mountpath + todoID);
    res.status(201).end();
});

// handle ToDo updates
app.put('/', function (req, res) {
    db[req.body.id] = req.body;
    res.location('/' + req.body.id);
    res.status(204).end();
});

Our API supports ToDo creation and update, and it is architected as an Express sub-application. To mount it, we just need to update app.js one last time. Update the require block at the top of app.js to:

const express = require('express'),
    api = require('./api'),
    app = express(),
    bodyParser = require('body-parser');

And mount the api sub-application just before app.listen:

// Mount the api sub application
app.use('/api/todos/', api);

We said we would implement a client for our API. It should expose two functions, create and update, and it lives at ./public/js/client.js. Here is its source (with two small fixes to the original listing: the second function is named updateToDo rather than a copy-pasted createToDo, and it checks for the 204 status code that our PUT handler actually returns):

'use strict';
(function (api) {
    var url = '/api/todos/';

    function extractIDFromResponse(xhr) {
        var location = xhr.getResponseHeader('location');
        var result = +location.slice(url.length);
        return result;
    }

    api.create = function createToDo(title, callback) {
        var xhr = new XMLHttpRequest();
        var todo = { title: title, done: false };
        xhr.open('POST', url);
        xhr.setRequestHeader('Content-Type', 'application/json');
        xhr.onload = function () {
            if (xhr.status === 201) {
                todo.id = extractIDFromResponse(xhr);
            }
            return callback(null, xhr, todo);
        };
        xhr.send(JSON.stringify(todo));
    };

    api.update = function updateToDo(todo, callback) {
        var xhr = new XMLHttpRequest();
        xhr.open('PUT', url);
        xhr.setRequestHeader('Content-Type', 'application/json');
        xhr.onload = function () {
            // the server answers updates with 204 No Content
            if (xhr.status === 204) {
                console.log('updated');
            }
            return callback(null, xhr, todo);
        };
        xhr.send(JSON.stringify(todo));
    };
})(this.todoAPI = {});

Okay, time to load the API client into the UI and share it with our tag. Modify the index view to include it as a dependency:

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    script(type="riot/tag" src="/tags/todo.tag")
    script(src="/js/client.js")
    todo
    script.
      riot.mount('todo', todoAPI);

We are now loading the API client and passing it as a reference to the todo tag. Our last change today is to update the add function to consume the API. Reload the browser, type something into the textbox, and hit Enter: nothing new happens, because our add function is not yet using the API. We need to update ./client/tag-todo.js as:

'use strict';
var self = this;
var api = self.opts;
self.items = [];

self.add = function (e) {
    var title = self.todo.value;
    api.create(title, function (err, xhr, todo) {
        if (xhr.status === 201) {
            self.todo.value = '';
            self.items.push(todo);
            self.update();
        }
    });
};

We have augmented self with an array of items.
Every time we create a new ToDo task (after we get the 201 code from the server), we push the new ToDo object into the array, because we are going to print that list of items. In Riot, we can loop over the items by adding an each attribute to any tag. Last, update ./client/tag-todo.jade:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")
  ul
    li(each="{items}")
      span {title}
  script
    include tag-todo.js

Finally! Reload the page and create a ToDo!

Next steps

You can find the complete source code for this article here. The final version of the code also implements a done/undone button, which you can try to implement by yourself; one possible starting point is sketched after the author bio below.

About the author

Pedro Narciso García Revington is a Senior Full Stack Developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD, and polyglot persistence.
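As a rough, untested sketch of how the done/undone toggle could work (this is not the article's final implementation; it assumes each rendered ToDo exposes its item through Riot's event object and that the PUT handler is mounted as above):

// in ./client/tag-todo.js
self.toggle = function (e) {
    var item = e.item;        // the ToDo bound to the clicked row
    item.done = !item.done;
    api.update(item, function (err, xhr) {
        self.update();        // re-render the list
    });
};

You would then bind it in the tag markup, for example by adding button(onclick="{ toggle }") done under li(each="{items}").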
Getting Started with BeagleBone

Packt
27 Sep 2016
36 min read
In this article by Jayakarthigeyan Prabakar, author of BeagleBone By Example, we will discuss the steps to get started with your BeagleBone board and build real-time physical computing systems using it and the Python programming language. This article will teach you how to set up your BeagleBone board for the first time and write your first few Python programs on it.

(For more resources related to this topic, see here.)

By the end of this article, you will have learned the basics of interfacing electronics to BeagleBone boards and coding with Python, which will allow you to build almost anything, from a home automation system to a robot, through the examples given in this article.

First, you will learn how to set up your BeagleBone board with a new operating system, followed by the usage of some basic Linux shell commands that will help you while we work on the shell terminal to write and execute Python programs and do much more, like installing different libraries and software on your BeagleBone board. Once you are familiar with the Linux terminal, you will write your first Python program that runs on your BeagleBone board. Most of the time, we will be using freely available open source code and libraries from the Internet to write programs on top of, making them work for our requirements, instead of writing every program entirely from scratch.

The contents of this article are divided into:

Prerequisites
About the single board computer - BeagleBone board
Know your BeagleBone board
Setting up your BeagleBone board
Working on Linux Shell
Coding on Python in BeagleBone Black

Prerequisites

This topic covers the parts you need to get started with BeagleBone Black. You can buy them online or pick them up from any electronics store in your locality. The following is the list of materials needed to get started:

1x BeagleBone Black
1x miniUSB type B to type A cable
1x microSD card (4 GB or more)
1x microSD card reader
1x 5V DC, 2A power supply
1x Ethernet cable

There are different variants of BeagleBone boards, such as the BeagleBone, BeagleBone Black, BeagleBone Green, and some older variants. This article will mostly show the BeagleBone Black in its pictures. Note that a BeagleBone Black can replace any of the other BeagleBone boards, such as the BeagleBone or BeagleBone Green, for most projects. Each of these boards has its own extra features, so to speak. For example, the BeagleBone Black has more RAM—almost double the RAM available in the BeagleBone—and a built-in eMMC to store the operating system, instead of booting only from an operating system installed on a microSD card as on the BeagleBone White. Keeping in mind that this article should be able to guide people with most of the BeagleBone board variants, the tutorials in this article will be based on an operating system booted from a microSD card inserted in the BeagleBone board. We will discuss this in detail in the Setting up your BeagleBone board and Installing operating systems topics of this article.

BeagleBone Black – a single board computer

This topic gives you brief information about single board computers, to help you understand what they are and where BeagleBone boards fit within this category.

Have you ever wondered how your smartphones, smart TVs, and set-top boxes work?
All these devices run custom firmware developed for specific applications, based on different Linux and Unix kernels. When you hear the word Linux, if you are familiar with it, you probably think of an operating system, just like Windows or Mac OS X, that runs on desktops and server computers. But in recent years, the Linux kernel has been used in most embedded systems, including consumer electronics such as smartphones, smart TVs, set-top boxes, and much more. Most people know Android and iOS as the operating systems on their smartphones, but only a few know that both are based on Linux and Unix kernels.

Did you ever wonder how such devices are developed? There should be a development board, right? This is where Linux development boards like our BeagleBone boards are used. By adding peripherals such as touchscreens, GSM modules, microphones, and speakers to these single board computers, and writing the software—an operating system with a graphical user interface—that makes them interact with the physical world, we get the many smart devices that people use every day.

Nowadays your smartphone has proximity sensors, an accelerometer, a gyroscope, cameras, IR blasters, and much more. These sensors and transmitters are connected to the CPU of your phone through its input/output ports, and a small piece of software communicates with them while the whole operating system is running, reading their values in real time. Take the auto-rotation of the screen on recent smartphones: a small piece of software reads the data from the accelerometer and gyroscope sensors and, based on the orientation of the phone, rotates the graphical display.

So, all these Linux development boards are tools and base boards with which you can build awesome real-world smart devices—or, as we can call them, physical computing systems, since they interact with the physical world and respond with an output.

Getting to know your board – BeagleBone Black

BeagleBone Black can be described as a low cost single board computer that can be plugged into a computer monitor or TV via an HDMI cable for output, and uses a standard keyboard and mouse for input. It's capable of doing everything you'd expect a desktop computer to do, such as playing videos, word processing, browsing the Internet, and so on. You can even set up a web server on your BeagleBone Black, just as you would on a Linux or Windows PC.

But, unlike your desktop computer, the BeagleBone boards have the ability to interact with the physical world using the GPIO pins available on the board, and they have been used in various physical computing applications, from Internet of Things and home automation projects to robotics and tweeting shark intrusion systems. BeagleBone boards are used by hardware makers around the world to build physical computing systems that are turning into commercial products in the market—OpenROV, an underwater robot, being one good example of something built with a BeagleBone Black that turned into a successful commercial product.

Hardware specification of BeagleBone Black

A picture is worth a thousand words. The following picture describes the hardware specifications of the BeagleBone Black.
But you will get some more details about every part of the board as you read on.

If you are familiar with the basic setup of a computer, you know it has a CPU with RAM and a hard disk; to the CPU you connect your keyboard, mouse, and monitor, all powered by a power supply. The same setup exists on the BeagleBone Black. There is a 1GHz processor with 512MB of DDR3 RAM and 4GB of on-board eMMC storage, which replaces the hard disk to store the operating system. In case you want more storage, or want to boot a different operating system, you can load the operating system onto an external microSD card and insert it into the microSD card slot.

As on every computer, the board has a power button to turn it on and off, and a reset button to reset it. In addition, there is a boot button, which is used to boot the board when the operating system is loaded on the microSD card instead of the eMMC. We will learn about the usage of this button in detail in the Installing operating systems topic of this article.

There is a type A USB host port to which you can connect peripherals such as a USB keyboard, USB mouse, or USB camera, provided that Linux drivers are available for the peripherals you connect. Note that the BeagleBone Black has only one USB host port, so you need a USB hub to connect multiple USB devices at a time. I would recommend using a wireless keyboard and mouse to eliminate the extra USB hub when you connect your BeagleBone Black to a monitor through the HDMI port.

The microHDMI port available on the BeagleBone Black gives the board the ability to output to HDMI monitors and HDMI TVs, just like any computer. You can power the BeagleBone Black through the DC barrel jack on the left-hand corner of the board using a 5V DC, 2A adapter. There is an option to power the board over USB, although it is not recommended due to the current limit of USB ports. There are four LEDs on board to indicate the status of the board and to help us identify when the BeagleBone Black is booting from the microSD card. The LEDs are linked to GPIO pins on the BeagleBone Black and can be used whenever needed.

You can connect the BeagleBone Black to a LAN or the Internet using the Ethernet port and an Ethernet cable, or use a USB Wi-Fi module to give your BeagleBone Black Internet access. The expansion headers, generally called the General Purpose Input Output (GPIO) pins, include 65 digital pins, which can be used as digital inputs or outputs, to which you can connect switches, LEDs, and many more digital input/output components; 7 analog inputs, to which you can connect analog sensors such as a potentiometer or an analog temperature sensor; 4 serial ports, to which you can connect serial Bluetooth or XBee modules for wireless communication; and 2 SPI and 2 I2C ports, to connect modules such as sensors that communicate over SPI or I2C.

We also have the serial debugging port to view the low-level firmware pre-boot and post-shutdown/reboot messages via a serial monitor, using an external serial-to-USB converter, while the system is loading up and running. After the operating system boots up, this also acts as a fully interactive Linux console.
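For quick reference, the key specifications described above can be summarized as follows (taken from the description in the text, not from a datasheet):

Component     BeagleBone Black
Processor     1GHz
RAM           512MB DDR3
Storage       4GB on-board eMMC, plus a microSD card slot
Video         microHDMI
USB           1x type A host port, 1x miniUSB client port
Network       Ethernet (USB Wi-Fi optional)
I/O headers   65 digital GPIOs, 7 analog inputs, 4 serial ports, 2 SPI, 2 I2C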
Setting up your BeagleBone board

Your first hands-on step with BeagleBone boards will be to set one up and test it as suggested by the BeagleBone community, using the Debian distribution of Linux that comes preloaded on the on-board eMMC of the BeagleBone Black. This section walks you through that process, followed by installing a different operating system on your BeagleBone board, logging in to it, and then starting to work with files and executing Linux shell commands via SSH.

Connect your BeagleBone Black to your laptop or PC using the USB cable. This is the simplest method to get your BeagleBone Black up and running. Once you connect it, it starts to boot using the operating system on the eMMC storage. To log in to the operating system and start working on it, the BeagleBone Black has to be connected to a network, and the drivers provided by the BeagleBone board makers allow us to create a local network between your BeagleBone Black and your computer over the USB cable. For this, you need to download and install the device drivers on your PC, as explained in step 2.

Download and install device drivers:

Go to http://beagleboard.org/getting-started.
Click and download the driver package for your operating system. Mine is Windows (64-bit), so I am going to download that.
Once the installation is complete, click on Finish. It is shown in the following screenshot.
Once the installation is done, restart your PC.

Make sure that the Wi-Fi on your laptop is off and that no Ethernet cable is connected to your laptop, because the BeagleBone Black device drivers will now try to create a LAN connection between your laptop and the BeagleBone Black so that you can access the web server running by default on the BeagleBone Black and confirm everything is up and running. Once you reboot your PC, go to step 3.

Connect to the web server running on BeagleBone Black. Open your favorite web browser and enter the IP address 192.168.7.2 in the URL bar; this is the default static IP assigned to the BeagleBone Black. This should open up the webpage shown in the following screenshot. If you get a green tick mark with the message "your board is connected", you got the previous steps right and have successfully connected to your board. If you don't get this message, try removing the USB cable connected to the BeagleBone Black, reconnect it, and check again. If you still don't get it, check whether you did the first two steps correctly.

Play with the on-board LEDs via the web server. If you scroll down the web page you connected to, you will find the section shown in the following screenshot. This is a sample setup made by the BeagleBone makers as a first-time interaction interface, to show you what is possible with the BeagleBone Black. In this section of the webpage, you can run a small script. When you click on Run, the on-board status LEDs, which normally flash depending on the status of the operating system, stop their usual function and start working based on the script you see on the page. The script runs on a JavaScript library built by the BeagleBone makers called BoneScript. We will not look into this in detail, as we will be concentrating on writing our own Python programs to work with the GPIOs on the board.
But to help you understand, here is a simple explanation of what is in the script and what happens when you click on the Run button on the webpage. The pinMode function defines the on-board LED pins as outputs, and the digitalWrite function sets the state of an output to either HIGH or LOW. The setTimeout function restores the LEDs to their normal function after the set timeout; that is, the program stops running after the time set in the setTimeout function.

Say I modify the code to what is shown in the following screenshot. You can see that I have changed the states of two LEDs to LOW while the other two are HIGH, and the timeout is set to 10,000 milliseconds. So when you click on the Run button, the LEDs switch to these states, stay like that for 10 seconds, and then return to their normal status indication routine, that is, blinking. You can play around with different combinations of HIGH and LOW states and setTimeout values on the script given in the webpage so that you can see and understand what is happening. You can see the LED output state of the BeagleBone Black for the program we executed earlier in the following screenshot: the two LEDs in the middle are in the LOW state. They stay like this for 10 seconds when you run the script, and then restore to their usual routine. In the same way, we will write our own Python programs and set up servers that use the GPIOs available on the BeagleBone Black, to build a different application in each project in this article.

Installing operating systems

We can make the BeagleBone Black boot up and run different operating systems, just like any computer. Mostly Linux is used on these boards, which is free and open source, but it is worth noting that specific distributions of Linux, Android, and Windows CE are available for these boards as well, which you can try out. The stable versions of these operating systems are made available at http://beagleboard.org/latest-images.

By default, the BeagleBone Black comes preloaded with a Debian distribution of Linux on the on-board eMMC. However, if you want, you can flash this eMMC just like a hard drive on a computer and install a different operating system on it. As mentioned in the note at the beginning of this article, the tutorials here should be useful to people who own a BeagleBone as well as a BeagleBone Black, so we will choose the Debian package recommended by the BeagleBoard.org Foundation and boot the board every time from the operating system on the microSD card. Perform the following steps to prepare the microSD card and boot the BeagleBone from it:

Go to http://beagleboard.org/latest-images and download the latest Debian image. The following screenshot highlights the latest Debian image available for flashing on a microSD card.
Extract the image file from the downloaded RAR file. You might have to install WinRAR or another .rar extraction tool if one is not already available on your computer.
Install the Win32 Disk Imager software. To write the image file to a microSD card, we need this software.
You can go to Google or any other search engine and search for the keyword win32 disk imager to get the download link for this software, as shown in the following screenshot. At the time of writing, it can be found at http://sourceforge.net/projects/win32diskimager/, but this link changes often, which is why I suggest searching for it via a search engine. Once you download and install the software, you should see the window shown in the following screenshot when you open Win32 Disk Imager.

Now that you have the software with which you can flash the operating system image we downloaded, let's move to the next step and use Win32 Disk Imager to flash the microSD card.

Flashing the microSD card. Insert the microSD card into a microSD card reader and plug it into your computer. It might take some time for the card reader to show your microSD card. Once it shows up, you should be able to select the USB drive, as shown in the following screenshot, in the Win32 Disk Imager software. Now, click on the icon highlighted in the following screenshot to open the file explorer and select the image file that we extracted in step 3: go to the folder where you extracted the latest Debian image file and select it. You can now write the image file to the microSD card by clicking on the Write button. If you get a prompt as shown in the following screenshot, click on Yes and continue. Once you click on Yes, the flashing process will start and the image file will be written to the microSD card; the following screenshot shows the flashing process progressing. Once the flashing is completed, you will get a message as shown in the following screenshot. Now you can click on OK, exit the Win32 Disk Imager software, and safely remove the microSD card from your computer.

You have now successfully prepared your microSD card with the latest Debian operating system available for BeagleBone Black. This process is the same for all the other operating systems available for BeagleBone boards. You can try out different operating systems, such as Angstrom Linux, Android, or Windows CE, once you are familiar with your BeagleBone board by the end of this article. For Mac users, refer to either https://learn.adafruit.com/ssh-to-beaglebone-black-over-usb/installing-drivers-mac or https://learn.adafruit.com/beaglebone-black-installing-operating-systems/mac-os-x.

Booting your BeagleBone board from a SD card

Since you now have the operating system on your microSD card, let's boot your BeagleBone board from it and see how to log in and access the filesystem via the Linux shell. You will need your computer connected to your router, either via Ethernet or Wi-Fi, and an Ethernet cable connected between your router and the BeagleBone board. The last but most important thing is an external power supply, because power over USB will not be enough to run the BeagleBone board when it is booted from a microSD card.

Insert the microSD card into the BeagleBone board. Insert the microSD card that you prepared into the microSD card slot on your BeagleBone board.

Connect your BeagleBone to your LAN. Connect your BeagleBone board to your Internet router using an Ethernet cable. You need to make sure that your BeagleBone board and your computer are connected to the same router to follow the next steps.
Connect the external power supply to your BeagleBone board.

Boot your BeagleBone board from the microSD card. On the BeagleBone Black and BeagleBone Green, there is a boot button that you need to hold down while turning on the board so that it boots from the microSD card instead of the default mode, where it boots from the on-board eMMC storage holding the operating system. The BeagleBone White doesn't have this button; it boots from the microSD card directly, as it has no on-board eMMC storage. Depending on the board you have, you can decide whether to boot from the microSD card or the eMMC.

Say you have a BeagleBone Black, like the one shown in the preceding picture. Hold down the user boot button highlighted in the image and turn on the power supply. Once you turn on the board while holding the button down, the four on-board LEDs will light up and stay HIGH, as shown in the following picture, for 1 or 2 seconds, and then they will start to blink randomly. Once they start blinking, you can release the button. Your BeagleBone board has now started booting from the microSD card, so our next step is to log in to the system and start working on it. The next topic walks you through how to do this.

Logging into the board via SSH over Ethernet

If you are familiar with Linux operations, you might have guessed what this section is about. But for those who are not daily Linux users or have never heard the term, Secure Shell (SSH) is a network protocol that allows remote login and other network services to operate securely over an unsecured network. In basic terms, it's a protocol through which you can log in to a remote computer, access its filesystem, and work on that system using specific commands to create and work with files. In the steps ahead, you will work with some Linux commands that will make this method of logging in to a system and working on it clear.

Set up SSH software. To log in to your BeagleBone board from a Windows PC, you need to install an SSH terminal program for Windows. My favorite is PuTTY, so I will be using that in the steps ahead. If you are new to SSH, I suggest you get PuTTY too. The software interface of PuTTY is shown in the following screenshot.

You need to know the IP address or hostname of your BeagleBone Black to log in to it via SSH. The default hostname is beaglebone, but on some routers, depending on their security settings, logging in by hostname doesn't work. So, try to log in by entering the hostname first. If you cannot log in, follow step 2. If you successfully connect and get the login prompt with the hostname, skip step 2 and go to step 3. But if you get an error as shown in the following screenshot, perform step 2.

Find the IP address assigned to the BeagleBone board. Whenever you connect a device to your router—your computer, printer, or mobile phone—the router assigns a unique IP to it. In the same way, the router must have assigned an IP to your BeagleBone board as well. We can get this detail from the router's configuration page, accessible from any browser on a computer connected to that router. In most cases, the router can be accessed by entering the IP 192.168.1.1, though in rare cases some router manufacturers use a different IP.
If you are not able to access your router using the IP 192.168.1.1, refer to your router manual for access to this page. The images shown in this section give you an idea of how to log in to your router and find the IP address assigned to your BeagleBone board; the configuration pages, and how the devices are listed, will look different depending on the router you own.

Enter the address 192.168.1.1 in your browser. When it asks for a username and password, enter admin as the username and password as the password; these are the most commonly used default credentials on routers. Just in case this fails, check your router's user manual. Assuming you logged in to your router configuration page successfully, you will see a screen with details as shown in the following screenshot. If you click on the highlighted part, Attached Devices, you will see the list of devices with their IPs, as shown in the following screenshot, where you can find the IP address assigned to your BeagleBone board. Note down that IP; you can see that it's 192.168.1.14 for my BeagleBone board in the preceding screenshot. We will use this IP address to connect to the BeagleBone board via SSH in the next step.

Connect via SSH using the IP address. Once you click on Open, you might get a security prompt as shown in the following screenshot. Click on Yes and continue. Now you will get the login prompt on the terminal screen, as shown in the following screenshot. If you got this far successfully, it is time to log in to your BeagleBone board and start working on it via the Linux shell.

Log in to the BeagleBone board. When you get the login prompt shown in the preceding screenshot, enter the default username, which is debian, and the default password, which is temppwd. You are now logged in to the Linux shell of your BeagleBone board as the user debian, and you can start working on it using Linux shell commands, just as you would on any computer running Linux. The next section walks you through some basic Linux shell commands that will come in handy for working with any Linux system.

Working on Linux shell

Simply put, the shell is a program that takes commands from the keyboard and gives them to the operating system to perform. Since we will be using the BeagleBone board as a development board for building electronics projects, plugging it into a monitor, keyboard, and mouse every time we want to work on it like a computer would be inconvenient most of the time, and it would also consume resources unnecessarily. So we will use the shell command-line interface to work on the BeagleBone boards. If you want to learn more about the Linux command-line interface, I suggest you visit http://linuxcommand.org/.

Now let's go ahead and try some basic shell commands on your BeagleBone board. You can see the kernel version using the command uname -r: just type the command and hit Enter on your keyboard, and the command will execute and show you the output. Next, let's check the date on your BeagleBone board. Like this, the shell will execute your commands, and you can work on your BeagleBone board through it. Getting the kernel version and date was just a sample test.
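Since the original screenshots are not reproduced here, a typical session looks roughly like this (the exact kernel version and date will differ on your board):

debian@beaglebone:~$ uname -r
4.4.9-ti-r25
debian@beaglebone:~$ date
Tue Sep 27 10:15:02 UTC 2016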
Now let's move ahead and start working with the filesystem:

ls: This stands for list. This command lists and displays the names of the folders and files in the current working directory.

pwd: This stands for print working directory. This command prints the directory you are currently working in.

mkdir: This stands for make directory. This command creates a directory (in other words, a folder); you mention the name of the directory you want to create after the command. Say I want to create a folder with the name WorkSpace; I would enter the command as follows:

mkdir WorkSpace

When you execute this command, it creates a folder named WorkSpace inside the current working directory. To check that the directory was created, run the ls command again and see that the directory named WorkSpace now exists. To change the working directory and go inside the WorkSpace directory, you can use the next command.

cd: This stands for change directory. This command switches between directories depending on the path you provide with it. To go inside the WorkSpace directory you created, type the command as follows:

cd WorkSpace

Note that whenever you type a command, it executes relative to the current working directory you are in, so cd WorkSpace here is equivalent to cd /home/debian/WorkSpace, as your current working directory is /home/debian. Now you are inside the WorkSpace folder, which is empty right now; if you type the ls command, the shell just moves to the next line without printing anything, as the folder is empty. If you execute the pwd command, you will see that your current working directory has changed.

cat: The cat command is one of the most basic commands used to read, write, and append data to files in the shell. To create a text file and add some content to it, you just type the cat command as:

cat > filename.txt

Say I want to create a sample.txt file; I would type cat > sample.txt. Once you type this, the cursor waits for the text you want to type into the file you created. Type whatever text you want and, when you are done, press Ctrl + D; the file is saved and you get back to the command-line interface. Say I typed This is a test and then pressed Ctrl + D. Now if I type the ls command, I can see the text file inside the WorkSpace directory. If you want to read the contents of the sample.txt file, you can again use the cat command:

cat sample.txt

Alternatively, you can use the more command, which we will mostly be using.

Now that we have seen how to create a file, let's see how to delete what we created.

rm: This stands for remove. This lets you delete any file by typing the filename, or the filename along with its path, after the command. Say we now want to delete the sample.txt file we created; the command can be either rm sample.txt, which is equivalent to rm /home/debian/WorkSpace/sample.txt, as your current working directory is /home/debian/WorkSpace. After you execute this command, if you try to list the contents of the directory, you will notice that the file has been deleted and the directory is now empty.
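Put together, the whole sequence above looks roughly like this as a single session (prompts and output shown for illustration):

debian@beaglebone:~$ mkdir WorkSpace
debian@beaglebone:~$ cd WorkSpace
debian@beaglebone:~/WorkSpace$ pwd
/home/debian/WorkSpace
debian@beaglebone:~/WorkSpace$ cat > sample.txt
This is a test
(press Ctrl + D to save)
debian@beaglebone:~/WorkSpace$ cat sample.txt
This is a test
debian@beaglebone:~/WorkSpace$ rm sample.txt
debian@beaglebone:~/WorkSpace$ ls
debian@beaglebone:~/WorkSpace$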
Like this, you can make use of shell commands to work on your BeagleBone board via SSH over Ethernet or Wi-Fi. Now that you have a clear idea of, and hands-on experience with, using the Linux shell, let's start working with Python: in the next and last topic of this article, you will write a sample program in a text editor on Linux and test it.

Writing your own Python program on BeagleBone board

In this section, we will write our first few Python programs on your BeagleBone board—programs that take an input, make a decision based on that input, and print an output depending on the code we write. The three sample programs in this topic will help you cover some fundamentals of any programming language, including defining variables, using arithmetic operators, taking input and printing output, loops, and decision making.

Before we write and execute a Python program, let's get into Python's interactive shell interface via the Linux shell and try some basic things, like creating variables and performing math operations on them. To open the Python shell interface, just type python on the Linux shell, as you did with any Linux shell command in the previous section. Once you type python and hit Enter, you should see the terminal shown in the following screenshot.

You are now in Python's interactive shell interface, where every line you type is code that is written and executed simultaneously, step by step. To learn more about this, visit https://www.python.org/shell/, or to get started and learn the Python programming language, you can get the Python By Example book from our publication.

Let's execute a series of statements in Python's interactive shell interface to see how it works. Create a variable A and assign the value 20 to it. Now print A to check what value it holds; you can see that it prints the value we stored in it. Next, create another variable named B and store the value 30 in it. Try adding these two variables and storing the result in another variable named C; then print C, where you can see the result of A+B, that is, 50.

That is a very basic calculation performed in a programming interface: we created two variables with different values, performed an arithmetic operation adding the two values, and printed the result. Now, let's go a little further and store a string of characters in a variable and print it. Wasn't that simple?

Like this, you can play around with Python's interactive shell interface to learn coding. But any programmer would like to write a program once and execute it to get the output with a single command, right? Let's see how that can be done now. To get out of the Python interactive shell and back to the current working directory on the Linux shell, hold the Ctrl button and press D, that is, Ctrl + D on the keyboard. You will be back on the Linux shell interface, as shown next.

Now let's write a program that performs the same action we tried in Python's interactive shell: storing two values in different variables and printing the result of adding them. Let's add some spice by performing multiple arithmetic operations on the variables we create, and printing the results of both the addition and the subtraction. You will need a text editor to write programs and save them; you could do this with the cat command as well.
But in the future, when you use indentation and do more editing in your programs, the basic cat command will be difficult to work with. So let's start using the text editor available on Linux named nano, one of my favorite text editors in Linux. If you have a different choice and are familiar with another text editor on Linux, such as vi, you can go ahead and continue the next steps using it to write the programs.

To create a Python file and start writing code in it using nano, use the nano command followed by the filename with the extension .py. Let's create a file named ArithmeticOperations.py:

nano ArithmeticOperations.py

Once you type this command, the text editor opens up. Here you can type your code and save it using the keyboard command Ctrl + X. Let's write the code shown in the following screenshot and save the file using Ctrl + X, then type Y when it prompts you to save the modified file. In the next step, before saving, you could change the filename; since we just created the file, we don't need to, but in the future, if you want to save a modified copy of a file under a different name, you can change the filename at this step. For now, we will just hit Enter, which takes us back to the Linux shell, and the file ArithmeticOperations.py is created inside the current working directory, which you can verify by typing the ls command. You can also see the contents of the file by typing the more command that we learned in the previous section.

Now let's execute the Python script and see the output. To do this, you just type the command python followed by the Python program file we created, that is, ArithmeticOperations.py. Once you run it, you will see the results of the arithmetic operations as output.

Now that you have written and executed your first Python code and tested it working on your BeagleBone board, let's write a second program, which asks you to enter an input, prints whatever text you type as input on the next line, and runs continuously. Let's save this code as InputPrinter.py. In this code, we use a while loop so that the program runs continuously until you press Ctrl + D, at which point it breaks with an error and gets back to the Linux shell.

Now let's try our third and last program of this section and article, where, when we run the code, the program asks the user to type their name as input; if they type the specific name we compare against, it prints a Hello message, and if a different name is given as input, it prints go away. Let's call this code Say_Hello_To_Boss.py. Instead of my name, Jayakarthigeyan, you can replace it with your name or any string of characters, on comparing which the output decision varies. When you execute the code, the output will look as shown in the following screenshot. As with the previous program, you can press Ctrl + D to stop the program and get back to the Linux shell.

In this way, you can work with the Python programming language to create code that runs on BeagleBone boards the way you desire.
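The original screenshots of the three programs are not reproduced here, so the following are rough reconstructions based on the descriptions above; they are written for the Python 2 interpreter that the python command launches on the Debian image, and the exact variable names and messages are assumptions:

# ArithmeticOperations.py
a = 20
b = 30
print(a + b)   # addition
print(a - b)   # subtraction

# InputPrinter.py
while True:                # runs until Ctrl + D breaks it with an EOFError
    text = raw_input()     # use input() instead on Python 3
    print(text)

# Say_Hello_To_Boss.py
while True:
    name = raw_input('Enter your name: ')
    if name == 'Jayakarthigeyan':
        print('Hello ' + name)
    else:
        print('go away')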
Since we have come to the end of this article, let's give our BeagleBone board a break. Power off the board using the command sudo poweroff, which will shut down the operating system. After you execute this command, if you get the error message shown in the following screenshot, it means the BeagleBone board has powered off, and you can turn off the power supply connected to it.

Summary

So here we are at the end of this article. In this article, you have learned how to boot your BeagleBone board from a different operating system on a microSD card, log in to it, and start coding in Python to run a routine and make decisions.

On an extra note, you can take this article to another level by connecting your BeagleBone board to an HDMI monitor using a microHDMI cable, connecting a USB keyboard and mouse to the USB host port of the BeagleBone board, powering the monitor and the board with an external power supply, and booting from the microSD card. You should see a GUI and be able to use the BeagleBone board like a normal Linux computer: you will be able to access and manage files, and use the shell terminal, from the GUI as well. If you own a BeagleBone Black or BeagleBone Green, you can try flashing the on-board eMMC with the latest Debian operating system and doing the same things we did with the operating system booted from the microSD card.

Resources for Article:

Further resources on this subject:

Learning BeagleBone [article]
Learning BeagleBone Python Programming [article]
Protecting GPG Keys in BeagleBone [article]
Creating a Puzzle App

Packt
06 Sep 2013
11 min read
(For more resources related to this topic, see here.)

A quick introduction to puzzle games

Puzzle games are a genre of video games that have been around for decades. These types of games challenge players to use logic and critical thinking to complete patterns. There is a large variety of puzzle games available, and in this article, we'll start by learning how to create a 3-by-3 jigsaw puzzle titled My Jigsaw Puzzle. In My Jigsaw Puzzle, players have to complete a jigsaw puzzle using nine puzzle pieces. Each puzzle piece has an image on it, and the player has to match the puzzle piece to the puzzle board by dragging pieces from right to left. When a puzzle piece matches the correct location, the game locks in the piece. Let's take a look at the final game product.

Downloading the starter kit

Before creating our puzzle app, you can get the starter kit for the jigsaw puzzle from the code files available with this book. The starter kit includes all of the graphics that we will be using in this article.

My Jigsaw Puzzle

For the Frank's Fitness app, we used Corona's built-in new project creator to help us with setting up our project. With My Jigsaw Puzzle, we will be creating the project from scratch. Although creating a project from scratch can be more time consuming, the process will introduce you to each element that goes into Corona's new project creator. Creating the project will include creating the build.settings, config.lua, main.lua, menu.lua, and gameplay.lua files.

Before we can start creating the files for our project, we need to create a new project folder on your computer. This folder will hold all of the files that will be used in our app.

build.settings

The first file that we will create is the build.settings file. This file handles our device orientation and specifies our icons for the iPhone. Inside our build.settings file, we will create one table named settings, which holds two more tables named orientation and iphone. The orientation table tells our app to start in landscape mode and to support only landscapeLeft and landscapeRight. The iphone table specifies the icons that we want to use for our app.

To create the build.settings file, create a new file named build.settings in your project's folder and input the following code:

settings =
{
    orientation =
    {
        default = "landscapeRight",
        supported = { "landscapeLeft", "landscapeRight" },
    },
    iphone =
    {
        plist =
        {
            CFBundleIconFile = "Icon.png",
            CFBundleIconFiles =
            {
                "Icon.png",
                "Icon@2x.png",
                "Icon-72.png",
            }
        }
    }
}

config.lua

Next, we will create a file named config.lua in our project's folder. The config.lua file is used to specify any runtime properties for our app. For My Jigsaw Puzzle, we will specify the width, height, and scale method. We will use the letterbox scale method, which uniformly scales content as much as possible. When letterbox doesn't scale to the entire screen, our app will display black borders outside of the playable screen.

To create the config.lua file, create a new file named config.lua in your project's folder and input the following code:

application =
{
    content =
    {
        width = 320,
        height = 480,
        scale = "letterbox"
    }
}

main.lua

Now that we've configured our project, we will create the main.lua file—the starting point of every app. For now, we are going to keep the file simple: our main.lua file will hide the status bar while the app is active and redirect the app to the next file, menu.lua.
To create main.lua, create a new file named main.lua in your project's folder and copy the following code into the file:

display.setStatusBar( display.HiddenStatusBar )
local storyboard = require( "storyboard" )
storyboard.gotoScene("menu")

menu.lua

Our next step is to create the menu for our app. The menu will show a background, the game title, and a play button. The player can then tap on the PLAY GAME button to start playing the jigsaw puzzle. To get started, create a new file in your project's folder called menu.lua. Once the file has been created, open menu.lua in your favorite text editor.

Let's start the file by getting the widget and storyboard libraries. We'll also set up Storyboard by assigning the variable scene to storyboard.newScene():

local widget = require "widget"
local storyboard = require( "storyboard" )
local scene = storyboard.newScene()

Next, we will set up our createScene() function. The function createScene() is called when entering a scene for the first time. Inside this function, we will create objects that will be displayed on the screen. Most of the following code should look familiar by now. Here, we are creating two image display objects and one widget. Each object will also be inserted into the variable group to let our app know that these objects belong to the scene menu.lua.

function scene:createScene( event )
    local group = self.view

    background = display.newImageRect( "woodboard.png", 480, 320 )
    background.x = display.contentWidth*0.5
    background.y = display.contentHeight*0.5
    group:insert(background)

    logo = display.newImageRect( "logo.png", 400, 54 )
    logo.x = display.contentWidth/2
    logo.y = 65
    group:insert(logo)

    function onPlayBtnRelease()
        storyboard.gotoScene("gameplay")
    end

    playBtn = widget.newButton{
        default="button-play.png",
        over="button-play.png",
        width=200,
        height=83,
        onRelease = onPlayBtnRelease
    }
    playBtn.x = display.contentWidth/2
    playBtn.y = display.contentHeight/2
    group:insert(playBtn)
end

After the createScene() function, we will set up the enterScene() function. The enterScene() function is called after a scene has moved onto the screen. In My Jigsaw Puzzle, we will be using this function to remove the gameplay scene. We need to make sure we are removing the gameplay scene so that the jigsaw puzzle is reset and the player can play a new game.

function scene:enterScene( event )
    storyboard.removeScene( "gameplay" )
end

After we've created our createScene() and enterScene() functions, we need to set up our event listeners for Storyboard:

scene:addEventListener( "createScene", scene )
scene:addEventListener( "enterScene", scene )

Finally, we end our menu.lua file by adding the following line:

return scene

This line of code lets our app know that we are done with this scene. Now that we've added the last line, we have finished editing menu.lua, and we will now start setting up our jigsaw puzzle.

gameplay.lua

By now, our game has been configured and we have set up two files—main.lua and menu.lua. In our next step, we will be creating the jigsaw puzzle. The following screenshot shows the puzzle that we will be making:

Getting local libraries

To get started, create a new file called gameplay.lua and open the file in your favorite text editor. Similar to our menu.lua file, we need to start the file by pulling in the other libraries and setting up Storyboard.
local widget = require("widget")local storyboard = require( "storyboard" )local scene = storyboard.newScene() Creating variables After our local libraries, we are going to create some variables to use in gameplay. lua. When you separate the variables from the rest of your code, the process of refining your app later becomes easier. _W = display.contentWidth_H = display.contentHeightpuzzlePiecesCompleted = 0totalPuzzlePieces = 9puzzlePieceWidth = 120puzzlePieceHeight = 120puzzlePieceStartingY = { 80,220,360,500,640,780,920,1060,1200 }puzzlePieceSlideUp = 140puzzleWidth, puzzleHeight = 320, 320puzzlePieces = {}puzzlePieceCheckpoint = { {x=-243,y=76}, {x=-160, y=76}, {x=-76, y=74}, {x=-243,y=177}, {x=-143, y=157}, {x=-57, y=147}, {x=-261,y=258}, {x=-176,y=250}, {x=-74,y=248}}puzzlePieceFinalPosition = { {x=77,y=75}, {x=160, y=75}, {x=244, y=75}, {x=77,y=175}, {x=179, y=158}, {x=265, y=144}, {x=58,y=258}, {x=145,y=251}, {x=248,y=247}} Here's a breakdown of what we will be using each variable for in our app: _W and _H: These variables capture the width and height of the screen. In our app, we have already specified the size of our app to be 480 x 320. puzzlePiecesCompleted: This variable is used to track the progress of the game by tracking the number of puzzle pieces completed. totalPuzzlePieces: This variable allows us to tell our app how many puzzle pieces we are using. puzzlePieceWidth and puzzlePieceHeight: These variables specify the width and height of our puzzle piece images within the app. puzzlePieceStartingY: This table contains the starting Y location of each puzzle piece. Since we can't have all nine puzzle pieces on screen at the same time, we are displaying the first two pieces and the other seven pieces are placed off the screen below the first two. We will be going over this in detail when we add the puzzle pieces. puzzlePieceSlideUp: After a puzzle piece is added, we will slide the puzzle pieces up; this variable sets the sliding distance. puzzleWidth and puzzleHeight: These variables specify the width and height of our puzzle board. puzzlePieces: This creates a table to hold our puzzle pieces once they are added to the board. puzzlePieceCheckpoint: This table sets up the checkpoints for each puzzle piece in x and y coordinates. When a puzzle piece is dragged to the checkpoint, it will be locked into position. When we add the checkpoint logic, we will learn more about this in greater detail. puzzlePieceFinalPosition: This table sets up the final puzzle location in x and y coordinates. This table is only used once the puzzle piece passes the checkpoint. Creating display groups After we have added our variables, we are going to create two display groups to hold our display objects. Display groups are simply a collection of display objects that allow us to manipulate multiple display objects at once. In our app, we will be creating two display groups—playGameGroup and finishGameGroup. playGameGroup will contain objects that are used when the game is being played and the finishGameGroup will contain objects that are used when the puzzle is complete.Insert the following code after the variables: playGameGroup = display.newGroup()finishGameGroup = display.newGroup() The shuffle function Our next task is to create a shuffle function for My Jigsaw Puzzle. This function will randomize the puzzle pieces that are presented on the right side of the screen. 
Without the shuffle function, our puzzle pieces would always be presented in the same 1, 2, 3 order, while the shuffle function makes sure that the player has a new experience every time.

Creating the shuffle function

To create the shuffle function, we will start by creating a function named shuffle. This function accepts one argument (t) and randomizes the table for us. We're going to be using some advanced topics in this function, but before we start explaining it, let's add the following code to gameplay.lua under our display groups (a short usage sketch follows after this article's summary):

function shuffle(t)
    local n = #t
    while n > 2 do
        local k = math.random(n)
        -- swap the element at position n with the randomly chosen one
        t[n], t[k] = t[k], t[n]
        n = n - 1
    end
    return t
end

At first glance, this function may look complex; however, the code gets a lot simpler once it's explained. Here's a line-by-line breakdown of our shuffle function.

The local n = #t line introduces two new features—local and #. By using the keyword local in front of our variable name, we are saying that this variable (n) is only needed for the duration of the function or loop that we are in. By using local variables, you are getting the most out of your memory resources and practicing good programming techniques. For more information about local variables, visit www.lua.org/pil/4.2.html. In this line, we are also using the # symbol. This symbol tells us how many pieces or elements are in a table. In our app, the table contains nine elements.

Inside the while loop, the very first line is local k = math.random(n). This line assigns a random number between 1 and the value of n (which starts at 9 in our app) to the local variable k. Then, the swap t[n], t[k] = t[k], t[n] randomizes the elements of the table by exchanging the places of two pieces within it. Finally, we use n = n - 1 to work our way backwards through the elements of the table.

Summary

After reading this article, you will have a game that is ready to be played by you and your friends. We learned how to use Corona's feature set to create our first puzzle game. In My Jigsaw Puzzle, we only provided one puzzle for the player, and although it's a great puzzle, I suggest adding more puzzles to make the game appealing to more players.

Resources for Article:

Further resources on this subject:

Atmosfall – Managing Game Progress with Coroutines [Article]
Creating and configuring a basic mobile application [Article]
Defining the Application's Policy File [Article]
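As referenced in the shuffle explanation above, the excerpt never shows the function being called. As a hedged illustration only — the call site and the randomseed line are assumptions, not the book's code — it could be wired in like this:

-- Hypothetical usage, not from the original excerpt: seed the random
-- generator once so every run differs, then randomize the order in
-- which pieces appear by shuffling their starting Y positions in place.
math.randomseed( os.time() )
puzzlePieceStartingY = shuffle( puzzlePieceStartingY )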

Load balancing MSSQL

Packt
18 Apr 2014
5 min read
NetScaler is the only certified load balancer that can load balance MySQL and MSSQL services. The setup can be quite complex, and many requirements need to be in place in order to stand up a properly load-balanced SQL server. Let us go through how to set up a load-balanced Microsoft SQL Server running on 2008 R2.

It is important to remember that load balancing between the end clients and SQL Server requires that the databases on the SQL servers are synchronized. This ensures that the content the user requests is available on all the backend servers. Microsoft SQL Server supports different types of availability solutions, such as replication. You can read more about it at http://technet.microsoft.com/en-us/library/ms152565(v=sql.105).aspx. Using transactional replication is recommended, as this replicates changes to the different SQL servers as they occur.

As of now, the load balancing solution for MSSQL, also called DataStream, supports only certain versions of SQL Server. They can be viewed at http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-dbproxy-reference-protocol-con.html. Only certain authentication methods are supported as well; for MSSQL, currently only SQL authentication is supported.

The steps to set up load balancing are as follows. First, we need to add the backend SQL servers to the list of servers. Next, we need to create a custom monitor to run against the backend servers. Before we create the monitor, we can create a custom database within SQL Server that NetScaler can query.

Open Object Explorer in SQL Server Management Studio, right-click on the Databases folder, and select New Database, as shown in the following screenshot. We can name it ns, leave the rest at their default values, and click on OK. After that is done, go to the new database in Object Explorer, right-click on Tables, and click on Create New Table. Here, we need to enter a column name (for example, test) and choose nchar(10) as the data type. Then, click on Save Table; we are presented with a dialog box that gives us the option to change the table name, where we can type test again.

We have now created a database called ns with a table called test, which contains a column also called test. This is an empty database that NetScaler will query to verify connectivity to the SQL server.

Now, we can go back to NetScaler and continue with the setup. First, we need to add a DBA user. This can be done by going to System | User Administration | Database Users and clicking on Add. Here, we need to enter a username and password for a SQL user who is allowed to log in and query the database.

After that is done, we can create the monitor. Go to Traffic Management | Load Balancing | Monitors and click on Add. Choose MSSQL-ECV as the type, and then go to the Special Parameters pane. Here, we need to enter the following information:

Database: The database to query; ns in this example.
Query: A SQL query that is run against the database. In our example, we type select * from test.
User Name: The name of the DBA user we created earlier. In my case, it is sa.
Rule: An expression that defines how NetScaler verifies whether the SQL server is up. In our example, it is MSSQL.RES.ATLEAST_ROWS_COUNT(0), which means that when NetScaler runs the query against the database, it should return zero rows from that table.
Protocol Version: Here, we need to choose the version that matches the SQL Server version we are running. In my case, it is SQL Server 2012.

So, the monitor now looks like the following screenshot:

It is important that the database we created earlier is created on all the SQL servers we are going to load balance using NetScaler.

Now that we are done with the monitor, we can bind it to a service. When setting up the services against the SQL servers, remember to choose MSSQL as the protocol and 1433 as the port, and then bind the custom-made monitor to it.

After that, we need to create a virtual load-balanced server. An important point to note here is that we again choose MSSQL as the protocol and use the same port number as before, 1433.

We can use NetScaler to proxy connections between different versions of SQL Server. For example, if we have an application that runs on SQL Server 2008 while the backend servers run 2012, we can make some custom changes to the vServer and present it as a 2008 server. When creating the load-balanced vServer, go to Advanced | MSSQL | Server Options. Here, we can choose different versions, as shown in the following screenshot:

After we are done with the creation of the vServer, we can test it by opening a connection to the VIP address using SQL Server Management Studio (a small JDBC connection sketch follows at the end of this article). We can verify that the connection is load balancing properly by running the following CLI command:

stat lb vserver <name of vserver>

Summary

In this article, we walked through setting up a load-balanced Microsoft SQL Server running on 2008 R2. Remember that load balancing between the end clients and SQL Server requires the databases on the SQL servers to be synchronized, so that the content the user requests is available on all the backend servers.

Resources for Article:

Understanding Citrix Provisioning Services 7.0
Getting Started with the Citrix Access Gateway Product Family
Managing Citrix Policies
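As referenced above, here is a quick way to exercise the load-balanced vServer from code. This is a hedged sketch rather than part of the original article: it assumes the Microsoft JDBC driver (mssql-jdbc) is on the classpath, and the VIP address 10.0.0.50, the sa user, and the password are placeholders for your own values. It opens a connection through the VIP and runs the same select * from test query the monitor uses; since the table is empty by design, zero rows returned means the path works.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal connectivity check against the load-balanced VIP.
public class VipConnectionTest {
    public static void main(String[] args) throws Exception {
        // Placeholder VIP address and database name from the article.
        String url = "jdbc:sqlserver://10.0.0.50:1433;"
                   + "databaseName=ns;encrypt=false";
        // SQL authentication only -- the same restriction DataStream has.
        try (Connection con = DriverManager.getConnection(url, "sa", "password");
             Statement st = con.createStatement();
             // The same query the MSSQL-ECV monitor runs.
             ResultSet rs = st.executeQuery("select * from test")) {
            int rows = 0;
            while (rs.next()) {
                rows++;
            }
            // The table is empty by design, so zero rows means success.
            System.out.println("Connected via VIP; rows returned: " + rows);
        }
    }
}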
How to recognize Patterns with Neural Networks in Java

Kunal Chaudhari
04 Jan 2018
8 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book written by Fabio M. Soares and Alan M. F. Souza, titled Neural Network Programming with Java Second Edition. This book covers the current state-of-art in the field of neural network that helps you understand and design basic to advanced neural networks with Java.[/box] Our article explores the power of neural networks in pattern recognition by showcasing how to recognize digits from 0 to 9 in an image. For pattern recognition, the neural network architectures that can be applied are MLPs (supervised) and the Kohonen Network (unsupervised). In the first case, the problem should be set up as a classification problem, that is, the data should be transformed into the X-Y dataset, where for every data record in X there should be a corresponding class in Y. The output of the neural network for classification problems should have all of the possible classes, and this may require preprocessing of the output records. For the other case, unsupervised learning, there is no need to apply labels to the output, but the input data should be properly structured. To remind you, the schema of both neural networks are shown in the next figure: Data pre-processing We have to deal with all possible types of data, i.e., numerical (continuous and discrete) and categorical (ordinal or unscaled).  However, here we have the possibility of performing pattern recognition on multimedia content, such as images and videos. So, can multimedia could be handled? The answer to this question lies in the way these contents are stored in files. Images, for example, are written with a representation of small colored points called pixels. Each color can be coded in an RGB notation where the intensity of red, green, and blue define every color the human eye is able to see. Therefore an image of dimension 100x100 would have 10,000 pixels, each one having three values for red, green and blue, yielding a total of 30,000 points. That is the challenge for image processing in neural networks. Some methods, may reduce this huge number of dimensions. Afterwards an image can be treated as big matrix of numerical continuous values. For simplicity, we are applying only gray-scale images with small dimensions in this article. Text recognition (optical character recognition) Many documents are now being scanned and stored as images, making it necessary to convert these documents back into text, for a computer to apply edition and text processing. However, this feature involves a number of challenges: Variety of text font Text size Image noise Manuscripts In spite of that, humans can easily interpret and read even texts produced in a bad quality image. This can be explained by the fact that humans are already familiar with text characters and the words in their language. Somehow the algorithm must become acquainted with these elements (characters, digits, signalization, and so on), in order to successfully recognize texts in images. Digit recognition Although there are a variety of tools available on the market for OCR, it still remains a big challenge for an algorithm to properly recognize texts in images. So, we will be restricting our application to in a smaller domain, so that we'll face simpler problems. Therefore, in this article, we are going to implement a neural network to recognize digits from 0 to 9 represented on images. Also, the images will have standardized and small dimensions, for the sake of simplicity. 
Digit representation

We applied a standard dimension of 10x10 (100 pixels) for the gray-scale images, resulting in 100 gray-scale values per image:

In the preceding image, we have a sketch representing the digit 3 on the left and the corresponding matrix of gray values for the same digit on the right. We apply this pre-processing to all ten digits in this application.

Implementation in Java

To recognize optical characters, the data to train and test the neural network was produced by us. In this example, gray-scale values from 0 (black) to 255 (white) were considered. According to pixel disposal, two versions of each digit's data were created: one to train and another to test. Classification techniques will be used here.

Generating data

The numbers from zero to nine were drawn in Microsoft Paint. The images were converted into matrices, some examples of which are shown in the following image; all of the digit images are in gray scale:

For each digit we generated five variations: one is the perfect digit, and the others contain noise, introduced either by the drawing or by the image quality.

Each matrix row was merged into vectors (Dtrain and Dtest) to form the patterns that will be used to train and test the neural network. Therefore, the input layer of the neural network will be composed of 101 neurons (the 100 pixel values plus a bias input). The output dataset is represented by ten patterns; each one has a single expressive value (one) at the position encoding its digit, and the rest of the values are zero. Therefore, the output layer of the neural network will have ten neurons.

Neural architecture

So, in this application our neural network will have 100 pixel inputs (for images that have a 10x10 pixel size) and ten outputs, with the number of hidden neurons unrestricted. We created a class called DigitExample to handle this application. The neural network architecture was chosen with these parameters:

Neural network type: MLP
Training algorithm: Backpropagation
Number of hidden layers: 1
Number of neurons in the hidden layer: 18
Number of epochs: 1000
Minimum overall error: 0.001

Experiments

Now, as has been done in other cases presented previously, let's find the best neural network topology by training several nets.
The strategy to do that is summarized in the following table:

Experiment   Learning rate   Activation functions
#1           0.3             Hidden layer: SIGLOG; Output layer: LINEAR
#2           0.5             Hidden layer: SIGLOG; Output layer: LINEAR
#3           0.8             Hidden layer: SIGLOG; Output layer: LINEAR
#4           0.3             Hidden layer: HYPERTAN; Output layer: LINEAR
#5           0.5             Hidden layer: SIGLOG; Output layer: LINEAR
#6           0.8             Hidden layer: SIGLOG; Output layer: LINEAR
#7           0.3             Hidden layer: HYPERTAN; Output layer: SIGLOG
#8           0.5             Hidden layer: HYPERTAN; Output layer: SIGLOG
#9           0.8             Hidden layer: HYPERTAN; Output layer: SIGLOG

The following DigitExample class code defines how to create a neural network to read the digit data:

// enter neural net parameters via keyboard (omitted)
// load dataset from external file (omitted)
// data normalization (omitted)

// create ANN and define parameters to TRAIN:
Backpropagation backprop = new Backpropagation(nn, neuralDataSetToTrain,
        LearningAlgorithm.LearningMode.BATCH);
backprop.setLearningRate( typedLearningRate );
backprop.setMaxEpochs( typedEpochs );
backprop.setGeneralErrorMeasurement(Backpropagation.ErrorMeasurement.SimpleError);
backprop.setOverallErrorMeasurement(Backpropagation.ErrorMeasurement.MSE);
backprop.setMinOverallError(0.001);
backprop.setMomentumRate(0.7);
backprop.setTestingDataSet(neuralDataSetToTest);
backprop.printTraining = true;
backprop.showPlotError = true;

// train ANN:
try {
    backprop.forward();
    //neuralDataSetToTrain.printNeuralOutput();
    backprop.train();

    System.out.println("End of training");
    if (backprop.getMinOverallError() >= backprop.getOverallGeneralError()) {
        System.out.println("Training successful!");
    } else {
        System.out.println("Training was unsuccessful");
    }
    System.out.println("Overall Error: "
            + String.valueOf(backprop.getOverallGeneralError()));
    System.out.println("Min Overall Error: "
            + String.valueOf(backprop.getMinOverallError()));
    System.out.println("Epochs of training: "
            + String.valueOf(backprop.getEpoch()));
} catch (NeuralException ne) {
    ne.printStackTrace();
}

// test ANN (omitted)

Results

After running each experiment using the DigitExample class and examining the training and testing overall errors as well as the number of correct classifications on the test data (see the following table), it is possible to observe that experiments #2 and #4 have the lowest MSE values. The differences between these two experiments are the learning rate and the activation function used in the hidden layer.

Experiment   Training overall error   Testing overall error   Correct classifications
#1           9.99918E-4               0.01221                 2 of 10
#2           9.99384E-4               0.00140                 5 of 10
#3           9.85974E-4               0.00621                 4 of 10
#4           9.83387E-4               0.02491                 3 of 10
#5           9.99349E-4               0.00382                 3 of 10
#6           273.70                   319.74                  2 of 10
#7           1.32070                  6.35136                 5 of 10
#8           1.24012                  4.87290                 7 of 10
#9           1.51045                  4.35602                 3 of 10

The following figure shows the MSE evolution (train and test) by epoch for experiment #2; it is interesting to notice that the curve stabilizes near the 30th epoch. The same graphical analysis was performed for experiment #8, where the MSE curve stabilizes near the 200th epoch.

As already explained, MSE values alone should not be used to attest to a neural net's quality. Accordingly, the test dataset was used to verify the neural network's generalization capacity.
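Before looking at that comparison, it helps to see how a digit is judged as correctly classified: since the targets are one-hot patterns, we take the index of the largest output activation and match it against the target's position. The following small helper is a hedged illustration, not part of the book's DigitExample code:

// Illustrative helper: decode a network's ten-value output by taking
// the index of the largest activation. The mapping from the winning
// index to the actual digit follows the ordering used when the
// one-hot target patterns were built.
public class OutputDecoder {
    public static int winningIndex(double[] output) {
        int best = 0;
        for (int i = 1; i < output.length; i++) {
            if (output[i] > output[best]) {
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // A hypothetical noisy output vector, resembling the tables below.
        double[] estimated = {0.10, 0.10, 0.12, 0.10, 0.12,
                              0.13, 0.13, 0.26, 0.17, 0.39};
        System.out.println("Winning output index: " + winningIndex(estimated));
    }
}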
The next table shows the comparison between the real outputs (with noise) and the estimated outputs of experiments #2 and #8.

Output comparison

The real outputs in the test dataset are the one-hot patterns described earlier: for each digit from 0 to 9, a single 1.0 at the position encoding that digit and 0.0 everywhere else.

For experiment #2, the estimated outputs classified the digits 0, 3, 5, 6, and 9 correctly and misclassified 1, 2, 4, 7, and 8 (5 of 10 correct). For example, for the digit 0, experiment #2's estimated output was (0.20, 0.26, 0.09, -0.09, 0.39, 0.24, 0.35, 0.30, 0.24, 1.02): its largest activation, 1.02, falls at the same position as the target's 1.0, so the digit is classified correctly. For experiment #8, the estimated outputs classified the digits 0, 1, 2, 4, 6, 7, and 9 correctly and misclassified only 3, 5, and 8 (7 of 10 correct). It is therefore possible to conclude that the neural network weights from experiment #8 recognize seven digit patterns, outperforming experiment #2's five.

The experiments shown in this article considered 10x10 pixel images. We recommend that you try 20x20 pixel datasets to build a neural net able to classify digit images of that size. You should also vary the training parameters of the neural net to achieve better classifications.

To summarize, we applied neural network techniques to perform pattern recognition of the digits 0 to 9 in an image. The application here can be extended to any type of character instead of digits, under the condition that the neural network is presented with all of the predefined characters.

If you enjoyed this excerpt, check out the book Neural Network Programming with Java, Second Edition, to learn more about leveraging the multi-platform feature of Java to build and run your personal neural networks everywhere.

10 machine learning algorithms every engineer needs to know

Aaron Lazar
30 Nov 2017
7 min read
When it comes to machine learning, it's all about the algorithms. But although machine learning algorithms are the bread and butter of a data scientist's job, it's not always as straightforward as simply picking up an algorithm and running with it. Algorithm selection is incredibly important and often very challenging. There are always a number of things you have to take into consideration, such as:

Accuracy: While accuracy is important, it's not always necessary. In many cases an approximation is sufficient, and you shouldn't chase accuracy at the expense of processing time.
Training time: This goes hand in hand with accuracy and is not the same for all algorithms. Training time can also go up as the number of parameters grows. When time is a big constraint, you should choose an algorithm wisely.
Linearity: Algorithms that assume linearity expect the data trends to follow a linear path. While this works for some problems, for others it can result in lowered accuracy.

Once you've taken those three considerations on board, you can start to dig a little deeper. Kaggle did a survey in 2017 asking their readers which algorithms - or 'data science methods' more broadly - respondents were most likely to use at work. Below is a screenshot of the results. Kaggle's research offers a useful insight into the algorithms actually being used by data scientists and data analysts today. But we've brought together the types of machine learning algorithms that are most important. Every algorithm is useful in different circumstances - the skill is knowing which one to use and when.

10 machine learning algorithms

Linear regression

This is clearly one of the most interpretable ML algorithms. It requires minimal tuning and is easy to explain, which is the key reason for its popularity. It models the relationship between two or more variables, showing how a change in one of the independent variables impacts the dependent variable (a minimal fitting sketch follows after the decision trees section below). It is used for forecasting sales based on trends, as well as for risk assessment. Although its accuracy is relatively low, the few parameters needed and short training times make it quite popular among beginners.

Logistic regression

Logistic regression is typically viewed as a special form of linear regression, where the output variable is categorical. It's generally used to predict a binary outcome, i.e., True or False, 1 or 0, Yes or No, for a set of independent variables. As you will have already guessed, this algorithm is generally used when the dependent variable is binary. Like linear regression, logistic regression has a low level of accuracy, few parameters, and short training times, and it too is quite popular among beginners.

Decision trees

These algorithms are mainly decision support tools that use tree-like graphs or models of decisions and possible consequences, including chance-event outcomes, utilities, and so on. To put it simply, a decision tree asks the least number of yes/no questions needed to identify the probability of making the right decision as often as possible. It lets you tackle the problem at hand in a structured, systematic way to logically deduce the outcome. Decision trees are excellent when it comes to accuracy, but their training times are a bit longer compared to other algorithms. They also require a moderate number of parameters, so arriving at a good combination is not too complicated.
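As promised above, here is a minimal hedged sketch of simple linear regression: an ordinary-least-squares fit of y = a + b*x in plain Java. The data points are invented for illustration and stand in for, say, monthly sales figures:

// Minimal ordinary-least-squares fit of y = a + b*x.
// The data points are invented purely for illustration.
public class SimpleLinearRegression {
    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4, 5};           // e.g., months
        double[] y = {110, 125, 133, 146, 158}; // e.g., sales

        int n = x.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++) {
            sumX += x[i];
            sumY += y[i];
            sumXY += x[i] * y[i];
            sumXX += x[i] * x[i];
        }
        // Closed-form least-squares estimates for slope and intercept.
        double b = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double a = (sumY - b * sumX) / n;

        System.out.printf("y = %.2f + %.2f * x%n", a, b);
        System.out.printf("Forecast for x = 6: %.2f%n", a + b * 6);
    }
}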
Naive Bayes

This is a classification ML algorithm based on Bayes' probability theorem, and it is one of the most popular learning algorithms. It groups similar items together and is usually used for document classification, facial recognition software, or predicting diseases. It generally works well when you have a medium to large dataset to train your models. Naive Bayes classifiers have moderate training times and make use of linearity. While this is good, linearity might also bring down accuracy for certain problems. They also do not depend on many parameters, making it easy to arrive at a good combination, although sometimes at the cost of accuracy.

Random forest

Without a doubt, this is a popular go-to machine learning algorithm that creates a group of decision trees with random subsets of the data. It supports both classification and regression. It is simple to use, as just a few lines of code are enough to implement the algorithm. It is used by banks to predict high-risk loan applicants, or even by hospitals to predict whether a particular patient is likely to develop a chronic disease. With a high accuracy level and moderate training time, it is quite efficient to implement. Moreover, it has an average number of parameters.

K-Means

K-Means is a popular unsupervised algorithm used for cluster analysis; it is an iterative and non-deterministic method. It operates on a given dataset through a predefined number of clusters, k. The output of a K-Means algorithm will be k clusters, with the input data partitioned among these clusters. Big players like Google use K-Means to cluster pages by similarity and discover the relevance of search results. This algorithm has a moderate training time and good accuracy. It doesn't have many parameters, meaning that it's easy to arrive at the best possible combination.

K nearest neighbors

K nearest neighbors is a very popular machine learning algorithm that can be used for both regression and classification, although it's mostly used for the latter (a minimal sketch appears at the end of this article). Although it is simple, it is extremely effective. It takes little to no time to train, but its accuracy can be heavily degraded by high-dimensional data, since there is then not much difference between the nearest neighbor and the farthest one.

Support vector machines

SVMs are among the several examples of supervised ML algorithms dealing with classification. They can be used for either regression or classification, in situations where the training dataset teaches the algorithm about specific classes so that it can then classify newly included data. What sets them apart from other machine learning algorithms is that they are able to separate classes more quickly and with less overfitting than several other classification algorithms. A few of the biggest pain points that have been resolved using SVMs are display advertising, image-based gender detection, and image classification with large feature sets. SVMs are moderate in their accuracy as well as their training times, mostly because a linear approximation is assumed. On the other hand, they require an average number of parameters to get the work done.

Ensemble methods

Ensemble methods are techniques that build a set of classifiers and combine their predictions to classify new data points. Bayesian averaging was the original ensemble method, but newer approaches include error-correcting output coding and others.
Although ensemble methods allow you to devise sophisticated algorithms and produce results with a high level of accuracy, they are not preferred in industries where the interpretability of the algorithm is more important. However, given their high level of accuracy, it makes sense to use them in fields like healthcare, where even the smallest improvement can add a lot of value.

Artificial neural networks

Artificial neural networks are so named because they mimic the functioning and structure of biological neural networks. In these algorithms, information flows through the network, and the neural network changes in response to the input and output. One of the most common use cases for ANNs is speech recognition, as in voice-based services. As the information fed to them grows, these algorithms improve. However, artificial neural networks are imperfect: with great power come longer training times. They also have several more parameters compared to other algorithms. That being said, they are very flexible and customizable.

If you want to skill up in implementing machine learning algorithms, you can check out the following books from Packt:

Data Science Algorithms in a Week by Dávid Natingga
Machine Learning Algorithms by Giuseppe Bonaccorso
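Finally, as promised in the k-nearest neighbors section, here is a minimal hedged sketch of the method: classify a query point by majority vote among its k closest training points. The toy data, the two class labels, and k = 3 are all invented for illustration:

import java.util.Arrays;
import java.util.Comparator;

// Toy k-nearest-neighbors classifier: label a query point by majority
// vote among the k training points closest in Euclidean distance.
public class KnnSketch {
    public static void main(String[] args) {
        double[][] points = {{1, 1}, {1, 2}, {2, 1}, {6, 6}, {7, 6}, {6, 7}};
        int[] labels     = { 0,      0,      0,      1,      1,      1    };
        double[] query = {5.5, 6.0};
        int k = 3;

        // Sort training indices by distance to the query point.
        Integer[] idx = {0, 1, 2, 3, 4, 5};
        Arrays.sort(idx, Comparator.comparingDouble(
                i -> distance(points[i], query)));

        // Majority vote among the k nearest neighbors (labels are 0/1).
        int votesForOne = 0;
        for (int i = 0; i < k; i++) {
            votesForOne += labels[idx[i]];
        }
        int predicted = (votesForOne * 2 > k) ? 1 : 0;
        System.out.println("Predicted class: " + predicted); // expect 1
    }

    private static double distance(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return Math.sqrt(dx * dx + dy * dy);
    }
}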