How-To Tutorials - Data

1204 Articles

Postgres Add-on

Packt
27 Feb 2015
7 min read
In this article by Patrick Espake, author of the book Learning Heroku Postgres, you will learn how to install and set up PostgreSQL and how to create an app that uses Postgres. (For more resources related to this topic, see here.)

Local setup

You need to install PostgreSQL on your computer; this installation is recommended because some commands of the Postgres add-on require PostgreSQL to be installed locally. Besides that, it's a good idea for your development database to be similar to your production database; this avoids problems between the two environments. Next, you will learn how to set up PostgreSQL on Mac OS X, Windows, and Linux. In addition, you will install pgAdmin, the most popular and feature-rich administration and development platform for PostgreSQL. The versions recommended for installation are PostgreSQL 9.4.0 and pgAdmin 1.20.0, or the latest available versions.

Setting up PostgreSQL on Mac OS X

The Postgres.app application is the simplest way to get started with PostgreSQL on Mac OS X; it contains many features in a single installation package:

- PostgreSQL 9.4.0
- PostGIS 2.1.4
- Procedural languages: PL/pgSQL, PL/Perl, PL/Python, and PLV8 (JavaScript)
- Popular extensions such as hstore, uuid-ossp, and others
- Many command-line tools for managing PostgreSQL and convenient tools for GIS

The following screenshot displays the postgresapp website:

To install it, visit http://postgresapp.com/, download the appropriate package, drag it to the Applications directory, and then double-click to open it. The other alternatives for installing PostgreSQL are the default graphical installer, Fink, MacPorts, and Homebrew; all of them are available at http://www.postgresql.org/download/macosx. To install pgAdmin, visit http://www.pgadmin.org/download/macosx.php, download the latest available version, and follow the installer instructions.

Setting up PostgreSQL on Windows

PostgreSQL on Windows is provided through a graphical installer that includes the PostgreSQL server, pgAdmin, and the package manager used to download and install additional applications and drivers for PostgreSQL. To install PostgreSQL, visit http://www.postgresql.org/download/windows, click on the download link, and select the appropriate Windows version: 32-bit or 64-bit. Follow the instructions provided by the installer.

After installing PostgreSQL on Windows, you need to set the PATH environment variable so that the psql, pg_dump, and pg_restore commands work from the Command Prompt. Perform the following steps:

1. Open My Computer.
2. Right-click on My Computer and select Properties.
3. Click on Advanced System Settings.
4. Click on the Environment Variables button.
5. From the System variables box, select the Path variable.
6. Click on Edit.
7. At the end of the line, add the bin directory of PostgreSQL: C:\Program Files\PostgreSQL\9.4\bin;C:\Program Files\PostgreSQL\9.4\lib.
8. Click on the OK button to save.

The directory follows the pattern C:\Program Files\PostgreSQL\VERSION\...; check your PostgreSQL version.
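A quick way to confirm that the PATH change took effect is to open a new Command Prompt and check that the client tools respond. This is a minimal illustrative check rather than a step from the book; the version strings will match whatever you installed:

> psql --version
psql (PostgreSQL) 9.4.0
> pg_dump --version
pg_dump (PostgreSQL) 9.4.0

If the commands are not found, recheck the directories added to the Path variable in the previous steps.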
Setting up PostgreSQL on Linux

The great majority of Linux distributions already have PostgreSQL in their package manager; you can search for the appropriate package for your distribution and install it. If your distribution is Debian or Ubuntu, you can install it with the following command:

$ sudo apt-get install postgresql

If your Linux distribution is Fedora, Red Hat, CentOS, Scientific Linux, or Oracle Enterprise Linux, you can use the YUM package manager to install PostgreSQL:

$ sudo yum install postgresql94-server
$ sudo service postgresql-9.4 initdb
$ sudo chkconfig postgresql-9.4 on
$ sudo service postgresql-9.4 start

If your Linux distribution doesn't have PostgreSQL in its package manager, you can install it using the Linux installer: visit http://www.postgresql.org/download/linux, choose the appropriate installer, 32-bit or 64-bit, and follow the installation instructions.

You can install pgAdmin through the package manager of your Linux distribution. For Debian or Ubuntu, you can use the following command:

$ sudo apt-get install pgadmin3

For Linux distributions that use the YUM package manager, you can install it through the following command:

$ sudo yum install pgadmin3

If your Linux distribution doesn't have pgAdmin in its package manager, you can download and install it following the instructions provided at http://www.pgadmin.org/download/.

Creating a local database

For the examples in this article, you will need a local database. You will create a new database called my_local_database through pgAdmin. To create the new database, perform the following steps:

1. Open pgAdmin.
2. Connect to the database server using the access credentials that you chose during installation.
3. Click on the Databases item in the tree view.
4. Navigate to Edit -> New Object -> New database.
5. Type the name my_local_database for the database.
6. Click on the OK button to save.

Creating a new local database called my_local_database

Creating a new app

Many features in Heroku can be used in two different ways: the first is via the Heroku client, which is installed through the Heroku Toolbelt, and the other is through the web Heroku dashboard. In this section, you will learn how to use both of them.

Via the Heroku dashboard

Access the website https://dashboard.heroku.com and log in. After that, click on the plus sign at the top of the dashboard to create a new app, and the following screen will be shown:

Creating an app

In this step, you should provide the name of your application; in the preceding example, it's learning-heroku-postgres-app, but you can choose any name you prefer. Then select which region you want to host it in; two options are available: United States or Europe. Heroku doesn't allow duplicated application names; each application name is global and, after it has been used once, it is not available to anyone else. If you happen to choose a name that is already in use, you should pick another one. Choose the best option for you; it is usually recommended to select the region that is closest to you, to decrease server response time. Click on the Create App button.

Heroku will then provide some information to perform the first deploy of your application. The website URL and Git repository are created using the following addresses: http://your-app-name.herokuapp.com and git@heroku.com:your-app-name.git.

learning-heroku-postgres-app created
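As a hedged aside (not a step shown in this excerpt), the Heroku Postgres add-on mentioned in the summary can be attached to the newly created app from the Heroku client; the plan name below is an assumption, since plan names change over time:

$ heroku addons:create heroku-postgresql:hobby-dev --app your-app-name
$ heroku config:get DATABASE_URL --app your-app-name

The second command prints the connection URL of the database that Heroku provisions automatically when the add-on is attached.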
Next, you will create a directory on your computer and link it with Heroku to perform future deployments of your source code. Open your terminal and type the following commands:

$ mkdir your-app-name
$ cd your-app-name
$ git init
$ heroku git:remote -a your-app-name
Git remote heroku added

Finally, you are able to deploy your source code at any time through these commands:

$ git add .
$ git commit -am "My updates"
$ git push heroku master

Via the Heroku client

Creating a new application via the Heroku client is very simple. The first step is to create the application directory on your computer. For that, open the terminal and type the following commands:

$ mkdir your-app-name
$ cd your-app-name
$ git init

After that, you need to create a new Heroku application with the following command:

$ heroku apps:create your-app-name
Creating your-app-name... done, stack is cedar-14
https://your-app-name.herokuapp.com/ | https://git.heroku.com/your-app-name.git
Git remote heroku added

Finally, you are able to deploy your source code at any time through these commands:

$ git add .
$ git commit -am "My updates"
$ git push heroku master

Another very common case is when you already have a Git repository on your computer with the application's source code and you want to deploy it on Heroku. In this case, you must run the heroku apps:create your-app-name command inside the application directory, and the link with Heroku will be created.

Summary

In this article, you learned how to configure your local environment to work with PostgreSQL and pgAdmin, and how to install the Heroku Postgres add-on in your application. You also saw that the first database is created automatically when the Heroku Postgres add-on is installed, that additional PostgreSQL databases can be provisioned as well, and that the great majority of tasks can be performed in two ways: via the Heroku client and via the Heroku dashboard.

Resources for Article:

Further resources on this subject:
- Building Mobile Apps [article]
- Managing Heroku from the Command Line [article]
- Securing the WAL Stream [article]


Getting Up and Running with Cassandra

Packt
27 Feb 2015
20 min read
As an application developer, you have almost certainly worked with databases extensively. You must have built products using relational databases like MySQL and PostgreSQL, and perhaps experimented with a document store like MongoDB or a key-value database like Redis. While each of these tools has its strengths, you will now consider whether a distributed database like Cassandra might be the best choice for the task at hand.

In this article by Mat Brown, author of the book Learning Apache Cassandra, we'll talk about the major reasons to choose Cassandra from among the many database options available to you. Having established that Cassandra is a great choice, we'll go through the nuts and bolts of getting a local Cassandra installation up and running. By the end of this article, you'll know:

- When and why Cassandra is a good choice for your application
- How to install Cassandra on your development machine
- How to interact with Cassandra using cqlsh
- How to create a keyspace

(For more resources related to this topic, see here.)

What Cassandra offers, and what it doesn't

Cassandra is a fully distributed, masterless database, offering superior scalability and fault tolerance to traditional single-master databases. Compared with other popular distributed databases like Riak, HBase, and Voldemort, Cassandra offers a uniquely robust and expressive interface for modeling and querying data. What follows is an overview of several desirable database capabilities, with accompanying discussions of what Cassandra has to offer in each category.

Horizontal scalability

Horizontal scalability refers to the ability to expand the storage and processing capacity of a database by adding more servers to a database cluster. A traditional single-master database's storage capacity is limited by the capacity of the server that hosts the master instance. If the data set outgrows this capacity, and a more powerful server isn't available, the data set must be sharded among multiple independent database instances that know nothing of each other. Your application bears responsibility for knowing to which instance a given piece of data belongs.

Cassandra, on the other hand, is deployed as a cluster of instances that are all aware of each other. From the client application's standpoint, the cluster is a single entity; the application need not know, nor care, which machine a piece of data belongs to. Instead, data can be read or written to any instance in the cluster, referred to as a node; this node will forward the request to the instance where the data actually belongs. The result is that Cassandra deployments have an almost limitless capacity to store and process data; when additional capacity is required, more machines can simply be added to the cluster. When new machines join the cluster, Cassandra takes care of rebalancing the existing data so that each node in the expanded cluster has a roughly equal share.

Cassandra is one of several popular distributed databases inspired by the Dynamo architecture, originally published in a paper by Amazon. Other widely used implementations of Dynamo include Riak and Voldemort. You can read the original paper at http://s3.amazonaws.com/AllThingsDistributed/sosp/amazon-dynamo-sosp2007.pdf.
High availability

The simplest database deployments are run as a single instance on a single server. This sort of configuration is highly vulnerable to interruption: if the server is affected by a hardware failure or network connection outage, the application's ability to read and write data is completely lost until the server is restored. If the failure is catastrophic, the data on that server might be lost completely.

A master-follower architecture improves this picture a bit. The master instance receives all write operations, and then these operations are replicated to follower instances. The application can read data from the master or any of the follower instances, so a single host becoming unavailable will not prevent the application from continuing to read data. A failure of the master, however, will still prevent the application from performing any write operations, so while this configuration provides high read availability, it doesn't completely provide high availability.

Cassandra, on the other hand, has no single point of failure for reading or writing data. Each piece of data is replicated to multiple nodes, but none of these nodes holds the authoritative master copy. If a machine becomes unavailable, Cassandra will continue writing data to the other nodes that share data with that machine, and will queue the operations and update the failed node when it rejoins the cluster. This means in a typical configuration, two nodes must fail simultaneously for there to be any application-visible interruption in Cassandra's availability.

How many copies? When you create a keyspace - Cassandra's version of a database - you specify how many copies of each piece of data should be stored; this is called the replication factor. A replication factor of 3 is a common and good choice for many use cases.

Write optimization

Traditional relational and document databases are optimized for read performance. Writing data to a relational database will typically involve making in-place updates to complicated data structures on disk, in order to maintain a data structure that can be read efficiently and flexibly. Updating these data structures is a very expensive operation from a standpoint of disk I/O, which is often the limiting factor for database performance. Since writes are more expensive than reads, you'll typically avoid any unnecessary updates to a relational database, even at the expense of extra read operations.

Cassandra, on the other hand, is highly optimized for write throughput, and in fact never modifies data on disk; it only appends to existing files or creates new ones. This is much easier on disk I/O and means that Cassandra can provide astonishingly high write throughput. Since both writing data to Cassandra and storing data in Cassandra are inexpensive, denormalization carries little cost and is a good way to ensure that data can be efficiently read in various access scenarios.

Because Cassandra is optimized for write volume, you shouldn't shy away from writing data to the database. In fact, it's most efficient to write without reading whenever possible, even if doing so might result in redundant updates. Just because Cassandra is optimized for writes doesn't make it bad at reads; in fact, a well-designed Cassandra database can handle very heavy read loads with no problem.

Structured records

The first three database features we looked at are commonly found in distributed data stores. However, databases like Riak and Voldemort are purely key-value stores; these databases have no knowledge of the internal structure of a record that's stored at a particular key.
This means useful functions like updating only part of a record, reading only certain fields from a record, or retrieving records that contain a particular value in a given field are not possible. Relational databases like PostgreSQL, document stores like MongoDB, and, to a limited extent, newer key-value stores like Redis do have a concept of the internal structure of their records, and most application developers are accustomed to taking advantage of the possibilities this allows. None of these databases, however, offer the advantages of a masterless distributed architecture.

In Cassandra, records are structured much in the same way as they are in a relational database—using tables, rows, and columns. Thus, applications using Cassandra can enjoy all the benefits of masterless distributed storage while also getting all the advanced data modeling and access features associated with structured records.

Secondary indexes

A secondary index, commonly referred to as an index in the context of a relational database, is a structure allowing efficient lookup of records by some attribute other than their primary key. This is a widely useful capability: for instance, when developing a blog application, you would want to be able to easily retrieve all of the posts written by a particular author. Cassandra supports secondary indexes; while Cassandra's version is not as versatile as indexes in a typical relational database, it's a powerful feature in the right circumstances.

Efficient result ordering

It's quite common to want to retrieve a record set ordered by a particular field; for instance, a photo sharing service will want to retrieve the most recent photographs in descending order of creation. Since sorting data on the fly is a fundamentally expensive operation, databases must keep information about record ordering persisted on disk in order to efficiently return results in order. In a relational database, this is one of the jobs of a secondary index.

In Cassandra, secondary indexes can't be used for result ordering, but tables can be structured such that rows are always kept sorted by a given column or columns, called clustering columns. Sorting by arbitrary columns at read time is not possible, but the capacity to efficiently order records in any way, and to retrieve ranges of records based on this ordering, is an unusually powerful capability for a distributed database.

Immediate consistency

When we write a piece of data to a database, it is our hope that that data is immediately available to any other process that may wish to read it. From another point of view, when we read some data from a database, we would like to be guaranteed that the data we retrieve is the most recently updated version. This guarantee is called immediate consistency, and it's a property of most common single-master databases like MySQL and PostgreSQL.

Distributed systems like Cassandra typically do not provide an immediate consistency guarantee. Instead, developers must be willing to accept eventual consistency, which means when data is updated, the system will reflect that update at some point in the future. Developers are willing to give up immediate consistency precisely because it is a direct tradeoff with high availability. In the case of Cassandra, that tradeoff is made explicit through tunable consistency. Each time you design a write or read path for data, you have the option of immediate consistency with less resilient availability, or eventual consistency with extremely resilient availability.
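To make the last few features concrete, here is a small illustrative sketch in CQL and cqlsh; the table, columns, and values are hypothetical and not taken from the book:

CREATE TABLE posts (
  author text,
  created_at timeuuid,
  category text,
  body text,
  PRIMARY KEY (author, created_at)
) WITH CLUSTERING ORDER BY (created_at DESC);

-- Secondary index: look up posts by a non-key column
CREATE INDEX ON posts (category);

-- Rows within a partition come back ordered by the clustering column created_at
SELECT * FROM posts WHERE author = 'alice';

-- cqlsh only: choose the consistency level used for subsequent reads and writes
CONSISTENCY QUORUM;

The clustering column keeps each author's posts stored in descending creation order, and the CONSISTENCY command illustrates the per-query tunability described above (the drivers expose the same setting programmatically).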
Discretely writable collections

While it's useful for records to be internally structured into discrete fields, a given property of a record isn't always a single value like a string or an integer. One simple way to handle fields that contain collections of values is to serialize them using a format like JSON, and then save the serialized collection into a text field. However, in order to update collections stored in this way, the serialized data must be read from the database, decoded, modified, and then written back to the database in its entirety. If two clients try to perform this kind of modification to the same record concurrently, one of the updates will be overwritten by the other.

For this reason, many databases offer built-in collection structures that can be discretely updated: values can be added to, and removed from, collections without reading and rewriting the entire collection. Cassandra is no exception, offering list, set, and map collections, and supporting operations like "append the number 3 to the end of this list". Neither the client nor Cassandra itself needs to read the current state of the collection in order to update it, meaning collection updates are also blazingly efficient (a short CQL sketch of such an update appears at the end of this section).

Relational joins

In real-world applications, different pieces of data relate to each other in a variety of ways. Relational databases allow us to perform queries that make these relationships explicit, for instance, to retrieve a set of events whose location is in the state of New York (this is assuming events and locations are different record types). Cassandra, however, is not a relational database, and does not support anything like joins. Instead, applications using Cassandra typically denormalize data and make clever use of clustering in order to perform the sorts of data access that would use a join in a relational database.

For data sets that aren't already denormalized, applications can also perform client-side joins, which mimic the behavior of a relational database by performing multiple queries and joining the results at the application level. Client-side joins are less efficient than reading data that has been denormalized in advance, but offer more flexibility.

MapReduce

MapReduce is a technique for performing aggregate processing on large amounts of data in parallel; it's a particularly common technique in data analytics applications. Cassandra does not offer built-in MapReduce capabilities, but it can be integrated with Hadoop in order to perform MapReduce operations across Cassandra data sets, or with Spark for real-time data analysis. The DataStax Enterprise product provides integration with both of these tools out of the box.
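Here is the promised sketch of a discretely writable collection in CQL; the table, column names, and values are hypothetical:

CREATE TABLE playlists (
  id uuid PRIMARY KEY,
  track_ids list<int>
);

-- Append the number 3 to the end of the list without reading its current contents
UPDATE playlists SET track_ids = track_ids + [3] WHERE id = 62c36092-82a1-3a00-93d1-46196ee77204;

Neither the client nor the server has to read the existing list first; the append is applied directly, which is what makes these updates cheap.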
Comparing Cassandra to the alternatives

Now that you've got an in-depth understanding of the feature set that Cassandra offers, it's time to figure out which features are most important to you, and which database is the best fit. The following table lists a handful of commonly used databases, and key features that they do or don't have:

Feature                         | Cassandra                   | PostgreSQL | MongoDB | Redis   | Riak
Structured records              | Yes                         | Yes        | Yes     | Limited | No
Secondary indexes               | Yes                         | Yes        | Yes     | No      | Yes
Discretely writable collections | Yes                         | Yes        | Yes     | Yes     | No
Relational joins                | No                          | Yes        | No      | No      | No
Built-in MapReduce              | No                          | No         | Yes     | No      | Yes
Fast result ordering            | Yes                         | Yes        | Yes     | Yes     | No
Immediate consistency           | Configurable at query level | Yes        | Yes     | Yes     | Configurable at cluster level
Transparent sharding            | Yes                         | No         | Yes     | No      | Yes
No single point of failure      | Yes                         | No         | No      | No      | Yes
High throughput writes          | Yes                         | No         | No      | Yes     | Yes

As you can see, Cassandra offers a unique combination of scalability, availability, and a rich set of features for modeling and accessing data.

Installing Cassandra

Now that you're acquainted with Cassandra's substantial powers, you're no doubt chomping at the bit to try it out. Happily, Cassandra is free, open source, and very easy to get running.

Installing on Mac OS X

First, we need to make sure that we have an up-to-date installation of the Java Runtime Environment. Open the Terminal application, and type the following into the command prompt:

$ java -version

You will see an output that looks something like the following:

java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)

Pay particular attention to the java version listed: if it's lower than 1.7.0_25, you'll need to install a new version. If you have an older version of Java or if Java isn't installed at all, head to https://www.java.com/en/download/mac_download.jsp and follow the download instructions on the page.

You'll need to set up your environment so that Cassandra knows where to find the latest version of Java. To do this, set up your JAVA_HOME environment variable to the install location, and your PATH to include the executable in your new Java installation as follows:

$ export JAVA_HOME="/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home"
$ export PATH="$JAVA_HOME/bin":$PATH

You should put these two lines at the bottom of your .bashrc file to ensure that things still work when you open a new terminal. The installation instructions given earlier assume that you're using the latest version of Mac OS X (at the time of writing this, 10.10 Yosemite). If you're running a different version of OS X, installing Java might require different steps. Check out https://www.java.com/en/download/faq/java_mac.xml for detailed installation information.

Once you've got the right version of Java, you're ready to install Cassandra. It's very easy to install Cassandra using Homebrew; simply type the following:

$ brew install cassandra
$ pip install cassandra-driver cql
$ cassandra

Here's what we just did:

- Installed Cassandra using the Homebrew package manager
- Installed the CQL shell and its dependency, the Python Cassandra driver
- Started the Cassandra server

Installing on Ubuntu

First, we need to make sure that we have an up-to-date installation of the Java Runtime Environment. Open the Terminal application, and type the following into the command prompt:

$ java -version

You will see an output that looks similar to the following:

java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-0ubuntu0.14.04.1)
OpenJDK 64-bit Server VM (build 24.65-b04, mixed mode)

Pay particular attention to the java version listed: it should start with 1.7.
If you have an older version of Java, or if Java isn't installed at all, you can install the correct version using the following command:

$ sudo apt-get install openjdk-7-jre-headless

Once you've got the right version of Java, you're ready to install Cassandra. First, you need to add Apache's Debian repositories to your sources list. Add the following lines to the file /etc/apt/sources.list:

deb http://www.apache.org/dist/cassandra/debian 21x main
deb-src http://www.apache.org/dist/cassandra/debian 21x main

In the Terminal application, type the following into the command prompt:

$ gpg --keyserver pgp.mit.edu --recv-keys F758CE318D77295D
$ gpg --export --armor F758CE318D77295D | sudo apt-key add -
$ gpg --keyserver pgp.mit.edu --recv-keys 2B5C1B00
$ gpg --export --armor 2B5C1B00 | sudo apt-key add -
$ gpg --keyserver pgp.mit.edu --recv-keys 0353B12C
$ gpg --export --armor 0353B12C | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install cassandra
$ cassandra

Here's what we just did:

- Added the Apache repositories for Cassandra 2.1 to our sources list
- Added the public keys for the Apache repo to our system and updated our repository cache
- Installed Cassandra
- Started the Cassandra server

Installing on Windows

The easiest way to install Cassandra on Windows is to use the DataStax Community Edition. DataStax is a company that provides enterprise-level support for Cassandra; they also release Cassandra packages at both free and paid tiers. DataStax Community Edition is free, and does not differ from the Apache package in any meaningful way.

DataStax offers a graphical installer for Cassandra on Windows, which is available for download at planetcassandra.org/cassandra. On this page, locate Windows Server 2008/Windows 7 or Later (32-Bit) from the Operating System menu (you might also want to look for 64-bit if you run a 64-bit version of Windows), and choose MSI Installer (2.x) from the version columns. Download and run the MSI file, and follow the instructions, accepting the defaults. Once the installer completes this task, you should have an installation of Cassandra running on your machine.

Bootstrapping the project

We will build an application called MyStatus, which allows users to post status updates for their friends to read.

CQL – the Cassandra Query Language

Since this article is about Cassandra and not targeted to users of any particular programming language or application framework, we will focus entirely on the database interactions that MyStatus will require. Code examples will be in the Cassandra Query Language (CQL). Specifically, we'll use version 3.1.1 of CQL, which is available in Cassandra 2.0.6 and later versions.

As the name implies, CQL is heavily inspired by SQL; in fact, many CQL statements are equally valid SQL statements. However, CQL and SQL are not interchangeable. CQL lacks a grammar for relational features such as JOIN statements, which are not possible in Cassandra. Conversely, CQL is not a subset of SQL; constructs for retrieving the update time of a given column, or performing an update in a lightweight transaction, which are available in CQL, do not have an SQL equivalent.

Interacting with Cassandra

Most common programming languages have drivers for interacting with Cassandra. When selecting a driver, you should look for libraries that support the CQL binary protocol, which is the latest and most efficient way to communicate with Cassandra. The CQL binary protocol is a relatively new introduction; older versions of Cassandra used the Thrift protocol as a transport layer.
Although Cassandra continues to support Thrift, avoid Thrift-based drivers, as they are less performant than the binary protocol. Here are CQL binary drivers available for some popular programming languages:

Language             | Driver                 | Available at
Java                 | DataStax Java Driver   | github.com/datastax/java-driver
Python               | DataStax Python Driver | github.com/datastax/python-driver
Ruby                 | DataStax Ruby Driver   | github.com/datastax/ruby-driver
C++                  | DataStax C++ Driver    | github.com/datastax/cpp-driver
C#                   | DataStax C# Driver     | github.com/datastax/csharp-driver
JavaScript (Node.js) | node-cassandra-cql     | github.com/jorgebay/node-cassandra-cql
PHP                  | phpbinarycql           | github.com/rmcfrazier/phpbinarycql

While you will likely use one of these drivers in your applications, here we will simply use the cqlsh tool, a command-line interface for executing CQL queries and viewing the results. To start cqlsh on OS X or Linux, simply type cqlsh into your command line; you should see something like this:

$ cqlsh
Connected to Test Cluster at localhost:9160.
[cqlsh 4.1.1 | Cassandra 2.0.7 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh>

On Windows, you can start cqlsh by finding the Cassandra CQL Shell application in the DataStax Community Edition group in your applications. Once you open it, you should see the same output we just saw.

Creating a keyspace

A keyspace is a collection of related tables, equivalent to a database in a relational system. To create the keyspace for our MyStatus application, issue the following statement in the CQL shell:

CREATE KEYSPACE "my_status"
WITH REPLICATION = {
  'class': 'SimpleStrategy', 'replication_factor': 1
};

Here we created a keyspace called my_status. When we create a keyspace, we have to specify replication options. Cassandra provides several strategies for managing replication of data; SimpleStrategy is the best strategy as long as your Cassandra deployment does not span multiple data centers. The replication_factor value tells Cassandra how many copies of each piece of data are to be kept in the cluster; since we are only running a single instance of Cassandra, there is no point in keeping more than one copy of the data. In a production deployment, you would certainly want a higher replication factor; 3 is a good place to start.

A few things at this point are worth noting about CQL's syntax:

- It's syntactically very similar to SQL; as we further explore CQL, the impression of similarity will not diminish.
- Double quotes are used for identifiers such as keyspace, table, and column names. As in SQL, quoting identifier names is usually optional, unless the identifier is a keyword or contains a space or another character that will trip up the parser.
- Single quotes are used for string literals; the key-value structure we use for replication is a map literal, which is syntactically similar to an object literal in JSON.
- As in SQL, CQL statements in the CQL shell must terminate with a semicolon.

Selecting a keyspace

Once you've created a keyspace, you'll want to use it. In order to do this, employ the USE command:

USE "my_status";

This tells Cassandra that all future commands will implicitly refer to tables inside the my_status keyspace. If you close the CQL shell and reopen it, you'll need to reissue this command.
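As a quick sanity check, you can list the keyspaces the cluster knows about and inspect the definition of the one you just created; these DESCRIBE commands are a cqlsh convenience rather than part of CQL itself:

DESCRIBE KEYSPACES;
DESCRIBE KEYSPACE my_status;

The second command prints the full keyspace definition, including the replication settings chosen earlier.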
Summary

In this article, you explored the reasons to choose Cassandra from among the many databases available and, having determined that Cassandra is a great choice, you installed it on your development machine. You had your first taste of the Cassandra Query Language when you issued your first command via the CQL shell in order to create a keyspace. You're now poised to begin working with Cassandra in earnest.

Resources for Article:

Further resources on this subject:
- Getting Started with Apache Cassandra [article]
- Basic Concepts and Architecture of Cassandra [article]
- About Cassandra [article]


Agile data modeling with Neo4j

Packt
25 Feb 2015
18 min read
In this article by Sumit Gupta, author of the book Neo4j Essentials, we will discuss data modeling in Neo4j, which is evolving and flexible enough to adapt to changing business requirements. It captures new data sources, entities, and their relationships as they naturally occur, allowing the database to easily adapt to changes, which in turn results in extremely agile development and quick responsiveness to changing business requirements. (For more resources related to this topic, see here.)

Data modeling is a multistep process that involves the following steps:

1. Define requirements or goals in the form of questions that need to be executed and answered by the domain-specific model.
2. Once we have our goals ready, we can dig deep into the data and identify entities and their associations/relationships.
3. Now, as we have our graph structure ready, the next step is to form patterns from our initial questions/goals and execute them against the graph model.

This whole process is applied in an iterative and incremental manner, similar to what we do in agile, and has to be repeated whenever we change our goals or add new goals/questions that need to be answered by the graph model.

Let's see in detail how data is organized/structured and implemented in Neo4j to bring in the agility of graph models. Based on the principles of the graph data structure available at http://en.wikipedia.org/wiki/Graph_(abstract_data_type), Neo4j implements the property graph data model at the storage level, which is efficient, flexible, adaptive, and capable of effectively storing/representing any kind of data in the form of nodes, properties, and relationships. Neo4j not only implements the property graph model, but has also evolved the traditional model by adding the feature of tagging nodes with labels, which is now referred to as the labeled property graph.

Essentially, in Neo4j, everything needs to be defined in one of the following forms:

- Nodes: A node is the fundamental unit of a graph, which can also be viewed as an entity, but based on a domain model, it can be something else too.
- Relationships: These define the connection between two nodes. Relationships also have types, which further differentiate relationships from one another.
- Properties: Properties are viewed as attributes and do not have their own existence. They are related either to nodes or to relationships. Nodes and relationships can both have their own set of properties.
- Labels: Labels are used to group nodes into sets. Nodes that are labeled with the same label belong to the same set, which can be further used to create indexes for faster retrieval, mark temporary states of certain nodes, and more, based on the domain model.

Let's see how all four of these forms are related to each other and represented within Neo4j. A graph essentially consists of nodes, which can also have properties. Nodes are linked to other nodes; the link between two nodes is known as a relationship, which can also have properties. Nodes can also have labels, which are used for grouping the nodes.

Let's take up a use case to understand data modeling in Neo4j. John is a male and his age is 24. He is married to a female named Mary whose age is 20. John and Mary got married in 2012. Now, let's develop the data model for the preceding use case in Neo4j:

- John and Mary are two different nodes.
- Marriage is the relationship between John and Mary.
- Age of John, age of Mary, and the year of their marriage become the properties.
- Male and Female become the labels.

Easy, simple, flexible, and natural… isn't it? (A minimal Cypher sketch of this model appears just before the movie dataset below.) The data structure in Neo4j is adaptive and can effectively model everything that is not fixed and evolves over a period of time.

The next step in data modeling is fetching the data from the data model, which is done through traversals. Traversals are another important aspect of graphs, where you need to follow paths within the graph starting from a given node and then following its relationships with other nodes. Neo4j provides two kinds of traversals: breadth first (see http://en.wikipedia.org/wiki/Breadth-first_search) and depth first (see http://en.wikipedia.org/wiki/Depth-first_search).

If you are from the RDBMS world, then you must now be wondering, "What about the schema?" and you will be surprised to know that Neo4j is a schemaless or schema-optional graph database. We do not have to define the schema unless we are at a stage where we want to provide some structure to our data for performance gains. Once performance becomes a focus area, you can define a schema and create indexes/constraints/rules over data. Unlike traditional models where we freeze requirements and then draw our models, Neo4j embraces data modeling in an agile way, so that the model can evolve over a period of time and remain highly responsive to dynamic and changing business requirements.

Read-only Cypher queries

In this section, we will discuss one of the most important aspects of Neo4j, that is, read-only Cypher queries. Read-only Cypher queries are not only the core component of Cypher but also help us in exploring and leveraging various patterns and pattern matching constructs. A read-only query either begins with MATCH, OPTIONAL MATCH, or START, which can be used in conjunction with the WHERE clause, optionally followed by WITH, and ends with RETURN. Constructs such as ORDER BY, SKIP, and LIMIT can also be used with WITH and RETURN. We will discuss the read-only constructs in detail, but before that, let's create a sample dataset and then discuss the constructs/syntax of read-only Cypher queries with illustrations.

Creating a sample dataset – movie dataset

Let's perform the following steps to clean up our Neo4j database and insert some data, which will help us in exploring various constructs of Cypher queries.

Open your Command Prompt or Linux shell and open the Neo4j shell by typing <$NEO4J_HOME>/bin/neo4j-shell. Execute the following commands on your Neo4j shell to clean all the previous data:

//Delete all relationships between Nodes
MATCH ()-[r]-() delete r;
//Delete all Nodes
MATCH (n) delete n;

Now we will create a sample dataset, which will contain movies, artists, directors, and their associations.
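Before building the movie dataset, here is the minimal Cypher sketch of the John and Mary model promised earlier. It is purely illustrative: the property keys (name, age, year) and the relationship type MARRIED_TO are assumptions for this sketch, not taken from the book. If you run it, re-execute the two delete statements above before loading the movie dataset so that the examples below match:

CREATE (john:Male {name : 'John', age : 24})-[:MARRIED_TO {year : 2012}]->(mary:Female {name : 'Mary', age : 20});

This single statement creates the two labeled nodes, their properties, and the relationship (with its own property) in one go.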
Execute the following set of Cypher queries in your Neo4j shell to create the list of movies:

CREATE (:Movie {Title : 'Rocky', Year : '1976'});
CREATE (:Movie {Title : 'Rocky II', Year : '1979'});
CREATE (:Movie {Title : 'Rocky III', Year : '1982'});
CREATE (:Movie {Title : 'Rocky IV', Year : '1985'});
CREATE (:Movie {Title : 'Rocky V', Year : '1990'});
CREATE (:Movie {Title : 'The Expendables', Year : '2010'});
CREATE (:Movie {Title : 'The Expendables II', Year : '2012'});
CREATE (:Movie {Title : 'The Karate Kid', Year : '1984'});
CREATE (:Movie {Title : 'The Karate Kid II', Year : '1986'});

Execute the following set of Cypher queries in your Neo4j shell to create the list of artists:

CREATE (:Artist {Name : 'Sylvester Stallone', WorkedAs : ["Actor", "Director"]});
CREATE (:Artist {Name : 'John G. Avildsen', WorkedAs : ["Director"]});
CREATE (:Artist {Name : 'Ralph Macchio', WorkedAs : ["Actor"]});
CREATE (:Artist {Name : 'Simon West', WorkedAs : ["Director"]});

Execute the following set of Cypher queries in your Neo4j shell to create the relationships between artists and movies:

Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky II"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky III"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky IV"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky V"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "The Expendables"}) CREATE (artist)-[:ACTED_IN {Role : "Barney Ross"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "The Expendables II"}) CREATE (artist)-[:ACTED_IN {Role : "Barney Ross"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky II"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky III"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky IV"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "The Expendables"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "John G. Avildsen"}), (movie:Movie {Title: "Rocky"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "John G. Avildsen"}), (movie:Movie {Title: "Rocky V"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "John G. Avildsen"}), (movie:Movie {Title: "The Karate Kid"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "John G. Avildsen"}), (movie:Movie {Title: "The Karate Kid II"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "Ralph Macchio"}), (movie:Movie {Title: "The Karate Kid"}) CREATE (artist)-[:ACTED_IN {Role:"Daniel LaRusso"}]->(movie);
Match (artist:Artist {Name : "Ralph Macchio"}), (movie:Movie {Title: "The Karate Kid II"}) CREATE (artist)-[:ACTED_IN {Role:"Daniel LaRusso"}]->(movie);
Match (artist:Artist {Name : "Simon West"}), (movie:Movie {Title: "The Expendables II"}) CREATE (artist)-[:DIRECTED]->(movie);

Next, browse your data through the Neo4j browser. Click on Get some data from the left navigation pane and then execute the query by clicking on the right arrow sign that will appear on the extreme right corner just below the browser navigation bar; it should look something like this:

Now, let's understand the different pieces of read-only queries and execute them against our movie dataset.

Working with the MATCH clause

MATCH is the most important clause used to fetch data from the database. It accepts a pattern, which defines "What to search?" and "From where to search?". If the latter is not provided, Cypher will scan the whole graph and use indexes (if defined) in order to make searching faster and performance more efficient.

Working with nodes

Let's start asking questions of our movie dataset, form the corresponding Cypher queries, execute them on <$NEO4J_HOME>/bin/neo4j-shell against the movie dataset, and get the results that answer our questions.

How do we get all nodes and their properties?
Answer: MATCH (n) RETURN n;
Explanation: We are instructing Cypher to scan the complete database, capture all nodes in a variable n, and then return the results, which in our case will be printed on the Neo4j shell.

How do we get nodes with specific properties or labels?
Answer (match with a label): MATCH (n:Artist) RETURN n; or MATCH (n:Movie) RETURN n;
Explanation: We are instructing Cypher to scan the complete database and capture all nodes that carry the label Artist or Movie.
Answer (match with a specific property): MATCH (n:Artist {WorkedAs:["Actor"]}) RETURN n;
Explanation: We are instructing Cypher to scan the complete database and capture all nodes that carry the label Artist and have the value ["Actor"] for the property WorkedAs. Since WorkedAs is defined as a collection, we need to use square brackets; in all other cases, we should not use square brackets. We can also return specific columns (similar to SQL); for example, the preceding statement can also be written as MATCH (n:Artist {WorkedAs:["Actor"]}) RETURN n.Name as Name;.

Working with relationships

Let's understand the process of querying relationships in the form of Cypher queries, in the same way as we did in the previous section while working with nodes.

How do we get nodes that are associated or have relationships with other nodes?
Answer: MATCH (n)-[r]-(n1) RETURN n,r,n1;
Explanation: We are instructing Cypher to scan the complete database and capture all nodes, their relationships, and the nodes with which they have relationships in the variables n, r, and n1, and then return/print the results on the Neo4j shell. Also, in the preceding query, we have used - and not -> as we do not care about the direction of the relationships that we retrieve.

How do we get nodes, their associated properties that have some specific type of relationship, or a specific property of a relationship?
Answer: MATCH (n)-[r:ACTED_IN {Role : "Rocky Balboa"}]->(n1) RETURN n,r,n1;
Explanation: We are instructing Cypher to scan the complete database and capture all nodes, their relationships, and related nodes where the relationship is ACTED_IN and has the property Role with the value Rocky Balboa. Also, in the preceding query, we do care about the direction (incoming/outgoing) of the relationship, so we use ->. For matching multiple relationship types, replace [r:ACTED_IN] with [r:ACTED_IN | DIRECTED], and use single quotes or escape characters wherever there are special characters in the name of a relationship.

How do we get a co-artist?
Answer: MATCH (n {Name : "Sylvester Stallone"})-[r]->(x)<-[r1]-(n1) return n.Name as Artist, type(r), x.Title as Movie, type(r1), n1.Name as Artist2;
Explanation: We are trying to find all artists that are related to Sylvester Stallone in some manner or the other. Once you run the preceding query, you will see something like the following image, which should be self-explanatory. Also, note the usage of as and type: as is similar to the SQL construct and is used to give a meaningful name to the column presenting the results, and type is a special keyword that gives the type of the relationship between two nodes.

How do we get the path and number of hops between two nodes?
Answer: MATCH p = (:Movie{Title:"The Karate Kid"})-[:DIRECTED*0..4]-(:Movie{Title:"Rocky V"}) return p;
Explanation: Paths are the routes between two nodes. In the preceding statement, we are trying to find all paths between two nodes that are between 0 (minimum) and 4 (maximum) hops away from each other and are only connected through the relationship DIRECTED. You can also find the paths originating only from a particular node by rewriting the query as MATCH p = (:Movie{Title:"The Karate Kid"})-[:DIRECTED*0..4]-() return p;.
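Earlier we noted that ORDER BY, SKIP, and LIMIT can be combined with RETURN. As a small additional illustration (not one of the book's own examples), the following query lists the five most recent movies in our sample dataset:

MATCH (m:Movie) RETURN m.Title as Movie, m.Year as Year ORDER BY m.Year DESC LIMIT 5;

Because Year is stored as a string, the ordering is lexicographic, which works here since all of the years have four digits.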
Integration of the BI tool – QlikView

In this section, we will talk about the integration of the BI tool QlikView with Neo4j. QlikView is available only on the Windows platform, so this section is only applicable to Windows users. Neo4j, as an open source database, exposes its core APIs for developers to write plugins and extend its intrinsic capabilities. Neo4j JDBC is one such plugin; it enables the integration of Neo4j with various BI/visualization and ETL tools such as QlikView, Jaspersoft, Talend, Hive, HBase, and many more.

Let's perform the following steps for integrating Neo4j with QlikView on Windows.

First, download, install, and configure the required software:

1. Download the Neo4j JDBC driver source code directly from https://github.com/neo4j-contrib/neo4j-jdbc and compile it into a JAR file, or directly download the compiled JAR from http://dist.neo4j.org/neo4j-jdbc/neo4j-jdbc-2.0.1-SNAPSHOT-jar-with-dependencies.jar.
2. Depending upon your Windows platform (32-bit or 64-bit), download the QlikView Desktop Personal Edition. In this article, we will be using QlikView Desktop Personal Edition 11.20.12577.0 SR8.
3. Install QlikView and follow the instructions as they appear on your screen. After installation, you will see the QlikView icon in your Windows Start menu.

QlikView leverages the QlikView JDBC Connector for integration with JDBC data sources, so our next step is to install the QlikView JDBC Connector. Let's perform the following steps to install and configure it:

1. Download the QlikView JDBC Connector either from http://www.tiq-solutions.de/display/enghome/ENJDBC or from https://s3-eu-west-1.amazonaws.com/tiq-solutions/JDBCConnector/JDBCConnector_Setup.zip.
2. Open the downloaded JDBCConnector_Setup.zip file and install the provided connector.
3. Once the installation is complete, open JDBC Connector from your Windows Start menu and click on Activate for a 30-day trial (if you haven't already done so during installation).
4. Create a new profile with the name Qlikview-Neo4j and make it your default profile.
5. Open the JVM VM Options tab and provide the location of jvm.dll, which can be found at <$JAVA_HOME>/jre/bin/client/jvm.dll.
6. Click on Open Log-Folder to check the logs related to the database connections. You can configure the Logging Level and also define JVM runtime options such as -Xmx and -Xms in the textbox provided for Option in the preceding screenshot.
7. Browse through the JDBC Driver tab, click on Add Library, provide the location of your <$NEO4J-JDBC driver>.jar, and add the dependent JAR files. Instead of adding individual libraries, we can also add a folder containing the same list of libraries by clicking on the Add Folder option. We can also use non-JDBC-4-compliant drivers by mentioning the name of the driver class in the Advanced Settings tab; there is no need to do that, however, if you are setting up a configuration profile that uses a JDBC-4 compliant driver.
8. Open the License Registration tab and request an Online Trial license, which will be valid for 30 days. Assuming that you are connected to the Internet, the trial license will be applied immediately.
9. Save your settings and close the QlikView JDBC Connector configuration.

Next, open <$NEO4J_HOME>/bin/neo4j-shell and execute the following Cypher statements one by one to create sample data in your Neo4j database; in further steps, we will visualize this data in QlikView:

CREATE (movies1:Movies {Title : 'Rocky', Year : '1976'});
CREATE (movies2:Movies {Title : 'Rocky II', Year : '1979'});
CREATE (movies3:Movies {Title : 'Rocky III', Year : '1982'});
CREATE (movies4:Movies {Title : 'Rocky IV', Year : '1985'});
CREATE (movies5:Movies {Title : 'Rocky V', Year : '1990'});
CREATE (movies6:Movies {Title : 'The Expendables', Year : '2010'});
CREATE (movies7:Movies {Title : 'The Expendables II', Year : '2012'});
CREATE (movies8:Movies {Title : 'The Karate Kid', Year : '1984'});
CREATE (movies9:Movies {Title : 'The Karate Kid II', Year : '1986'});

Now perform the following steps in QlikView:

1. Open the QlikView Desktop Personal Edition and create a new view by navigating to File | New. The Getting Started wizard may appear as we are creating a new view; close this wizard.
2. Navigate to File | EditScript and change your database to JDBCConnector.dll (32).
3. In the same window, click on Connect and enter jdbc:neo4j://localhost:7474/ in the url box. Leave the username and password empty and click on OK. You will see that a CUSTOM CONNECT TO statement is added in the box provided.
4. Next, insert the highlighted Cypher statements in the provided window just below the CUSTOM CONNECT TO statement.
5. Save the script and close the EditScript window.
6. Now, on your QlikView sheet, execute the script by pressing Ctrl + R on your keyboard.
7. Next, add a new Table Object on your QlikView sheet, select MovieTitle from the provided fields, and click on OK.

And we are done! You will see the data appearing in the listbox in the newly created Table Object.
The data is fetched from the Neo4j database and QlikView is used to render it. The same process can be used to connect other JDBC-compliant BI/visualization/ETL tools such as Jasper, Talend, Hive, HBase, and so on; we just need to define the appropriate JDBC Type-4 drivers in the JDBC Connector. We can also use the ODBC-JDBC Bridge provided by EasySoft at http://www.easysoft.com/products/data_access/odbc_jdbc_gateway/index.html. EasySoft provides the ODBC-JDBC Gateway, which facilitates ODBC access from applications such as MS Access, MS Excel, Delphi, and C++ to Java databases. It is a fully functional ODBC 3.5 driver that allows you to access any JDBC data source from any ODBC-compatible application.

Summary

In this article, you learned the basic concepts of data modeling in Neo4j and walked through the process of BI integration with Neo4j.

Resources for Article:

Further resources on this subject:
- Working Neo4j Embedded Database [article]
- Introducing Kafka [article]
- First Steps With R [article]


The Spark Programming Model

Packt
20 Feb 2015
13 min read
In this article by Nick Pentreath, author of the book Machine Learning with Spark, we will delve into a high-level overview of Spark's design and introduce the SparkContext object as well as the Spark shell, which we will use to interactively explore the basics of the Spark programming model. While this section provides a brief overview and examples of using Spark, we recommend that you read the following documentation to get a detailed understanding:

- Spark Quick Start: http://spark.apache.org/docs/latest/quick-start.html
- Spark Programming Guide, which covers Scala, Java, and Python: http://spark.apache.org/docs/latest/programming-guide.html

(For more resources related to this topic, see here.)

SparkContext and SparkConf

The starting point of writing any Spark program is SparkContext (or JavaSparkContext in Java). SparkContext is initialized with an instance of a SparkConf object, which contains various Spark cluster-configuration settings (for example, the URL of the master node). Once initialized, we will use the various methods found in the SparkContext object to create and manipulate distributed datasets and shared variables. The Spark shell (in both Scala and Python, which is unfortunately not supported in Java) takes care of this context initialization for us, but the following lines of code show an example of creating a context running in the local mode in Scala:

val conf = new SparkConf().setAppName("Test Spark App").setMaster("local[4]")
val sc = new SparkContext(conf)

This creates a context running in the local mode with four threads, with the name of the application set to Test Spark App. If we wish to use default configuration values, we could also call the following simple constructor for our SparkContext object, which works in exactly the same way:

val sc = new SparkContext("local[4]", "Test Spark App")

The Spark shell

Spark supports writing programs interactively using either the Scala or Python REPL (that is, the Read-Eval-Print-Loop, or interactive shell). The shell provides instant feedback as we enter code, as this code is immediately evaluated. In the Scala shell, the return result and type is also displayed after a piece of code is run.

To use the Spark shell with Scala, simply run ./bin/spark-shell from the Spark base directory. This will launch the Scala shell and initialize SparkContext, which is available to us as the Scala value sc. Your console output should look similar to the following screenshot:

To use the Python shell with Spark, simply run the ./bin/pyspark command. Like the Scala shell, the Python SparkContext object should be available as the Python variable sc. You should see an output similar to the one shown in this screenshot:

Resilient Distributed Datasets

The core of Spark is a concept called the Resilient Distributed Dataset (RDD). An RDD is a collection of "records" (strictly speaking, objects of some type) that is distributed or partitioned across many nodes in a cluster (for the purposes of the Spark local mode, the single multithreaded process can be thought of in the same way). An RDD in Spark is fault-tolerant; this means that if a given node or task fails (for some reason other than erroneous user code, such as hardware failure, loss of communication, and so on), the RDD can be reconstructed automatically on the remaining nodes and the job will still complete.
Creating RDDs

RDDs can be created from existing collections, for example, in the Scala Spark shell that you launched earlier:

val collection = List("a", "b", "c", "d", "e")
val rddFromCollection = sc.parallelize(collection)

RDDs can also be created from Hadoop-based input sources, including the local filesystem, HDFS, and Amazon S3. A Hadoop-based RDD can utilize any input format that implements the Hadoop InputFormat interface, including text files, other standard Hadoop formats, HBase, Cassandra, and many more. The following code is an example of creating an RDD from a text file located on the local filesystem:

val rddFromTextFile = sc.textFile("LICENSE")

The preceding textFile method returns an RDD where each record is a String object that represents one line of the text file.

Spark operations

Once we have created an RDD, we have a distributed collection of records that we can manipulate. In Spark's programming model, operations are split into transformations and actions. Generally speaking, a transformation operation applies some function to all the records in the dataset, changing the records in some way. An action typically runs some computation or aggregation operation and returns the result to the driver program where SparkContext is running.

Spark operations are functional in style. For programmers familiar with functional programming in Scala or Python, these operations should seem natural. For those without experience in functional programming, don't worry; the Spark API is relatively easy to learn.

One of the most common transformations that you will use in Spark programs is the map operator. This applies a function to each record of an RDD, thus mapping the input to some new output. For example, the following code fragment takes the RDD we created from a local text file and applies the size function to each record in the RDD. Remember that we created an RDD of Strings. Using map, we can transform each string to an integer, thus returning an RDD of Ints:

val intsFromStringsRDD = rddFromTextFile.map(line => line.size)

You should see output similar to the following line in your shell; this indicates the type of the RDD:

intsFromStringsRDD: org.apache.spark.rdd.RDD[Int] = MappedRDD[5] at map at <console>:14

In the preceding code, we saw the => syntax used. This is the Scala syntax for an anonymous function, which is a function that is not a named method (that is, one defined using the def keyword in Scala or Python, for example). The line => line.size syntax means that we are applying a function where the input variable is to the left of the => operator, and the output is the result of the code to the right of the => operator. In this case, the input is line, and the output is the result of calling line.size. In Scala, this function that maps a string to an integer is expressed as String => Int. This syntax saves us from having to separately define functions every time we use methods such as map; this is useful when the function is simple and will only be used once, as in this example.

Now, we can apply a common action operation, count, to return the number of records in our RDD:

intsFromStringsRDD.count

The result should look something like the following console output:

14/01/29 23:28:28 INFO SparkContext: Starting job: count at <console>:17
...
14/01/29 23:28:28 INFO SparkContext: Job finished: count at <console>:17, took 0.019227 s
res4: Long = 398

Perhaps we want to find the average length of each line in this text file.
We can first use the sum function to add up all the lengths of all the records and then divide the sum by the number of records:

val sumOfRecords = intsFromStringsRDD.sum
val numRecords = intsFromStringsRDD.count
val aveLengthOfRecord = sumOfRecords / numRecords

The result will be as follows:

aveLengthOfRecord: Double = 52.06030150753769

Spark operations, in most cases, return a new RDD, with the exception of most actions, which return the result of a computation (such as Long for count and Double for sum in the preceding example). This means that we can naturally chain together operations to make our program flow more concise and expressive. For example, the same result as the one in the preceding line of code can be achieved using the following code:

val aveLengthOfRecordChained = rddFromTextFile.map(line => line.size).sum / rddFromTextFile.count

An important point to note is that Spark transformations are lazy. That is, invoking a transformation on an RDD does not immediately trigger a computation. Instead, transformations are chained together and are effectively only computed when an action is called. This allows Spark to be more efficient by only returning results to the driver when necessary so that the majority of operations are performed in parallel on the cluster. This means that if your Spark program never uses an action operation, it will never trigger an actual computation, and you will not get any results. For example, the following code will simply return a new RDD that represents the chain of transformations:

val transformedRDD = rddFromTextFile.map(line => line.size).filter(size => size > 10).map(size => size * 2)

This returns the following result in the console:

transformedRDD: org.apache.spark.rdd.RDD[Int] = MappedRDD[8] at map at <console>:14

Notice that no actual computation happens and no result is returned. If we now call an action, such as sum, on the resulting RDD, the computation will be triggered:

val computation = transformedRDD.sum

You will now see that a Spark job is run, and it results in the following console output:

...
14/11/27 21:48:21 INFO SparkContext: Job finished: sum at <console>:16, took 0.193513 s
computation: Double = 60468.0

The complete list of transformations and actions possible on RDDs, as well as a set of more detailed examples, is available in the Spark Programming Guide (located at http://spark.apache.org/docs/latest/programming-guide.html#rdd-operations), and the Scala API documentation is located at http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.rdd.RDD.

Caching RDDs

One of the most powerful features of Spark is the ability to cache data in memory across a cluster. This is achieved through use of the cache method on an RDD:

rddFromTextFile.cache

Calling cache on an RDD tells Spark that the RDD should be kept in memory. The first time an action is called on the RDD that initiates a computation, the data is read from its source and put into memory. Hence, the first time such an operation is called, the time it takes to run the task is partly dependent on the time it takes to read the data from the input source. However, when the data is accessed the next time (for example, in subsequent queries in analytics or iterations in a machine learning model), the data can be read directly from memory, thus avoiding expensive I/O operations and speeding up the computation, in many cases, by a significant factor.
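The cache call used here keeps data in memory only. As a brief aside (this snippet is not from the book; persist and StorageLevel are part of the standard RDD API), you can request a different storage level explicitly, for example allowing partitions that do not fit in memory to spill to local disk:

import org.apache.spark.storage.StorageLevel

// Like cache, but partitions that do not fit in memory are written to disk
// rather than being recomputed from the source when they are needed again
rddFromTextFile.persist(StorageLevel.MEMORY_AND_DISK)

Calling cache is simply shorthand for persist(StorageLevel.MEMORY_ONLY).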
If we now call the count or sum function on our cached RDD, we will see that the RDD is loaded into memory:

val aveLengthOfRecordChained = rddFromTextFile.map(line => line.size).sum / rddFromTextFile.count

Indeed, in the following output, we see that the dataset was cached in memory on the first call, taking up approximately 62 KB and leaving us with around 270 MB of memory free:

...
14/01/30 06:59:27 INFO MemoryStore: ensureFreeSpace(63454) called with curMem=32960, maxMem=311387750
14/01/30 06:59:27 INFO MemoryStore: Block rdd_2_0 stored as values to memory (estimated size 62.0 KB, free 296.9 MB)
14/01/30 06:59:27 INFO BlockManagerMasterActor$BlockManagerInfo: Added rdd_2_0 in memory on 10.0.0.3:55089 (size: 62.0 KB, free: 296.9 MB)
...

Now, we will call the same function again:

val aveLengthOfRecordChainedFromCached = rddFromTextFile.map(line => line.size).sum / rddFromTextFile.count

We will see from the console output that the cached data is read directly from memory:

...
14/01/30 06:59:34 INFO BlockManager: Found block rdd_2_0 locally
...

Spark also allows more fine-grained control over caching behavior. You can use the persist method to specify what approach Spark uses to cache data. More information on RDD caching can be found here: http://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence.

Broadcast variables and accumulators

Another core feature of Spark is the ability to create two special types of variables: broadcast variables and accumulators. A broadcast variable is a read-only variable that is made available from the driver program that runs the SparkContext object to the nodes that will execute the computation. This is very useful in applications that need to make the same data available to the worker nodes in an efficient manner, such as machine learning algorithms. Spark makes creating broadcast variables as simple as calling a method on SparkContext as follows:

val broadcastAList = sc.broadcast(List("a", "b", "c", "d", "e"))

The console output shows that the broadcast variable was stored in memory, taking up approximately 488 bytes, and it also shows that we still have 270 MB available to us:

14/01/30 07:13:32 INFO MemoryStore: ensureFreeSpace(488) called with curMem=96414, maxMem=311387750
14/01/30 07:13:32 INFO MemoryStore: Block broadcast_1 stored as values to memory (estimated size 488.0 B, free 296.9 MB)
broadCastAList: org.apache.spark.broadcast.Broadcast[List[String]] = Broadcast(1)

A broadcast variable can be accessed from nodes other than the driver program that created it (that is, the worker nodes) by calling value on the variable:

sc.parallelize(List("1", "2", "3")).map(x => broadcastAList.value ++ x).collect

This code creates a new RDD with three records from a collection (in this case, a Scala List) of ("1", "2", "3"). In the map function, it returns a new collection with the relevant record from our new RDD appended to the broadcastAList that is our broadcast variable.

Notice that we used the collect method in the preceding code. This is a Spark action that returns the entire RDD to the driver as a Scala (or Python or Java) collection. We will often use collect when we wish to apply further processing to our results locally within the driver program. Note that collect should generally only be used in cases where we really want to return the full result set to the driver and perform further processing.
If we try to call collect on a very large dataset, we might run out of memory on the driver and crash our program. It is preferable to perform as much heavy-duty processing on our Spark cluster as possible, preventing the driver from becoming a bottleneck. In many cases, however, collecting results to the driver is necessary, such as during iterations in many machine learning models.

On inspecting the result, we will see that for each of the three records in our new RDD, we now have a record that is our original broadcasted List, with the new element appended to it (that is, there is now either "1", "2", or "3" at the end):

...
14/01/31 10:15:39 INFO SparkContext: Job finished: collect at <console>:15, took 0.025806 s
res6: Array[List[Any]] = Array(List(a, b, c, d, e, 1), List(a, b, c, d, e, 2), List(a, b, c, d, e, 3))

An accumulator is also a variable that is broadcasted to the worker nodes. The key difference between a broadcast variable and an accumulator is that while the broadcast variable is read-only, the accumulator can be added to. There are limitations to this: in particular, the addition must be an associative operation so that the global accumulated value can be correctly computed in parallel and returned to the driver program. Each worker node can only access and add to its own local accumulator value, and only the driver program can access the global value. Accumulators are also accessed within the Spark code using the value method.

For more details on broadcast variables and accumulators, see the Shared Variables section of the Spark Programming Guide: http://spark.apache.org/docs/latest/programming-guide.html#shared-variables.

Summary

In this article, we learned the basics of Spark's programming model and API using the interactive Scala console.

Resources for Article:

Further resources on this subject:

Ridge Regression [article]
Clustering with K-Means [article]
Machine Learning Examples Applicable to Businesses [article]

Visualize This!

Packt
19 Feb 2015
17 min read
This article, written by Michael Phillips, the author of the book TIBCO Spotfire: A Comprehensive Primer, discusses how human beings are fundamentally visual in the way they process information. The invention of writing was as much about visually representing our thoughts to others as it was about record keeping and accountancy. In the modern world, we are bombarded with formalized visual representations of information, from the ubiquitous opinion poll pie chart to clever and sophisticated infographics. The website http://data-art.net/resources/history_of_vis.php provides an informative and entertaining quick history of data visualization. If you want a truly breathtaking demonstration of the power of data visualization, seek out Hans Rosling's The best stats you've ever seen at http://ted.com.

(For more resources related to this topic, see here.)

We will spend time getting to know some of Spotfire's data capabilities. It's important that you continue to think about data: how it's structured, how it's related, and where it comes from. Building good visualizations requires visual imagination, but it also requires data literacy. This article is all about getting you to think about the visualization of information and empowering you to use Spotfire to do so. Apart from learning the basic features and properties of the various Spotfire visualization types, there is much more to learn about the seamless interactivity that Spotfire allows you to build in to your analyses. We will be taking a close look at 7 of the 16 visualization types provided by Spotfire, but these 7 visualization types are the most commonly used. We will cover the following topics:

Displaying information quickly in tabular form
Enriching your visualizations with color categorization
Visualizing categorical information using bar charts
Dividing a visualization across a trellis grid
Key Spotfire concept—marking
Visualizing trends using line charts
Visualizing proportions using pie charts
Visualizing relationships using scatter plots
Visualizing hierarchical relationships using treemaps
Key Spotfire concept—filters
Enhancing tabular presentations using graphical tables

Now let's have some fun!

Displaying information quickly in tabular form

While working through the data examples, we used the Spotfire Table visualization, but now we're going to take a closer look. People will nearly always want to see the "underlying data", the details behind any visualization you create. The Table visualization meets this need. It's very important not to confuse a table in the general data sense with the Spotfire Table visualization; the underlying data table remains immutable and complete in the background. The Table visualization is a highly manipulatable view of the underlying data table and should be treated as a visualization, not a data table. The data used here is BaseballPlayerData.xls.

There is always more than one way to do the same thing in Spotfire, and this is particularly true for the manipulation of visualizations. Let's start with some very quick manipulations:

First, insert a table visualization by going to the Insert menu, selecting New Visualization, and then Table.
To move a column, left-click on the column name, hold, and drag it.
To sort by a column, left-click on the column name.
To sort by more than one column, left-click on the first column name and then press Shift + left-click on the subsequent columns in order of sort precedence.
To widen or narrow a column, hover the mouse over the right-hand edge of the column title until you see the cursor change to a two-way arrow, and then click and drag it. These and other properties of the Table visualization are also accessed via visualization properties. As you work through the various Spotfire visualizations, you'll notice that some types have more options than others, but there are common trends and an overall consistency in conventions. Visualization properties can be opened in a number of ways: By right-clicking on the visualization, a table in this case, and selecting Properties. By going to the Edit menu and selecting Visualization Properties. By clicking on the Visualization Properties icon, as shown in the following screenshot, in the icon tray below the main menu bar. It's beyond the scope of this book to explore every property and option. The context-sensitive help provided by Spotfire is excellent and explains all the options in glorious detail. I'd like to highlight four important properties of the Table visualization: The General property allows you to change the table visualization title, not the name of the underlying data table. It also allows you to hide the title altogether. The Data property allows you to switch the underlying data table, if you have more than one table loaded into your analysis. The Columns property allows you to hide columns and order the columns you do want to show. The Show/Hide Items property allows you to limit what is shown by a rule you define, such as top five hitters. After clicking on the Add button, you select the relevant column from a dropdown list, choose Rule type (Top), and finally, choose Value for the rule (5). The resulting visualization will only show the rows of data that meet the rule you defined. Enriching your visualizations with color categorization Color is a strong feature in Spotfire and an important visualization tool, often underestimated by report creators. It can be seen as merely a nice-to-have customization, but paying attention to color can be the difference between creating a stimulating and intuitive data visualization rather than an uninspiring and even confusing corporate report. Take some pride and care in the visual aesthetics of your analytics creations! Let's take a look at the color properties of the Table visualization. Open the Table visualization properties again, select Colors, and then Add the column Runs. Now, you can play with a color gradient, adding points by clicking on the Add Point button and customizing the colors. It's as easy as left-clicking on any color box and then selecting from a prebuilt palette or going into a full RGB selection dialog by choosing More Colors…. The result is a heatmap type effect for runs scored, with yellow representing low run totals, transitioning to green as the run total approaches the average value in the data, and becoming blue for the highest run totals. Visualizing categorical information using bar charts We saw how the Table visualization is perfect for showing and ordering detailed information. It's quite similar to a spreadsheet. The Bar Chart visualization is very good for visualizing categorical information, that is, where you have categories with supporting hard numbers—sales by region, for example. The region is the category, whereas the sales is the hard number or fact. Bar charts are typically used to show a distribution. 
Depending on your data or your analytic requirement, the bars can be ordered by value, placed side by side, stacked on top of each other, or arranged vertically or horizontally. There is a special case of the category and value combination and that is where you want to plot the frequencies of a set of numerical values. This type of bar chart is referred to as a histogram, and although it is number against number, it is still, in essence, a distribution plot. It is very common in fact to transform the continuous number range in such cases into a set of discrete bins or categories for the plot. For example, you could take some demographic data and plot age as the category and the number of people at that age as the value (the frequency) on a bar chart. The result, for a general population, would approach a bell-shaped curve. Let's create a bar chart using the baseball data. The data we will use is BaseballPlayerData.xls, which you can download from http://www.insidespotfire.com. Create a new page by right-clicking on any page tab and selecting New Page. You can also select New Page from the Insert menu or click on the new page icon in the icon bar below the main menu. Create a Bar Chart visualization by left-clicking on the bar chart icon or by selecting New Visualization and then Bar Chart from the Insert menu. Spotfire will automatically create a default chart, that is, rarely exactly what you want, so the next step is to configure the chart. Two distributions might be interesting to look at: the distribution of home runs across all the teams and the distribution of player salaries across all the teams. The axes are easy to change; simply use the axes selectors.   If the bars are vertical, it means that the category—Team, in our case—should be on the horizontal axis, with the value—Home Runs or Salary—on the vertical axis, representing the height of the bars.   We're going to pick Home Runs from the vertical axis selector and then an appropriate aggregation dropdown, which is highlighted in red in the screenshot. Sum would be a valid option, but let's go with Avg (Average). Similarly, select Team from the horizontal axis dropdown selector. The vertical, or value, axis must be an aggregation because there is more than one home run value for each category. You must decide if you want a sum, an average, a minimum, and so on. You can modify the visualization properties just as you did for the Table visualization. Some of the options are the same; some are specific to the bar chart. We're going to select the Sort bars by value option in the Appearance property. This will order the bars in descending order of value. We're also going to check the option Vertically under Scale labels | Show labels for the Category Axis property. There are two more actions to perform: create an identical bar chart except with average salary as the value axis, and give each bar chart an appropriate title (Visualization Properties|General|Title:). To copy an existing visualization, simply right-click on it and select Duplicate Visualization. We can now compare the distribution of home run average and salary average across all the baseball teams, but there's a better way to do this in a single visualization using color. Close the salary distribution bar chart by left-clicking on X in the upper right-hand corner of the visualization (X appears when you hover the mouse) or right-clicking on the visualization and selecting Close. 
Now, open the home run bar chart visualization properties, go to the Colors property, and color by Avg(Salary). Select a Gradient color mode, and add a median point by clicking on the Add Point button and selecting Median from the dropdown list of options on the added point. Finally, choose a suitable heat map range of colors; something like blue (min) through pale yellow (median) through red (max). You will still see the distribution of home runs across the baseball teams, but now you will have a superimposed salary heat map. Texas and Cleveland appear to be getting much more bang for their buck than the NY Yankees. Dividing a visualization across a trellis grid Trellising, whereby you divide a series of visualizations into individual panels, is a useful technique when you want to subdivide your analysis. In the example we've been working with, we might, for instance, want to split the visualization by league. Open the visualization properties for the home runs distribution bar chart colored by salary and select the Trellis property. Go to Panels and split by League (use the dropdown column selector). Spotfire allows you to build layers of information with even basic visualizations such as the bar chart. In one chart, we see the home run distribution by team, salary distribution by team, and breakdown by league. Key Spotfire concept – marking It's time to introduce one of the most important Spotfire concepts, called marking, which is central to the interactivity that makes Spotfire such a powerful analysis tool. Marking refers to the action of selecting data in a visualization. Every element you see is selectable, or markable, that is, a single row or multiple rows in a table, a single bar or multiple bars in a bar chart. You need to understand two aspects to marking. First, there is the visual effect, or color(s) you see, when you mark (select) visualization elements. Second, there is the behavior that follows marking: what happens to data and the display of data when you mark something. How to change the marking color From Spotfire v5.5 onward, you can choose, on a visualization-by-visualization basis, two distinct visual effects for marking: Use a separate color for marked items: all marked items are uniformly colored with the marking color, and all unmarked items retain their existing color. Keep existing color attributes and fade out unmarked items: all marked items keep their existing color, and all unmarked items also keep their existing color but with a high degree of color fade applied, leaving the marked items strongly highlighted. The second option is not available in versions older than v5.5 but is the default option in Versions 5.5 onward. The setting is made in the visualization's Appearance property by checking or unchecking the option Use separate color for marked items. The default color when using a separate color for marked items is dark green, but this can be changed by going to Edit|Document Properties|Markings|Edit. The new option has the advantage of retaining any underlying coloring you defined, but you might not like how the rest of the chart is washed out. Which approach you choose depends on what information you think is critical for your particular situation. When you create a new analysis, a default marking is created and applied to every visualization you create by default. You can change the color of the marking in Document Properties, which is found in the Edit menu. 
Just open Document Properties, click on the Markings tab, select the marking, click on the Edit button, and change the color. You can also create as many markings as you need, giving them convenient names for reference purposes, but we'll just focus on using one for now. How to set the marking behavior of a visualization Marking behavior depends fundamentally on data relationships. The data within a single data table is intrinsically related; the data in separate data tables must be explicitly related before you configure marking behavior for visualizations based on separate datasets. When you mark something in a visualization, five things can happen depending on the data involved and how you configured your visualizations: Conditions Behavior Two visualizations with the same underlying data table (they can be on different pages in the analysis file) and the same marking scheme applied. Marking data on one visualization will automatically mark the same data on the other. Two visualizations with related underlying data tables and the same marking scheme applied. The same as the previous condition's behavior, but subject to differences in data granularity. For example, marking a baseball team in one visualization will mark all the team's players in another visualization that is based on a more detailed table related by team. Two visualizations with the same or related data tables where one has been configured with data dependency on the marking in the other. Nothing will display in the marking-dependent visualization other than what is marked in the reference visualization. Visualizations with unrelated underlying data tables. No marking interaction will occur, and the visualizations will mark completely independently of one another. Two visualizations with the same underlying data table or related data tables and with different marking schemes applied. Marking data on one visualization will not show on the other because the marking schemes are different. Here's how we set these behaviors: Open the visualization properties of the bar chart we have been working with and navigate to the Data property.   You'll notice that two settings refer to marking: Marking and Limit data using markings. Use the dropdown under Marking to select the marking to be used for the visualization. Having no marking is an option. Visualizations with the same marking will display synchronous selection, subject to the data relation conditions described earlier. The options under Limit data using markings determine how the visualization will be limited to marking elsewhere in the analysis. The default here is no dependency. If you select a marking, then the visualization will only display data selected elsewhere with that marking. It's not good to have the same marking for Marking and Limit data using markings. If you are using the limit data setting, select no marking, or create a second marking and select it under Marking. You're possibly a bit confused by now. Fortunately, marking is much harder to describe than to use! Let's build a tangible example. We'll start a new analysis, so close any analysis you have open and create a new one, loading the player-level baseball data (BaseballPlayerData.xls). Add two bar charts and a table. You can rearrange the layout by left-clicking on the title bar of a visualization, holding, and dragging it. Position the visualizations any way you wish, but you can place the two bar charts side by side, with the table below them spanning both. 
Save your analysis file at this point and at regular intervals. It's good behavior to save regularly as you build an analysis. It will save you a lot of grief if your PC fails in any way. There is no autosave function in Spotfire. For the first bar chart, set the following visualization properties: Property Value General | Title Home Runs Data | Marking Marking Data | Limit data using markings Nothing checked Appearance | Orientation Vertical bars Appearance | Sort bars by value Check Category Axis | Columns Team Value Axis | Columns Avg(Home Runs) Colors | Columns Avg(Salary) Colors | Color mode Gradient Add Point for median Max = strong red; Median = pale yellow; Min = strong blue Labels | Show labels for Marked Rows Labels | Types of labels | Complete bar Check For the second bar chart, set the following visualization properties: Property Value General | Title Roster Data | Marking Marking Data | Limit data using markings Nothing checked Appearance | Orientation Horizontal bars Appearance | Sort bars by value Check Category Axis | Columns Team Value Axis | Columns Count(Player Name) Colors | Columns Position Colors | Color mode Categorical For the table, set the following visualization properties: Property Value General | Title Details Data | Marking (None) Data | Limit data using markings Check Marking   Columns Team, Player Name, Games Played, Home Runs, Salary, Position Now start selecting visualization elements with your mouse. You can click on elements such as bars or segments of bars, or you can click and drag a rectangular block around multiple elements. When you select a bar on the Home Runs bar chart, the corresponding team bar automatically selects the Roster bar chart, and details for all the players in that team display in the Details table. When you select a bar segment on the Roster bar chart, the corresponding team bar automatically selects on the Home Runs bar chart and only players in the selected position for the team selected appear in the details. There are some very useful additional functions associated with marking, and you can access these by right-clicking on a marked item. They are Unmark, Invert, Delete, Filter To, and Filer Out. You can also unmark by left-clicking on any blank space in the visualization. Play with this analysis file until you are comfortable with the marking concept and functionality. Summary This article is a small taste of the book TIBCO Spotfire: A comprehensive primer. You've seen how the Table visualization is an easy and traditional way to display detailed information in tabular form and how the Bar Chart visualization is excellent for visualizing categorical information, such as distributions. You've learned how to enrich visualizations with color categorization and how to divide a visualization across a trellis grid. You've also been introduced to the key Spotfire concept of marking. Apart from gaining a functional understanding of these Spotfire concepts and techniques, you should have gained some insight into the science and art of data visualization. Resources for Article: Further resources on this subject: The Spotfire Architecture Overview [article] Interacting with Data for Dashboards [article] Setting Up and Managing E-mails and Batch Processing [article]

Using NLP APIs

Packt
12 Feb 2015
22 min read
In this article by Richard M Reese, author of the book, Natural Language Processing with Java, we will demonstrate the NER process using OpenNLP, Stanford API, and LingPipe. They each provide alternate techniques that can often do a good job identifying entities in text. The following declaration will serve as the sample text to demonstrate the APIs: String sentences[] = {"Joe was the last person to see Fred. ", "He saw him in Boston at McKenzie's pub at 3:00 where he paid "    + "$2.45 for an ale. ",    "Joe wanted to go to Vermont for the day to visit a cousin who "    + "works at IBM, but Sally and he had to look for Fred"}; Using OpenNLP for NER We will demonstrate the use of the TokenNameFinderModel class to perform NLP using the OpenNLP API. In addition, we will demonstrate how to determine the probability that the entity identified is correct. The general approach is to convert the text into a series of tokenized sentences, create an instance of the TokenNameFinderModel class using an appropriate model, and then use the find method to identify the entities in the text. The next example demonstrates the use of the TokenNameFinderModel class. We will use a simple sentence initially and then use multiple sentences. The sentence is defined here:    String sentence = "He was the last person to see Fred."; We will use the models found in the en-token.bin and en-ner-person.bin files for the tokenizer and name finder models respectively. InputStream for these files is opened using a try-with-resources block as shown here:    try (InputStream tokenStream = new FileInputStream(            new File(getModelDir(), "en-token.bin"));            InputStream modelStream = new FileInputStream(                new File(getModelDir(), "en-ner-person.bin"));) {        ...      } catch (Exception ex) {        // Handle exceptions    } Within the try block, the TokenizerModel and Tokenizer objects are created:    TokenizerModel tokenModel = new TokenizerModel(tokenStream);    Tokenizer tokenizer = new TokenizerME(tokenModel); Next, an instance of the NameFinderME class is created using the person model:    TokenNameFinderModel entityModel =        new TokenNameFinderModel(modelStream);    NameFinderME nameFinder = new NameFinderME(entityModel); We can now use the tokenize method to tokenize the text and the find method to identify the person in the text. The find method will use the tokenized String array as input and return an array of the Span objects as follows:    String tokens[] = tokenizer.tokenize(sentence);    Span nameSpans[] = nameFinder.find(tokens); The following for statement displays the person found in the sentence. Its positional information and the person are displayed on separate lines:    for (int i = 0; i < nameSpans.length; i++) {        System.out.println("Span: " + nameSpans[i].toString());        System.out.println("Entity: "            + tokens[nameSpans[i].getStart()]);    } The output is as follows: Span: [7..9) person Entity: Fred Often we will work with multiple sentences. To demonstrate this we will use the previously defined sentences string array. The previous for statement is replaced with the following sequence. 
The tokenize method is invoked against each sentence and then the entity information is displayed as before:    for (String sentence : sentences) {        String tokens[] = tokenizer.tokenize(sentence);        Span nameSpans[] = nameFinder.find(tokens);        for (int i = 0; i < nameSpans.length; i++) {            System.out.println("Span: " + nameSpans[i].toString());            System.out.println("Entity: "                + tokens[nameSpans[i].getStart()]);        }        System.out.println();    } The output is as follows. There is an extra blank line between the two people detected because the second sentence did not contain a person. Span: [0..1) person Entity: Joe Span: [7..9) person Entity: Fred     Span: [0..1) person Entity: Joe Span: [19..20) person Entity: Sally Span: [26..27) person Entity: Fred Determining the accuracy of the entity When TokenNameFinderModel identifies entities in text, it computes a probability for that entity. We can access this information using the probs method as shown next. The method returns an array of doubles, which corresponds to the elements of the nameSpans array:    double[] spanProbs = nameFinder.probs(nameSpans); Add this statement to the previous example immediately after the use of the find method. Then add the next statement at the end of the nested for statement:    System.out.println("Probability: " + spanProbs[i]); When the example is executed, you will get the following output. The probability fields reflect the confidence level of the entity assignment. For the first entity, the model is 80.529 percent confident that Joe is a person: Span: [0..1) person Entity: Joe Probability: 0.8052914774025202 Span: [7..9) person Entity: Fred Probability: 0.9042160889302772     Span: [0..1) person Entity: Joe Probability: 0.9620970782763985 Span: [19..20) person Entity: Sally Probability: 0.964568603518126 Span: [26..27) person Entity: Fred Probability: 0.990383039618594 Using other entity types OpenNLP supports different libraries as listed in the following table. These models can be downloaded from http://opennlp.sourceforge.net/models-1.5/. The prefix, en, specifies English as the language while ner indicates that the model is for NER. English finder models File name Location name finder model en-ner-location.bin Money name finder model en-ner-money.bin Organization name finder model en-ner-organization.bin Percentage name finder model en-ner-percentage.bin Person name finder model en-ner-person.bin Time name finder model en-ner-time.bin If we modify the statement to use a different model file, we can see how they work against the sample sentences:    InputStream modelStream = new FileInputStream(        new File(getModelDir(), "en-ner-time.bin"));) { When the en-ner-money.bin model is used, the index into the tokens array in the earlier code sequence has to be increased by one. Otherwise, all that is returned is the dollar sign. The various outputs are shown in the following table. Model Output en-ner-location.bin Span: [4..5) location Entity: Boston Probability: 0.8656908776583051 Span: [5..6) location Entity: Vermont Probability: 0.9732488014011262 en-ner-money.bin Span: [14..16) money Entity: 2.45 Probability: 0.7200919701507937 en-ner-organization.bin Span: [16..17) organization Entity: IBM Probability: 0.9256970736336729 en-ner-time.bin The model was not able to detect time in this text sequence The model failed to find the time entities in the sample text. 
This illustrates that the model did not have enough confidence that it found any time entities in the text. Processing multiple entities types We can also handle multiple entity types at the same time. This involves creating instances of NameFinderME, based on each model within a loop and applying the model against each sentence, keeping track of the entities as they are found. We will illustrate this process with the next example. It requires rewriting the previous try block to create the InputStream within the block as shown here:    try {        InputStream tokenStream = new FileInputStream(            new File(getModelDir(), "en-token.bin"));        TokenizerModel tokenModel = new TokenizerModel(tokenStream);        Tokenizer tokenizer = new TokenizerME(tokenModel);        ...    } catch (Exception ex) {        // Handle exceptions    } Within the try block, we will define a string array to hold the names of the model files. As shown here, we will use models for people, locations, and organizations:    String modelNames[] = {"en-ner-person.bin",        "en-ner-location.bin", "en-ner-organization.bin"}; ArrayList is created to hold the entities as they are discovered:    ArrayList<String> list = new ArrayList(); A for-each statement is used to load one model at a time and then to create an instance of the NameFinderME class:    for(String name : modelNames) {        TokenNameFinderModel entityModel = new TokenNameFinderModel(            new FileInputStream(new File(getModelDir(), name)));        NameFinderME nameFinder = new NameFinderME(entityModel);        ...    } Previously, we did not try to identify which sentences the entities were found in. This is not hard to do, but we need to use a simple for statement instead of a for-each statement to keep track of the sentence indexes. This is shown next where the previous example has been modified to use the integer variable index to keep the sentences. Otherwise, the code works the same way as before:    for (int index = 0; index < sentences.length; index++) {        String tokens[] = tokenizer.tokenize(sentences[index]);        Span nameSpans[] = nameFinder.find(tokens);        for(Span span : nameSpans) {            list.add("Sentence: " + index                + " Span: " + span.toString() + " Entity: "                + tokens[span.getStart()]);        }    } The entities discovered are then displayed:    for(String element : list) {        System.out.println(element);    } The output is as follows: Sentence: 0 Span: [0..1) person Entity: Joe Sentence: 0 Span: [7..9) person Entity: Fred Sentence: 2 Span: [0..1) person Entity: Joe Sentence: 2 Span: [19..20) person Entity: Sally Sentence: 2 Span: [26..27) person Entity: Fred Sentence: 1 Span: [4..5) location Entity: Boston Sentence: 2 Span: [5..6) location Entity: Vermont Sentence: 2 Span: [16..17) organization Entity: IBM Using the Stanford API for NER We will demonstrate the CRFClassifier class as used to perform NER. This class implements what is known as a linear chain Conditional Random Field (CRF) sequence model. To demonstrate the use of the CRFClassifier class we will start with a declaration of the classifier file string as shown here:    String model = getModelDir() +        "\english.conll.4class.distsim.crf.ser.gz"; The classifier is then created using the model:    CRFClassifier<CoreLabel> classifier =        CRFClassifier.getClassifierNoExceptions(model); The classify method takes a single string representing the text to be processed. 
To use the sentences text we need to convert it to a simple string:    String sentence = "";    for (String element : sentences) {        sentence += element;    } The classify method is then applied to the text:    List<List<CoreLabel>> entityList = classifier.classify(sentence); A List of CoreLabel is returned. The object returned is a list that contains another list. The contained list is a list of CoreLabel. The CoreLabel class represents a word with additional information attached to it. The internal list contains a list of these words. In the outer for-each statement in the next code sequence, the internalList variable represents one sentence of the text. In the inner for-each statement, each word in that inner list is displayed. The word method returns the word and the get method returns the type of the word. The words and their types are then displayed:    for (List<CoreLabel> internalList: entityList) {        for (CoreLabel coreLabel : internalList) {            String word = coreLabel.word();            String category = coreLabel.get(                CoreAnnotations.AnswerAnnotation.class);            System.out.println(word + ":" + category);      }    } Part of the output is as follows. It has been truncated because every word is displayed. The O represents the Other category: Joe:PERSON was:O the:O last:O person:O to:O see:O Fred:PERSON .:O He:O ... look:O for:O Fred:PERSON To filter out those words that are not relevant, replace the println statement with the following statements. This will eliminate the other categories:    if (!"O".equals(category)) {        System.out.println(word + ":" + category);    } The output is simpler now: Joe:PERSON Fred:PERSON Boston:LOCATION McKenzie:PERSON Joe:PERSON Vermont:LOCATION IBM:ORGANIZATION Sally:PERSON Fred:PERSON Using LingPipe for NER Here we will demonstrate how name entity models and the ExactDictionaryChunker class are used to perform NER analysis. Using LingPipe's name entity models LingPipe has a few named entity models that we can use with chunking. These files consist of a serialized object that can be read from a file and then applied to text. These objects implement the Chunker interface. The chunking process results in a series of Chunking objects that identify the entities of interest. A list of the NER models is in found in the following table. These models can be downloaded from http://alias-i.com/lingpipe/web/models.html. Genre Corpus File English News MUC-6 ne-en-news-muc6.AbstractCharLmRescoringChunker English Genes GeneTag ne-en-bio-genetag.HmmChunker English Genomics GENIA ne-en-bio-genia.TokenShapeChunker We will use the model found in the file, ne-en-news-muc6.AbstractCharLmRescoringChunker, to demonstrate how this class is used. We start with a try-catch block to deal with exceptions as shown next. The file is opened and used with the AbstractExternalizable class' static readObject method to create an instance of a Chunker class. This method will read in the serialized model:    try {        File modelFile = new File(getModelDir(),            "ne-en-news-muc6.AbstractCharLmRescoringChunker");          Chunker chunker = (Chunker)            AbstractExternalizable.readObject(modelFile);        ...    } catch (IOException | ClassNotFoundException ex) {        // Handle exception    } The Chunker and Chunking interfaces provide methods that work with a set of chunks of text. Its chunk method returns an object that implements the Chunking instance. 
The following sequence displays the chunks found in each sentence of the text as shown here:    for (int i = 0; i < sentences.length; ++i) {        Chunking chunking = chunker.chunk(sentences[i]);        System.out.println("Chunking=" + chunking);    } The output of this sequence is as follows: Chunking=Joe was the last person to see Fred. : [0-3:PERSON@-Infinity, 31-35:ORGANIZATION@-Infinity] Chunking=He saw him in Boston at McKenzie's pub at 3:00 where he paid $2.45 for an ale. : [14-20:LOCATION@-Infinity, 24-32:PERSON@-Infinity] Chunking=Joe wanted to go to Vermont for the day to visit a cousin who works at IBM, but Sally and he had to look for Fred : [0-3:PERSON@-Infinity, 20-27:ORGANIZATION@-Infinity, 71-74:ORGANIZATION@-Infinity, 109-113:ORGANIZATION@-Infinity] Instead, we can use methods of the Chunk class to extract specific pieces of information as illustrated next. We will replace the previous for statement with the following for-each statement. This calls a displayChunkSet method:    for (String sentence : sentences) {        displayChunkSet(chunker, sentence);    } The output that follows shows the result. However, it did not always match the entity type correctly: Type: PERSON Entity: [Joe] Score: -Infinity Type: ORGANIZATION Entity: [Fred] Score: -Infinity Type: LOCATION Entity: [Boston] Score: -Infinity Type: PERSON Entity: [McKenzie] Score: -Infinity Type: PERSON Entity: [Joe] Score: -Infinity Type: ORGANIZATION Entity: [Vermont] Score: -Infinity Type: ORGANIZATION Entity: [IBM] Score: -Infinity Type: ORGANIZATION Entity: [Fred] Score: -Infinity Using the ExactDictionaryChunker class The ExactDictionaryChunker class provides an easy way to create a dictionary of entities and their types, which can be used to find them later in text. It uses a MapDictionary object to store entries and then the ExactDictionaryChunker class is used to extract chunks based on the dictionary. The AbstractDictionary interface supports basic operations for entities, category, and score. The score is used in the matching process. The MapDictionary and TrieDictionary classes implement the AbstractDictionary interface. The TrieDictionary class stores information using a character trie structure. This approach uses less memory when that is a concern. We will use the MapDictionary class for our example. To illustrate this approach we start with a declaration of the MapDictionary:    private MapDictionary<String> dictionary; The dictionary will contain the entities that we are interested in finding. We need to initialize the model as performed in the following initializeDictionary method. The DictionaryEntry constructor used here accepts three arguments: String: It gives the name of the entity String: It gives the category of the entity Double: It represents a score for the entity The score is used when determining matches. 
A few entities are declared and added to the dictionary:    private static void initializeDictionary() {         dictionary = new MapDictionary<String>();        dictionary.addEntry(            new DictionaryEntry<String>("Joe","PERSON",1.0));        dictionary.addEntry(            new DictionaryEntry<String>("Fred","PERSON",1.0));        dictionary.addEntry(            new DictionaryEntry<String>("Boston","PLACE",1.0));        dictionary.addEntry(            new DictionaryEntry<String>("pub","PLACE",1.0));        dictionary.addEntry(            new DictionaryEntry<String>("Vermont","PLACE",1.0));        dictionary.addEntry(            new DictionaryEntry<String>("IBM","ORGANIZATION",1.0));        dictionary.addEntry(            new DictionaryEntry<String>("Sally","PERSON",1.0));    } An ExactDictionaryChunker instance will use this dictionary. The arguments of the ExactDictionaryChunker class are detailed here: Dictionary<String>: It is a dictionary containing the entities TokenizerFactory: It is a tokenizer used by the chunker boolean: If true, the chunker should return all matches boolean: If true, the matches are case sensitive Matches can be overlapping. For example, in the phrase, "The First National Bank", the entity "bank" could be used by itself or in conjunction with the rest of the phrase. The third parameter determines if all of the matches are returned. In the following sequence, the dictionary is initialized. We then create an instance of the ExactDictionaryChunker class using the Indo-European tokenizer where we return all matches and ignore the case of the tokens:    initializeDictionary();    ExactDictionaryChunker dictionaryChunker        = new ExactDictionaryChunker(dictionary,            IndoEuropeanTokenizerFactory.INSTANCE, true, false); The dictionaryChunker object is used with each sentence as shown next. We will use displayChunkSet:    for (String sentence : sentences) {        System.out.println("nTEXT=" + sentence);        displayChunkSet(dictionaryChunker, sentence);    } When executed, we get the following output: TEXT=Joe was the last person to see Fred. Type: PERSON Entity: [Joe] Score: 1.0 Type: PERSON Entity: [Fred] Score: 1.0   TEXT=He saw him in Boston at McKenzie's pub at 3:00 where he paid $2.45 for an ale. Type: PLACE Entity: [Boston] Score: 1.0 Type: PLACE Entity: [pub] Score: 1.0   TEXT=Joe wanted to go to Vermont for the day to visit a cousin who works at IBM, but Sally and he had to look for Fred Type: PERSON Entity: [Joe] Score: 1.0 Type: PLACE Entity: [Vermont] Score: 1.0 Type: ORGANIZATION Entity: [IBM] Score: 1.0 Type: PERSON Entity: [Sally] Score: 1.0 Type: PERSON Entity: [Fred] Score: 1.0 This does a pretty good job, but it requires a lot of effort to create the dictionary or a large vocabulary. Training a model We will use the OpenNLP to demonstrate how a model is trained. The training file used must have the following: Marks to demarcate the entities One sentence per line We will use the following model file named en-ner-person.train: <START:person> Joe <END> was the last person to see <START:person> Fred <END>. He saw him in Boston at McKenzie's pub at 3:00 where he paid $2.45 for an ale. <START:person> Joe <END> wanted to go to Vermont for the day to visit a cousin who works at IBM, but <START:person> Sally <END> and he had to look for <START:person> Fred <END>. Several methods of this example are capable of throwing exceptions. 
These statements will be placed in try-with-resource block as shown here where the model's output stream is created:    try (OutputStream modelOutputStream = new BufferedOutputStream(            new FileOutputStream(new File("modelFile")));) {        ...    } catch (IOException ex) {        // Handle exception    } Within the block, we create an OutputStream<String> object using the PlainTextByLineStream class. This class' constructor takes FileInputStream and returns each line as a String object. The en-ner-person.train file is used as the input file as shown here. UTF-8 refers to the encoding sequence used:    ObjectStream<String> lineStream = new PlainTextByLineStream(        new FileInputStream("en-ner-person.train"), "UTF-8"); The lineStream object contains stream that are annotated with tags delineating the entities in the text. These need to be converted to the NameSample objects so that the model can be trained. This conversion is performed by the NameSampleDataStream class as shown next. A NameSample object holds the names for the entities found in the text:    ObjectStream<NameSample> sampleStream =        new NameSampleDataStream(lineStream); The train method can now be executed as shown next:    TokenNameFinderModel model = NameFinderME.train(        "en", "person", sampleStream,        Collections.<String, Object>emptyMap(), 100, 5); The arguments of the method are as detailed in the following table. Parameter Meaning "en" Language Code "person" Entity type sampleStream Sample data null Resources 100 The number of iterations 5 The cutoff The model is then serialized to the file:    model.serialize(modelOutputStream); The output of this sequence is as follows. It has been shortened to conserve space. Basic information about the model creation is detailed: Indexing events using cutoff of 5      Computing event counts... done. 53 events    Indexing... done. Sorting and merging events... done. Reduced 53 events to 46. Done indexing. Incorporating indexed data for training...    Number of Event Tokens: 46    Number of Outcomes: 2     Number of Predicates: 34 ...done. Computing model parameters ... Performing 100 iterations. 1: ... loglikelihood=-36.73680056967707 0.05660377358490566 2: ... loglikelihood=-17.499660626361216 0.9433962264150944 3: ... loglikelihood=-13.216835449617108 0.9433962264150944 4: ... loglikelihood=-11.461783667999262 0.9433962264150944 5: ... loglikelihood=-10.380239416084963 0.9433962264150944 6: ... loglikelihood=-9.570622475692486 0.9433962264150944 7: ... loglikelihood=-8.919945779143012 0.9433962264150944 ... 99: ... loglikelihood=-3.513810438211968 0.9622641509433962 100: ... loglikelihood=-3.507213816708068 0.9622641509433962 Evaluating the model The model can be evaluated using the TokenNameFinderEvaluator class. The evaluation process uses marked up sample text to perform the evaluation. For this simple example, a file called en-ner-person.eval was created that contained the following text: <START:person> Bill <END> went to the farm to see <START:person> Sally <END>. Unable to find <START:person> Sally <END> he went to town. There he saw <START:person> Fred <END> who had seen <START:person> Sally <END> at the book store with <START:person> Mary <END>. The following code is used to perform the evaluation. The previous model is used as the argument of the TokenNameFinderEvaluator constructor. A NameSampleDataStream instance is created based on the evaluation file. 
The TokenNameFinderEvaluator class' evaluate method performs the evaluation:    TokenNameFinderEvaluator evaluator =        new TokenNameFinderEvaluator(new NameFinderME(model));      lineStream = new PlainTextByLineStream(        new FileInputStream("en-ner-person.eval"), "UTF-8");    sampleStream = new NameSampleDataStream(lineStream);    evaluator.evaluate(sampleStream); To determine how well the model worked with the evaluation data, the getFMeasure method is executed. The results are then displayed:    FMeasure result = evaluator.getFMeasure();    System.out.println(result.toString()); The following output displays the precision, recall, and F-Measure. It indicates that 50 percent of the entities found exactly match the evaluation data. The recall is the percentage of entities defined in the corpus that were found in the same location. The performance measure is the harmonic mean, defined as F1 = 2 * Precision * Recall / (Recall + Precision): Precision: 0.5 Recall: 0.25 F-Measure: 0.3333333333333333 The data and evaluation sets should be much larger to create a better model. The intent here was to demonstrate the basic approach used to train and evaluate a POS model. Summary The NER involves detecting entities and then classifying them. Common categories include names, locations, and things. This is an important task that many applications use to support searching, resolving references, and finding meaning in text. The process is frequently used in downstream tasks. We investigated several techniques for performing NER. Regular expression is one approach that is supported both by core Java classes and NLP APIs. This technique is useful for many applications and there are a large number of regular expression libraries available. Dictionary-based approaches are also possible and work well for some applications. However, they require considerable effort to populate at times. We used LingPipe's MapDictionary class to illustrate this approach. Trained models can also be used to perform NER. We examine several of these and demonstrated how to train a model using the Open NLP NameFinderME class.

Hive in Hadoop

Packt
10 Feb 2015
36 min read
In this article by Garry Turkington and Gabriele Modena, the authors of the book Learning Hadoop 2, we will see how MapReduce is a powerful paradigm that enables complex data processing that can reveal valuable insights. It does, however, require a different mindset and some training and experience in breaking analytical processing into a series of map and reduce steps. There are several products built atop Hadoop that provide higher-level or more familiar views of the data held within HDFS, and Pig is a very popular one. This article will explore the other most common abstraction implemented atop Hadoop: SQL.

In this article, we will cover the following topics:

What the use cases for SQL on Hadoop are and why it is so popular
HiveQL, the SQL dialect introduced by Apache Hive
Using HiveQL to perform SQL-like analysis of the Twitter dataset
How HiveQL can approximate common features of relational databases such as joins and views

(For more resources related to this topic, see here.)

Why SQL on Hadoop

So far we have seen how to write Hadoop programs using the MapReduce APIs and how Pig Latin provides a scripting abstraction and a wrapper for custom business logic by means of UDFs. Pig is a very powerful tool, but its dataflow-based programming model is not familiar to most developers or business analysts. The traditional tool of choice for such people to explore data is SQL.

Back in 2008, Facebook released Hive, the first widely used implementation of SQL on Hadoop. Instead of providing a way of more quickly developing map and reduce tasks, Hive offers an implementation of HiveQL, a query language based on SQL. Hive takes HiveQL statements and automatically translates the queries into one or more MapReduce jobs. It then executes the overall MapReduce program and returns the results to the user.

This interface to Hadoop not only reduces the time required to produce results from data analysis, it also significantly widens the net as to who can use Hadoop. Instead of requiring software development skills, anyone who is familiar with SQL can use Hive.

The combination of these attributes means that HiveQL is often used as a tool for business and data analysts to perform ad hoc queries on the data stored on HDFS. With Hive, the data analyst can work on refining queries without the involvement of a software developer. Just as with Pig, Hive also allows HiveQL to be extended by means of User Defined Functions, enabling the base SQL dialect to be customized with business-specific functionality.

Other SQL-on-Hadoop solutions

Though Hive was the first product to introduce and support HiveQL, it is no longer the only one. There are others, but we will mostly discuss Hive and Impala as they have been the most successful. While introducing the core features and capabilities of SQL on Hadoop, however, we will give examples using Hive; even though Hive and Impala share many SQL features, they also have numerous differences. We don't want to constantly have to caveat each new feature with exactly how it is supported in Hive compared to Impala. We'll generally be looking at aspects of the feature set that are common to both, but if you use both products, it's important to read the latest release notes to understand the differences.

Prerequisites

Before diving into specific technologies, let's generate some data that we'll use in the examples throughout this article. We'll create a modified version of a former Pig script as the main functionality for this. 
The script in this article assumes that the Elephant Bird JARs used previously are available in the /jar directory on HDFS. The full source code is at https://github.com/learninghadoop2/book-examples/ch7/extract_for_hive.pig, but the core of extract_for_hive.pig is as follows: -- load JSON data tweets = load '$inputDir' using com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad'); -- Tweets tweets_tsv = foreach tweets { generate    (chararray)CustomFormatToISO($0#'created_at', 'EEE MMMM d HH:mm:ss Z y') as dt,    (chararray)$0#'id_str', (chararray)$0#'text' as text,    (chararray)$0#'in_reply_to', (boolean)$0#'retweeted' as is_retweeted, (chararray)$0#'user'#'id_str' as user_id, (chararray)$0#'place'#'id' as place_id; } store tweets_tsv into '$outputDir/tweets' using PigStorage('u0001'); -- Places needed_fields = foreach tweets {    generate (chararray)CustomFormatToISO($0#'created_at', 'EEE MMMM d HH:mm:ss Z y') as dt,      (chararray)$0#'id_str' as id_str, $0#'place' as place; } place_fields = foreach needed_fields { generate    (chararray)place#'id' as place_id,    (chararray)place#'country_code' as co,    (chararray)place#'country' as country,    (chararray)place#'name' as place_name,    (chararray)place#'full_name' as place_full_name,    (chararray)place#'place_type' as place_type; } filtered_places = filter place_fields by co != ''; unique_places = distinct filtered_places; store unique_places into '$outputDir/places' using PigStorage('u0001');   -- Users users = foreach tweets {    generate (chararray)CustomFormatToISO($0#'created_at', 'EEE MMMM d HH:mm:ss Z y') as dt, (chararray)$0#'id_str' as id_str, $0#'user' as user; } user_fields = foreach users {    generate    (chararray)CustomFormatToISO(user#'created_at', 'EEE MMMM d HH:mm:ss Z y') as dt, (chararray)user#'id_str' as user_id, (chararray)user#'location' as user_location, (chararray)user#'name' as user_name, (chararray)user#'description' as user_description, (int)user#'followers_count' as followers_count, (int)user#'friends_count' as friends_count, (int)user#'favourites_count' as favourites_count, (chararray)user#'screen_name' as screen_name, (int)user#'listed_count' as listed_count;   } unique_users = distinct user_fields; store unique_users into '$outputDir/users' using PigStorage('u0001'); Run this script as follows: $ pig –f extract_for_hive.pig –param inputDir=<json input> -param outputDir=<output path> The preceding code writes data into three separate TSV files for the tweet, user, and place information. Notice that in the store command, we pass an argument when calling PigStorage. This single argument changes the default field separator from a tab character to unicode value U0001, or you can also use Ctrl +C + A. This is often used as a separator in Hive tables and will be particularly useful to us as our tweet data could contain tabs in other fields. Overview of Hive We will now show how you can import data into Hive and run a query against the table abstraction Hive provides over the data. In this example, and in the remainder of the article, we will assume that queries are typed into the shell that can be invoked by executing the hive command. Recently a client called Beeline also became available and will likely be the preferred CLI client in the near future. 
When importing any new data into Hive, there is generally a three-stage process: Create the specification of the table into which the data is to be imported Import the data into the created table Execute HiveQL queries against the table Most of the HiveQL statements are direct analogues to similarly named statements in standard SQL. We assume only a passing knowledge of SQL throughout this article, but if you need a refresher, there are numerous good online learning resources. Hive gives a structured query view of our data, and to enable that, we must first define the specification of the table's columns and import the data into the table before we can execute any queries. A table specification is generated using a CREATE statement that specifies the table name, the name and types of its columns, and some metadata about how the table is stored: CREATE table tweets ( created_at string, tweet_id string, text string, in_reply_to string, retweeted boolean, user_id string, place_id string ) ROW FORMAT DELIMITED FIELDS TERMINATED BY 'u0001' STORED AS TEXTFILE; The statement creates a new table tweets defined by a list of names for columns in the dataset and their data type. We specify that fields are delimited by the Unicode U0001 character and that the format used to store data is TEXTFILE. Data can be imported from a location in HDFS tweets/ into hive using the LOAD DATA statement: LOAD DATA INPATH 'tweets' OVERWRITE INTO TABLE tweets; By default, data for Hive tables is stored on HDFS under /user/hive/warehouse. If a LOAD statement is given a path to data on HDFS, it will not simply copy the data into /user/hive/warehouse, but will move it there instead. If you want to analyze data on HDFS that is used by other applications, then either create a copy or use the EXTERNAL mechanism that will be described later. Once data has been imported into Hive, we can run queries against it. For instance: SELECT COUNT(*) FROM tweets; The preceding code will return the total number of tweets present in the dataset. HiveQL, like SQL, is not case sensitive in terms of keywords, columns, or table names. By convention, SQL statements use uppercase for SQL language keywords, and we will generally follow this when using HiveQL within files, as will be shown later. However, when typing interactive commands, we will frequently take the line of least resistance and use lowercase. If you look closely at the time taken by the various commands in the preceding example, you'll notice that loading data into a table takes about as long as creating the table specification, but even the simple count of all rows takes significantly longer. The output also shows that table creation and the loading of data do not actually cause MapReduce jobs to be executed, which explains the very short execution times. The nature of Hive tables Although Hive copies the data file into its working directory, it does not actually process the input data into rows at that point. Both the CREATE TABLE and LOAD DATA statements do not truly create concrete table data as such; instead, they produce the metadata that will be used when Hive generates MapReduce jobs to access the data conceptually stored in the table but actually residing on HDFS. Even though the HiveQL statements refer to a specific table structure, it is Hive's responsibility to generate code that correctly maps this to the actual on-disk format in which the data files are stored. This might seem to suggest that Hive isn't a real database; this is true, it isn't. 
Whereas a relational database will require a table schema to be defined before data is ingested and then ingest only data that conforms to that specification, Hive is much more flexible. The less concrete nature of Hive tables means that schemas can be defined based on the data as it has already arrived and not on some assumption of how the data should be, which might prove to be wrong. Though changeable data formats are troublesome regardless of technology, the Hive model provides an additional degree of freedom in handling the problem when, not if, it arises. Hive architecture Until version 2, Hadoop was primarily a batch system. Internally, Hive compiles HiveQL statements into MapReduce jobs. Hive queries have traditionally been characterized by high latency. This has changed with the Stinger initiative and the improvements introduced in Hive 0.13 that we will discuss later. Hive runs as a client application that processes HiveQL queries, converts them into MapReduce jobs, and submits these to a Hadoop cluster either to native MapReduce in Hadoop 1 or to the MapReduce Application Master running on YARN in Hadoop 2. Regardless of the model, Hive uses a component called the metastore, in which it holds all its metadata about the tables defined in the system. Ironically, this is stored in a relational database dedicated to Hive's usage. In the earliest versions of Hive, all clients communicated directly with the metastore, but this meant that every user of the Hive CLI tool needed to know the metastore username and password. HiveServer was created to act as a point of entry for remote clients, which could also act as a single access-control point and which controlled all access to the underlying metastore. Because of limitations in HiveServer, the newest way to access Hive is through the multi-client HiveServer2. HiveServer2 introduces a number of improvements over its predecessor, including user authentication and support for multiple connections from the same client. More information can be found at https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2. Instances of HiveServer and HiveServer2 can be manually executed with the hive --service hiveserver and hive --service hiveserver2 commands, respectively. In the examples we saw before and in the remainder of this article, we implicitly use HiveServer to submit queries via the Hive command-line tool. HiveServer2 comes with Beeline. For compatibility and maturity reasons, Beeline being relatively new, both tools are available on Cloudera and most other major distributions. The Beeline client is part of the core Apache Hive distribution and so is also fully open source. Beeline can be executed in embedded version with the following command: $ beeline -u jdbc:hive2:// Data types HiveQL supports many of the common data types provided by standard database systems. These include primitive types, such as float, double, int, and string, through to structured collection types that provide the SQL analogues to types such as arrays, structs, and unions (structs with options for some fields). Since Hive is implemented in Java, primitive types will behave like their Java counterparts. 
We can distinguish Hive data types into the following five broad categories: Numeric: tinyint, smallint, int, bigint, float, double, and decimal Date and time: timestamp and date String: string, varchar, and char Collections: array, map, struct, and uniontype Misc: boolean, binary, and NULL DDL statements HiveQL provides a number of statements to create, delete, and alter databases, tables, and views. The CREATE DATABASE <name> statement creates a new database with the given name. A database represents a namespace where table and view metadata is contained. If multiple databases are present, the USE <database name> statement specifies which one to use to query tables or create new metadata. If no database is explicitly specified, Hive will run all statements against the default database. SHOW [DATABASES, TABLES, VIEWS] displays the databases currently available within a data warehouse and which table and view metadata is present within the database currently in use: CREATE DATABASE twitter; SHOW databases; USE twitter; SHOW TABLES; The CREATE TABLE [IF NOT EXISTS] <name> statement creates a table with the given name. As alluded to earlier, what is really created is the metadata representing the table and its mapping to files on HDFS as well as a directory in which to store the data files. If a table or view with the same name already exists, Hive will raise an exception. Both table and column names are case insensitive. In older versions of Hive (0.12 and earlier), only alphanumeric and underscore characters were allowed in table and column names. As of Hive 0.13, the system supports unicode characters in column names. Reserved words, such as load and create, need to be escaped by backticks (the ` character) to be treated literally. The EXTERNAL keyword specifies that the table exists in resources out of Hive's control, which can be a useful mechanism to extract data from another source at the beginning of a Hadoop-based Extract-Transform-Load (ETL) pipeline. The LOCATION clause specifies where the source file (or directory) is to be found. The EXTERNAL keyword and LOCATION clause have been used in the following code: CREATE EXTERNAL TABLE tweets ( created_at string, tweet_id string, text string, in_reply_to string, retweeted boolean, user_id string, place_id string ) ROW FORMAT DELIMITED FIELDS TERMINATED BY 'u0001' STORED AS TEXTFILE LOCATION '${input}/tweets'; This table will be created in metastore, but the data will not be copied into the /user/hive/warehouse directory. Note that Hive has no concept of primary key or unique identifier. Uniqueness and data normalization are aspects to be addressed before loading data into the data warehouse. The CREATE VIEW <view name> … AS SELECT statement creates a view with the given name. For example, we can create a view to isolate retweets from other messages, as follows: CREATE VIEW retweets COMMENT 'Tweets that have been retweeted' AS SELECT * FROM tweets WHERE retweeted = true; Unless otherwise specified, column names are derived from the defining SELECT statement. Hive does not currently support materialized views. The DROP TABLE and DROP VIEW statements remove both metadata and data for a given table or view. When dropping an EXTERNAL table or a view, only metadata will be removed and the actual data files will not be affected. Hive allows table metadata to be altered via the ALTER TABLE statement, which can be used to change a column type, name, position, and comment or to add and replace columns. 
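As a minimal sketch of what such metadata changes might look like, consider the following statements against the tweets table created earlier; note that the lang column added here is a hypothetical example and is not part of the Twitter dataset used in this article:

-- Add a new column with a comment; only the table metadata in the metastore changes
ALTER TABLE tweets ADD COLUMNS (lang string COMMENT 'hypothetical language code column');

-- Rename an existing column while keeping its type
ALTER TABLE tweets CHANGE COLUMN in_reply_to in_reply_to_id string;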
When adding columns, it is important to remember that only the metadata will be changed and not the dataset itself. This means that if we were to add a column in the middle of the table that didn't exist in older files, then while selecting from older data, we might get wrong values in the wrong columns. This is because we would be looking at old files with a new format.

Similarly, ALTER VIEW <view name> AS <select statement> changes the definition of an existing view.

File formats and storage

The data files underlying a Hive table are no different from any other file on HDFS. Users can directly read the HDFS files in the Hive tables using other tools. They can also use other tools to write to HDFS files that can be loaded into Hive through CREATE EXTERNAL TABLE or through LOAD DATA INPATH.

Hive uses the Serializer and Deserializer classes, SerDe, as well as FileFormat to read and write table rows. A native SerDe is used if ROW FORMAT is not specified or ROW FORMAT DELIMITED is specified in a CREATE TABLE statement. The DELIMITED clause instructs the system to read delimited files. Delimiter characters can be escaped using the ESCAPED BY clause.

Hive currently uses the following FileFormat classes to read and write HDFS files:

TextInputFormat and HiveIgnoreKeyTextOutputFormat: These classes read/write data in the plain text file format
SequenceFileInputFormat and SequenceFileOutputFormat: These classes read/write data in the Hadoop SequenceFile format

Additionally, the following SerDe classes can be used to serialize and deserialize data:

MetadataTypedColumnsetSerDe: This will read/write delimited records such as CSV or tab-separated records
ThriftSerDe and DynamicSerDe: These will read/write Thrift objects

JSON

As of version 0.13, Hive ships with the native org.apache.hive.hcatalog.data.JsonSerDe JSON SerDe. For older versions of Hive, Hive-JSON-Serde (found at https://github.com/rcongiu/Hive-JSON-Serde) is arguably one of the most feature-rich JSON serialization/deserialization modules.

We can use either module to load JSON tweets without any need for preprocessing and just define a Hive schema that matches the content of a JSON document. In the following example, we use Hive-JSON-Serde. As with any third-party module, we load the SerDe JAR into Hive with the following code:

ADD JAR json-serde-1.3-jar-with-dependencies.jar;

Then, we issue the usual create statement, as follows:

CREATE EXTERNAL TABLE tweets (
   contributors string,
   coordinates struct <
     coordinates: array <float>,
     type: string>,
   created_at string,
   entities struct <
     hashtags: array <struct <
           indices: array <tinyint>,
           text: string>>,
…
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
STORED AS TEXTFILE
LOCATION 'tweets';

With this SerDe, we can map nested documents (such as entities or users) to the struct or map types. We tell Hive that the data stored at LOCATION 'tweets' is text (STORED AS TEXTFILE) and that each row is a JSON object (ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'). In Hive 0.13 and later, we can express this property as ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'.

Manually specifying the schema for complex documents can be a tedious and error-prone process. The hive-json module (found at https://github.com/hortonworks/hive-json) is a handy utility to analyze large documents and generate an appropriate Hive schema. Depending on the document collection, further refinement might be necessary. 
In our example, we used a schema generated with hive-json that maps the tweets JSON to a number of struct data types. This allows us to query the data using a handy dot notation. For instance, we can extract the screen name and description fields of a user object with the following code: SELECT user.screen_name, user.description FROM tweets_json LIMIT 10; Avro AvroSerde (https://cwiki.apache.org/confluence/display/Hive/AvroSerDe) allows us to read and write data in Avro format. Starting from 0.14, Avro-backed tables can be created using the STORED AS AVRO statement, and Hive will take care of creating an appropriate Avro schema for the table. Prior versions of Hive are a bit more verbose. This dataset was created using Pig's AvroStorage class, which generated the following schema: { "type":"record", "name":"record", "fields": [    {"name":"topic","type":["null","int"]},    {"name":"source","type":["null","int"]},    {"name":"rank","type":["null","float"]} ] } The table structure is captured in an Avro record, which contains header information (a name and optional namespace to qualify the name) and an array of the fields. Each field is specified with its name and type as well as an optional documentation string. For a few of the fields, the type is not a single value, but instead a pair of values, one of which is null. This is an Avro union, and this is the idiomatic way of handling columns that might have a null value. Avro specifies null as a concrete type, and any location where another type might have a null value needs to be specified in this way. This will be handled transparently for us when we use the following schema. With this definition, we can now create a Hive table that uses this schema for its table specification, as follows: CREATE EXTERNAL TABLE tweets_pagerank ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' WITH SERDEPROPERTIES ('avro.schema.literal'='{    "type":"record",    "name":"record",    "fields": [        {"name":"topic","type":["null","int"]},        {"name":"source","type":["null","int"]},        {"name":"rank","type":["null","float"]}    ] }') STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat' LOCATION '${data}/ch5-pagerank'; Then, look at the following table definition from within Hive (note also that HCatalog): DESCRIBE tweets_pagerank; OK topic                 int                   from deserializer   source               int                   from deserializer   rank                 float                 from deserializer In the DDL, we told Hive that data is stored in Avro format using AvroContainerInputFormat and AvroContainerOutputFormat. Each row needs to be serialized and deserialized using org.apache.hadoop.hive.serde2.avro.AvroSerDe. The table schema is inferred by Hive from the Avro schema embedded in avro.schema.literal. Alternatively, we can store a schema on HDFS and have Hive read it to determine the table structure. Create the preceding schema in a file called pagerank.avsc—this is the standard file extension for Avro schemas. Then place it on HDFS; we prefer to have a common location for schema files such as /schema/avro. Finally, define the table using the avro.schema.url SerDe property WITH SERDEPROPERTIES ('avro.schema.url'='hdfs://<namenode>/schema/avro/pagerank.avsc'). If Avro dependencies are not present in the classpath, we need to add the Avro MapReduce JAR to our environment before accessing individual fields. 
Within Hive, on the Cloudera CDH5 VM: ADD JAR /opt/cloudera/parcels/CDH/lib/avro/avro-mapred-hadoop2.jar; We can also use this table like any other. For instance, we can query the data to select the user and topic pairs with a high PageRank: SELECT source, topic from tweets_pagerank WHERE rank >= 0.9; Columnar stores Hive can also take advantage of columnar storage via the ORC (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC) and Parquet (https://cwiki.apache.org/confluence/display/Hive/Parquet) formats. If a table is defined with very many columns, it is not unusual for any given query to only process a small subset of these columns. But even in a SequenceFile each full row and all its columns will be read from disk, decompressed, and processed. This consumes a lot of system resources for data that we know in advance is not of interest. Traditional relational databases also store data on a row basis, and a type of database called columnar changed this to be column-focused. In the simplest model, instead of one file for each table, there would be one file for each column in the table. If a query only needed to access five columns in a table with 100 columns in total, then only the files for those five columns will be read. Both ORC and Parquet use this principle as well as other optimizations to enable much faster queries. Queries Tables can be queried using the familiar SELECT … FROM statement. The WHERE statement allows the specification of filtering conditions, GROUP BY aggregates records, ORDER BY specifies sorting criteria, and LIMIT specifies the number of records to retrieve. Aggregate functions, such as count and sum, can be applied to aggregated records. For instance, the following code returns the top 10 most prolific users in the dataset: SELECT user_id, COUNT(*) AS cnt FROM tweets GROUP BY user_id ORDER BY cnt DESC LIMIT 10 The following are the top 10 most prolific users in the dataset: NULL 7091 1332188053 4 959468857 3 1367752118 3 362562944 3 58646041 3 2375296688 3 1468188529 3 37114209 3 2385040940 3 We can improve the readability of the hive output by setting the following: SET hive.cli.print.header=true; This will instruct hive, though not beeline, to print column names as part of the output. You can add the command to the .hiverc file usually found in the root of the executing user's home directory to have it apply to all hive CLI sessions. HiveQL implements a JOIN operator that enables us to combine tables together. In the Prerequisites section, we generated separate datasets for the user and place objects. Let's now load them into hive using external tables. 
We first create a user table to store user data, as follows: CREATE EXTERNAL TABLE user ( created_at string, user_id string, `location` string, name string, description string, followers_count bigint, friends_count bigint, favourites_count bigint, screen_name string, listed_count bigint ) ROW FORMAT DELIMITED FIELDS TERMINATED BY 'u0001' STORED AS TEXTFILE LOCATION '${input}/users'; We then create a place table to store location data, as follows: CREATE EXTERNAL TABLE place ( place_id string, country_code string, country string, `name` string, full_name string, place_type string ) ROW FORMAT DELIMITED FIELDS TERMINATED BY 'u0001' STORED AS TEXTFILE LOCATION '${input}/places'; We can use the JOIN operator to display the names of the 10 most prolific users, as follows: SELECT tweets.user_id, user.name, COUNT(tweets.user_id) AS cnt FROM tweets JOIN user ON user.user_id = tweets.user_id GROUP BY tweets.user_id, user.user_id, user.name ORDER BY cnt DESC LIMIT 10; Only equality, outer, and left (semi) joins are supported in Hive. Notice that there might be multiple entries with a given user ID but different values for the followers_count, friends_count, and favourites_count columns. To avoid duplicate entries, we count only user_id from the tweets tables. We can rewrite the previous query as follows: SELECT tweets.user_id, u.name, COUNT(*) AS cnt FROM tweets join (SELECT user_id, name FROM user GROUP BY user_id, name) u ON u.user_id = tweets.user_id GROUP BY tweets.user_id, u.name ORDER BY cnt DESC LIMIT 10; Instead of directly joining the user table, we execute a subquery, as follows: SELECT user_id, name FROM user GROUP BY user_id, name; The subquery extracts unique user IDs and names. Note that Hive has limited support for subqueries, historically only permitting a subquery in the FROM clause of a SELECT statement. Hive 0.13 has added limited support for subqueries within the WHERE clause also. HiveQL is an ever-evolving rich language, a full exposition of which is beyond the scope of this article. A description of its query and ddl capabilities can be found at  https://cwiki.apache.org/confluence/display/Hive/LanguageManual. Structuring Hive tables for given workloads Often Hive isn't used in isolation, instead tables are created with particular workloads in mind or needs invoked in ways that are suitable for inclusion in automated processes. We'll now explore some of these scenarios. Partitioning a table With columnar file formats, we explained the benefits of excluding unneeded data as early as possible when processing a query. A similar concept has been used in SQL for some time: table partitioning. When creating a partitioned table, a column is specified as the partition key. All values with that key are then stored together. In Hive's case, different subdirectories for each partition key are created under the table directory in the warehouse location on HDFS. It's important to understand the cardinality of the partition column. With too few distinct values, the benefits are reduced as the files are still very large. If there are too many values, then queries might need a large number of files to be scanned to access all the required data. Perhaps the most common partition key is one based on date. We could, for example, partition our user table from earlier based on the created_at column, that is, the date the user was first registered. 
Note that since partitioning a table by definition affects its file structure, we create this table now as a non-external one, as follows: CREATE TABLE partitioned_user ( created_at string, user_id string, `location` string, name string, description string, followers_count bigint, friends_count bigint, favourites_count bigint, screen_name string, listed_count bigint ) PARTITIONED BY (created_at_date string) ROW FORMAT DELIMITED FIELDS TERMINATED BY 'u0001' STORED AS TEXTFILE; To load data into a partition, we can explicitly give a value for the partition into which to insert the data, as follows: INSERT INTO TABLE partitioned_user PARTITION( created_at_date = '2014-01-01') SELECT created_at, user_id, location, name, description, followers_count, friends_count, favourites_count, screen_name, listed_count FROM user; This is at best verbose, as we need a statement for each partition key value; if a single LOAD or INSERT statement contains data for multiple partitions, it just won't work. Hive also has a feature called dynamic partitioning, which can help us here. We set the following three variables: SET hive.exec.dynamic.partition = true; SET hive.exec.dynamic.partition.mode = nonstrict; SET hive.exec.max.dynamic.partitions.pernode=5000; The first two statements enable all partitions (nonstrict option) to be dynamic. The third one allows 5,000 distinct partitions to be created on each mapper and reducer node. We can then simply use the name of the column to be used as the partition key, and Hive will insert data into partitions depending on the value of the key for a given row: INSERT INTO TABLE partitioned_user PARTITION( created_at_date ) SELECT created_at, user_id, location, name, description, followers_count, friends_count, favourites_count, screen_name, listed_count, to_date(created_at) as created_at_date FROM user; Even though we use only a single partition column here, we can partition a table by multiple column keys; just have them as a comma-separated list in the PARTITIONED BY clause. Note that the partition key columns need to be included as the last columns in any statement being used to insert into a partitioned table. In the preceding code we use Hive's to_date function to convert the created_at timestamp to a YYYY-MM-DD formatted string. Partitioned data is stored in HDFS as /path/to/warehouse/<database>/<table>/key=<value>. In our example, the partitioned_user table structure will look like /user/hive/warehouse/default/partitioned_user/created_at=2014-04-01. If data is added directly to the filesystem, for instance by some third-party processing tool or by hadoop fs -put, the metastore won't automatically detect the new partitions. The user will need to manually run an ALTER TABLE statement such as the following for each newly added partition: ALTER TABLE <table_name> ADD PARTITION <location>; To add metadata for all partitions not currently present in the metastore we can use: MSCK REPAIR TABLE <table_name>; statement. On EMR, this is equivalent to executing the following statement: ALTER TABLE <table_name> RECOVER PARTITIONS; Notice that both statements will work also with EXTERNAL tables. Overwriting and updating data Partitioning is also useful when we need to update a portion of a table. Normally a statement of the following form will replace all the data for the destination table: INSERT OVERWRITE INTO <table>… If OVERWRITE is omitted, then each INSERT statement will add additional data to the table. 
Sometimes, this is desirable, but often, the source data being ingested into a Hive table is intended to fully update a subset of the data and keep the rest untouched. If we perform an INSERT OVERWRITE statement (or a LOAD OVERWRITE statement) into a partition of a table, then only the specified partition will be affected. Thus, if we were inserting user data and only wanted to affect the partitions with data in the source file, we could achieve this by adding the OVERWRITE keyword to our previous INSERT statement. We can also add caveats to the SELECT statement. Say, for example, we only wanted to update data for a certain month: INSERT INTO TABLE partitioned_user PARTITION (created_at_date) SELECT created_at , user_id, location, name, description, followers_count, friends_count, favourites_count, screen_name, listed_count, to_date(created_at) as created_at_date FROM user WHERE to_date(created_at) BETWEEN '2014-03-01' and '2014-03-31'; Bucketing and sorting Partitioning a table is a construct that you take explicit advantage of by using the partition column (or columns) in the WHERE clause of queries against the tables. There is another mechanism called bucketing that can further segment how a table is stored and does so in a way that allows Hive itself to optimize its internal query plans to take advantage of the structure. Let's create bucketed versions of our tweets and user tables; note the following additional CLUSTER BY and SORT BY statements in the CREATE TABLE statements: CREATE table bucketed_tweets ( tweet_id string, text string, in_reply_to string, retweeted boolean, user_id string, place_id string ) PARTITIONED BY (created_at string) CLUSTERED BY(user_ID) into 64 BUCKETS ROW FORMAT DELIMITED FIELDS TERMINATED BY 'u0001' STORED AS TEXTFILE;   CREATE TABLE bucketed_user ( user_id string, `location` string, name string, description string, followers_count bigint, friends_count bigint, favourites_count bigint, screen_name string, listed_count bigint ) PARTITIONED BY (created_at string) CLUSTERED BY(user_ID) SORTED BY(name) into 64 BUCKETS ROW FORMAT DELIMITED FIELDS TERMINATED BY 'u0001' STORED AS TEXTFILE; Note that we changed the tweets table to also be partitioned; you can only bucket a table that is partitioned. Just as we need to specify a partition column when inserting into a partitioned table, we must also take care to ensure that data inserted into a bucketed table is correctly clustered. We do this by setting the following flag before inserting the data into the table: SET hive.enforce.bucketing=true; Just as with partitioned tables, you cannot apply the bucketing function when using the LOAD DATA statement; if you wish to load external data into a bucketed table, first insert it into a temporary table, and then use the INSERT…SELECT… syntax to populate the bucketed table. When data is inserted into a bucketed table, rows are allocated to a bucket based on the result of a hash function applied to the column specified in the CLUSTERED BY clause. One of the greatest advantages of bucketing a table comes when we need to join two tables that are similarly bucketed, as in the previous example. So, for example, any query of the following form would be vastly improved: SET hive.optimize.bucketmapjoin=true; SELECT … FROM bucketed_user u JOIN bucketed_tweet t ON u.user_id = t.user_id; With the join being performed on the column used to bucket the table, Hive can optimize the amount of processing as it knows that each bucket contains the same set of user_id columns in both tables. 
While determining which rows against which to match, only those in the bucket need to be compared against, and not the whole table. This does require that the tables are both clustered on the same column and that the bucket numbers are either identical or one is a multiple of the other. In the latter case, with say one table clustered into 32 buckets and another into 64, the nature of the default hash function used to allocate data to a bucket means that the IDs in bucket 3 in the first table will cover those in both buckets 3 and 35 in the second. Sampling data Bucketing a table can also help while using Hive's ability to sample data in a table. Sampling allows a query to gather only a specified subset of the overall rows in the table. This is useful when you have an extremely large table with moderately consistent data patterns. In such a case, applying a query to a small fraction of the data will be much faster and will still give a broadly representative result. Note, of course, that this only applies to queries where you are looking to determine table characteristics, such as pattern ranges in the data; if you are trying to count anything, then the result needs to be scaled to the full table size. For a non-bucketed table, you can sample in a mechanism similar to what we saw earlier by specifying that the query should only be applied to a certain subset of the table: SELECT max(friends_count) FROM user TABLESAMPLE(BUCKET 2 OUT OF 64 ON name); In this query, Hive will effectively hash the rows in the table into 64 buckets based on the name column. It will then only use the second bucket for the query. Multiple buckets can be specified, and if RAND() is given as the ON clause, then the entire row is used by the bucketing function. Though successful, this is highly inefficient as the full table needs to be scanned to generate the required subset of data. If we sample on a bucketed table and ensure the number of buckets sampled is equal to or a multiple of the buckets in the table, then Hive will only read the buckets in question. For example: SELECT MAX(friends_count) FROM bucketed_user TABLESAMPLE(BUCKET 2 OUT OF 32 on user_id); In the preceding query against the bucketed_user table, which is created with 64 buckets on the user_id column, the sampling, since it is using the same column, will only read the required buckets. In this case, these will be buckets 2 and 34 from each partition. A final form of sampling is block sampling. In this case, we can specify the required amount of the table to be sampled, and Hive will use an approximation of this by only reading enough source data blocks on HDFS to meet the required size. Currently, the data size can be specified as either a percentage of the table, as an absolute data size, or as a number of rows (in each block). The syntax for TABLESAMPLE is as follows, which will sample 0.5 percent of the table, 1 GB of data or 100 rows per split, respectively: TABLESAMPLE(0.5 PERCENT) TABLESAMPLE(1G) TABLESAMPLE(100 ROWS) If these latter forms of sampling are of interest, then consult the documentation, as there are some specific limitations on the input format and file formats that are supported. Writing scripts We can place Hive commands in a file and run them with the -f option in the hive CLI utility: $ cat show_tables.hql show tables; $ hive -f show_tables.hql We can parameterize HiveQL statements by means of the hiveconf mechanism. 
This allows us to specify an environment variable name at the point it is used rather than at the point of invocation. For example: $ cat show_tables2.hql show tables like '${hiveconf:TABLENAME}'; $ hive -hiveconf TABLENAME=user -f show_tables2.hql The variable can also be set within the Hive script or an interactive session: SET TABLE_NAME='user'; The preceding hiveconf argument will add any new variables in the same namespace as the Hive configuration options. As of Hive 0.8, there is a similar option called hivevar that adds any user variables into a distinct namespace. Using hivevar, the preceding command would be as follows: $ cat show_tables3.hql show tables like '${hivevar:TABLENAME}'; $ hive -hivevar TABLENAME=user –f show_tables3.hql Or we can write the command interactively: SET hivevar_TABLE_NAME='user'; Summary In this article, we learned that in its early days, Hadoop was sometimes erroneously seen as the latest supposed relational database killer. Over time, it has become more apparent that the more sensible approach is to view it as a complement to RDBMS technologies and that, in fact, the RDBMS community has developed tools such as SQL that are also valuable in the Hadoop world. HiveQL is an implementation of SQL on Hadoop and was the primary focus of this article. In regard to HiveQL and its implementations, we covered the following topics: How HiveQL provides a logical model atop data stored in HDFS in contrast to relational databases where the table structure is enforced in advance How HiveQL offers the ability to extend its core set of operators with user-defined code and how this contrasts to the Pig UDF mechanism The recent history of Hive developments, such as the Stinger initiative, that have seen Hive transition to an updated implementation that uses Tez Resources for Article: Further resources on this subject: Big Data Analysis [Article] Understanding MapReduce [Article] Amazon DynamoDB - Modelling relationships, Error handling [Article]


Qlik Sense's Vision

Packt
06 Feb 2015
12 min read
In this article by Christopher Ilacqua, Henric Cronström, and James Richardson, authors of the book Learning Qlik® Sense, we will look at the evolving requirements that compel organizations to readdress how they deliver business intelligence and support data-driven decision-making. This is important as it supplies some of the reasons as to why Qlik® Sense is relevant and important to their success. The purpose of covering these factors is so that you can consider and plan for them in your organization. Among other things, in this article, we will cover the following topics: The ongoing data explosion The rise of in-memory processing Barrierless BI through Human-Computer Interaction The consumerization of BI and the rise of self-service The use of information as an asset The changing role of IT (For more resources related to this topic, see here.) Evolving market factors Technologies are developed and evolved in response to the needs of the environment they are created and used within. The most successful new technologies anticipate upcoming changes in order to help people take advantage of altered circumstances or reimagine how things are done. Any market is defined by both the suppliers—in this case, Qlik®—and the buyers, that is, the people who want to get more use and value from their information. Buyers' wants and needs are driven by a variety of macro and micro factors, and these are always in flux in some markets more than others. This is obviously and apparently the case in the world of data, BI, and analytics, which has been changing at a great pace due to a number of factors discussed further in the rest of this article. Qlik Sense has been designed to be the means through which organizations and the people that are a part of them thrive in a changed environment. Big, big, and even bigger data A key factor is that there's simply much more data in many forms to analyze than before. We're in the middle of an ongoing, accelerating data boom. According to Science Daily, 90 percent of the world's data was generated over the past two years. The fact is that with technologies such as Hadoop and NoSQL databases, we now have unprecedented access to cost-effective data storage. With vast amounts of data now storable and available for analysis, people need a way to sort the signal from the noise. People from a wider variety of roles—not all of them BI users or business analysts—are demanding better, greater access to data, regardless of where it comes from. Qlik Sense's fundamental design centers on bringing varied data together for exploration in an easy and powerful way. The slow spinning down of the disk At the same time, we are seeing a shift in how computation occurs and potentially, how information is managed. Fundamentals of the computing architectures that we've used for decades, the spinning disk and moving read head, are becoming outmoded. This means storing and accessing data has been around since Edison invented the cylinder phonograph in 1877. It's about time this changed. This technology has served us very well; it was elegant and reliable, but it has limitations. Speed limitations primarily. Fundamentals that we take for granted today in BI, such as relational and multidimensional storage models, were built around these limitations. So were our IT skills, whether we realized it at the time. With the use of in-memory processing and 64-bit addressable memory spaces, these limitations are gone! This means a complete change in how we think about analysis. 
Processing data in memory means we can do analysis that was impractical or impossible before with the old approach. With in-memory computing, analysis that would've taken days before, now takes just seconds (or much less). However, why does it matter? Because it allows us to use the time more effectively; after all, time is the most finite resource of all. In-memory computing enables us to ask more questions, test more scenarios, do more experiments, debunk more hypotheses, explore more data, and run more simulations in the short window available to us. For IT, it means no longer trying to second-guess what users will do months or years in advance and trying to premodel it in order to achieve acceptable response times. People hate watching the hourglass spin. Qlik Sense's predecessor QlikView® was built on the exploitation of in-memory processing; Qlik Sense has it at its core too. Ubiquitous computing and the Internet of Things You may know that more than a billion people use Facebook, but did you know that the majority of those people do so from a mobile device? The growth in the number of devices connected to the Internet is absolutely astonishing. According to Cisco's Zettabyte Era report, Internet traffic from wireless devices will exceed traffic from wired devices in 2014. If we were writing this article even as recently as a year ago, we'd probably be talking about mobile BI as a separate thing from desktop or laptop delivered analytics. The fact of the matter is that we've quickly gone beyond that. For many people now, the most common way to use technology is on a mobile device, and they expect the kind of experience they've become used to on their iOS or Android device to be mirrored in complex software, such as the technology they use for visual discovery and analytics. From its inception, Qlik Sense has had mobile usage in the center of its design ethos. It's the first data discovery software to be built for mobiles, and that's evident in how it uses HTML5 to automatically render output for the device being used, whatever it is. Plug in a laptop running Qlik Sense to a 70-inch OLED TV and the visual output is resized and re-expressed to optimize the new form factor. So mobile is the new normal. This may be astonishing but it's just the beginning. Mobile technology isn't just a medium to deliver information to people, but an acceleration of data production for analysis too. By 2020, pretty much everyone and an increasing number of things will be connected to the Internet. There are 7 billion people on the planet today. Intel predicts that by 2020, more than 31 billion devices will be connected to the Internet. So, that's not just devices used by people directly to consume or share information. More and more things will be put online and communicate their state: cars, fridges, lampposts, shoes, rubbish bins, pets, plants, heating systems—you name it. These devices will generate a huge amount of data from sensors that monitor all kinds of measurable attributes: temperature, velocity, direction, orientation, and time. This means an increasing opportunity to understand a huge gamut of data, but without the right technology and approaches it will be complex to analyze what is going on. Old methods of analysis won't work, as they don't move quickly enough. The variety and volume of information that can be analyzed will explode at an exponential rate. The rise of this type of big data makes us redefine how we build, deliver, and even promote analytics. 
It is an opportunity for those organizations that can exploit it through analysis; this can sort the signals from the noise and make sense of the patterns in the data. Qlik Sense is designed as just such a signal booster; it takes how users can zoom and pan through information too large for them to easily understand the product. Unbound Human-Computer Interaction We touched on the boundary between the computing power and the humans using it in the previous section. Increasingly, we're removing barriers between humans and technology. Take the rise of touch devices. Users don't want to just view data presented to them in a static form. Instead, they want to "feel" the data and interact with it. The same is increasingly true of BI. The adoption of BI tools has been too low because the technology has been hard to use. Adoption has been low because in the past BI tools often required people to conform to the tool's way of working, rather than reflecting the user's way of thinking. The aspiration for Qlik Sense (when part of the QlikView.Next project) was that the software should be both "gorgeous and genius". The genius part obviously refers to the built-in intelligence, the smarts, the software will have. The gorgeous part is misunderstood or at least oversimplified. Yes, it means cosmetically attractive (which is important) but much more importantly, it means enjoyable to use and experience. In other words, Qlik Sense should never be jarring to users but seamless, perhaps almost transparent to them, inducing a state of mental flow that encourages thinking about the question being considered rather than the tool used to answer it. The aim was to be of most value to people. Qlik Sense will empower users to explore their data and uncover hidden insights, naturally. Evolving customer requirements It is not only the external market drivers that impact how we use information. Our organizations and the people that work within them are also changing in their attitude towards technology, how they express ideas through data, and how increasingly they make use of data as a competitive weapon. Consumerization of BI and the rise of self-service The consumerization of any technology space is all about how enterprises are affected by, and can take advantage of, new technologies and models that originate and develop in the consumer marker, rather than in the enterprise IT sector. The reality is that individuals react quicker than enterprises to changes in technology. As such, consumerization cannot be stopped, nor is it something to be adopted. It can be embraced. While it's not viable to build a BI strategy around consumerization alone, its impact must be considered. Consumerization makes itself felt in three areas: Technology: Most investment in innovation occurs in the consumer space first, with enterprise vendors incorporating consumer-derived features after the fact. (Think about how vendors added the browser as a UI for business software applications.) Economics: Consumer offerings are often less expensive or free (to try) with a low barrier of entry. This drives prices down, including enterprise sectors, and alters selection behavior. People: Demographics, which is the flow of Millennial Generation into the workplace, and the blurring of home/work boundaries and roles, which may be seen from a traditional IT perspective as rogue users, with demands to BYOPC or device. 
In line with consumerization, BI users want to be able to pick up and just use the technology to create and share engaging solutions; they don't want to read the manual. This places a high degree of importance on the Human-Computer Interaction (HCI) aspects of a BI product (refer to the preceding list) and governed access to information and deployment design. Add mobility to this and you get a brand new sourcing and adoption dynamic in BI, one that Qlik engendered, and Qlik Sense is designed to take advantage of. Think about how Qlik Sense Desktop was made available as a freemium offer. Information as an asset and differentiator As times change, so do differentiators. For example, car manufacturers in the 1980s differentiated themselves based on reliability, making sure their cars started every single time. Today, we expect that our cars will start; reliability is now a commodity. The same is true for ERP systems. Originally, companies implemented ERPs to improve reliability, but in today's post-ERP world, companies are shifting to differentiating their businesses based on information. This means our focus changes from apps to analytics. And analytics apps, like those delivered by Qlik Sense, help companies access the data they need to set themselves apart from the competition. However, to get maximum return from information, the analysis must be delivered fast enough, and in sync with the operational tempo people need. Things are speeding up all the time. For example, take the fashion industry. Large mainstream fashion retailers used to work two seasons per year. Those that stuck to that were destroyed by fast fashion retailers. The same is true for old style, system-of-record BI tools; they just can't cope with today's demands for speed and agility. The rise of information activism A new, tech-savvy generation is entering the workforce, and their expectations are different than those of past generations. The Beloit College Mindset List for the entering class of 2017 gives the perspective of students entering college this year, how they see the world, and the reality they've known all their lives. For this year's freshman class, Java has never been just a cup of coffee and a tablet is no longer something you take in the morning. This new generation of workers grew up with the Internet and is less likely to be passive with data. They bring their own devices everywhere they go, and expect it to be easy to mash-up data, communicate, and collaborate with their peers. The evolution and elevation of the role of IT We've all read about how the role of IT is changing, and the question CIOs today must ask themselves is: "How do we drive innovation?". IT must transform from being gatekeepers (doers) to storekeepers (enablers), providing business users with self-service tools they need to be successful. However, to achieve this transformation, they need to stock helpful tools and provide consumable information products or apps. Qlik Sense is a key part of the armory that IT needs to provide to be successful in this transformation. Summary In this article, we looked at the factors that provide the wider context for the use of Qlik Sense. The factors covered arise out of both increasing technical capability and demands to compete in a globalized, information-centric world, where out-analyzing your competitors is a key success factor. Resources for Article: Further resources on this subject: Securing QlikView Documents [article] Conozca QlikView [article] Introducing QlikView elements [article]
PostgreSQL Cookbook - High Availability and Replication

Packt
06 Feb 2015
26 min read
In this article by Chitij Chauhan, author of the book PostgreSQL Cookbook, we will talk about various high availability and replication solutions, including some popular third-party replication tools such as Slony-I and Londiste. In this article, we will cover the following recipes:

Setting up hot streaming replication
Replication using Slony-I
Replication using Londiste

The important requirements for any production database are fault tolerance, 24/7 availability, and redundancy. It is for this purpose that we have different high availability and replication solutions available for PostgreSQL. From a business perspective, it is important to ensure 24/7 data availability in the event of a disaster situation or a database crash due to disk or hardware failure. In such situations, it becomes critical to ensure that a duplicate copy of the data is available on a different server or a different database, so that seamless failover can be achieved even when the primary server/database is unavailable.

Setting up hot streaming replication

In this recipe, we are going to set up a master-slave streaming replication.

Getting ready

For this exercise, you will need two Linux machines, each with the latest version of PostgreSQL installed. We will be using the following IP addresses for the master and slave servers:

Master IP address: 192.168.0.4
Slave IP address: 192.168.0.5

Before you start with the master-slave streaming setup, it is important that the SSH connectivity between the master and slave is set up.

How to do it...

Perform the following sequence of steps to set up a master-slave streaming replication:

First, we are going to create a user on the master, which will be used by the slave server to connect to the PostgreSQL database on the master server:

psql -c "CREATE USER repuser REPLICATION LOGIN ENCRYPTED PASSWORD 'charlie';"

Next, we will allow the replication user that was created in the previous step to access the master PostgreSQL server. This is done by making the necessary changes in the pg_hba.conf file:

vi pg_hba.conf
host   replication   repuser   192.168.0.5/32   md5

In the next step, we are going to configure parameters in the postgresql.conf file. These parameters need to be set in order to get the streaming replication working:

vi /var/lib/pgsql/9.3/data/postgresql.conf
listen_addresses = '*'
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 8
archive_mode = on
archive_command = 'cp %p /var/lib/pgsql/archive/%f && scp %p postgres@192.168.0.5:/var/lib/pgsql/archive/%f'
checkpoint_segments = 8

Once the parameter changes have been made in the postgresql.conf file in the previous step, the next step will be to restart the PostgreSQL server on the master server, in order to let the changes take effect:

pg_ctl -D /var/lib/pgsql/9.3/data restart

Before the slave can replicate the master, we will need to give it the initial database to build off. For this purpose, we will make a base backup by copying the primary server's data directory to the standby.
The rsync command needs to be run as a root user:

psql -U postgres -h 192.168.0.4 -c "SELECT pg_start_backup('label', true)"
rsync -a /var/lib/pgsql/9.3/data/ 192.168.0.5:/var/lib/pgsql/9.3/data/ --exclude postmaster.pid
psql -U postgres -h 192.168.0.4 -c "SELECT pg_stop_backup()"

Once the data directory, mentioned in the previous step, is populated, the next step is to enable the following parameter in the postgresql.conf file on the slave server:

hot_standby = on

The next step will be to copy the recovery.conf.sample file into the $PGDATA location on the slave server and then configure the following parameters:

cp /usr/pgsql-9.3/share/recovery.conf.sample /var/lib/pgsql/9.3/data/recovery.conf
standby_mode = on
primary_conninfo = 'host=192.168.0.4 port=5432 user=repuser password=charlie'
trigger_file = '/tmp/trigger.replication'
restore_command = 'cp /var/lib/pgsql/archive/%f "%p"'

The next step will be to start the slave server:

service postgresql-9.3 start

Now that the replication steps mentioned above are set up, we will test for replication. On the master server, log in and issue the following SQL commands:

psql -h 192.168.0.4 -d postgres -U postgres -W
postgres=# create database test;
postgres=# \c test;
test=# create table testtable ( testint int, testchar varchar(40) );
CREATE TABLE
test=# insert into testtable values ( 1, 'What A Sight.' );
INSERT 0 1

On the slave server, we will now check whether the newly created database and the corresponding table, created in the previous step, are replicated:

psql -h 192.168.0.5 -d test -U postgres -W
test=# select * from testtable;
testint | testchar
---------+---------------------------
1 | What A Sight.
(1 row)

How it works...

The following is the explanation for the steps performed in the preceding section. In the initial step of the preceding section, we create a user called repuser, which will be used by the slave server to make a connection to the primary server. In the second step of the preceding section, we make the necessary changes in the pg_hba.conf file to allow the master server to be accessed by the slave server using the repuser user ID that was created in step 1. We then make the necessary parameter changes on the master in step 3 of the preceding section to configure a streaming replication. The following is a description of these parameters:

listen_addresses: This parameter is used to provide the IP address associated with the interface that you want to have PostgreSQL listen to. A value of * indicates all available IP addresses.
wal_level: This parameter determines the level of WAL logging done. Specify hot_standby for streaming replication.
wal_keep_segments: This parameter specifies the number of 16 MB WAL files to be retained in the pg_xlog directory. The rule of thumb is that more such files might be required to handle a large checkpoint.
archive_mode: Setting this parameter enables completed WAL segments to be sent to the archive storage.
archive_command: This parameter is basically a shell command that is executed whenever a WAL segment is completed. In our case, we are basically copying the file to the local machine and then using the secure copy command to send it across to the slave.
max_wal_senders: This parameter specifies the total number of concurrent connections allowed from the slave servers.
checkpoint_segments: This parameter specifies the maximum number of logfile segments between automatic WAL checkpoints.
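Once the slave is running and the table test in the preceding steps succeeds, there is also a quicker, server-level way to confirm that streaming replication is active. The following checks are not part of the original recipe, but they rely only on standard PostgreSQL 9.3 features, namely the pg_stat_replication view on the master and the pg_is_in_recovery() function on the slave:

psql -U postgres -h 192.168.0.4 -c "SELECT client_addr, state, sent_location, replay_location FROM pg_stat_replication;"
psql -U postgres -h 192.168.0.5 -c "SELECT pg_is_in_recovery();"

The first command should return one row per connected standby, with state showing streaming once the slave has caught up; the second should return t on the slave. Should you ever need to promote the slave manually, creating the trigger file configured earlier (for example, touch /tmp/trigger.replication on the slave) ends recovery and opens the standby for read/write connections.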
Once the necessary configuration changes have been made on the master server, we then restart the PostgreSQL server on the master in order to let the new configuration changes take effect. This is done in step 4 of the preceding section. In step 5 of the preceding section, we are basically building the slave by copying the primary server's data directory to the slave. Now, with the data directory available on the slave, the next step is to configure it. We will now make the necessary replication-related parameter changes in the postgresql.conf file on the slave server. We set the following parameter on the slave:

hot_standby: This parameter determines whether you can connect and run queries when the server is in the archive recovery or standby mode.

In the next step, we are configuring the recovery.conf file. This is required to be set up so that the slave can start receiving logs from the master. The parameters explained next are configured in the recovery.conf file on the slave.

standby_mode: This parameter, when enabled, causes PostgreSQL to work as a standby in a replication configuration.
primary_conninfo: This parameter specifies the connection information used by the slave to connect to the master. For our scenario, our master server is set as 192.168.0.4 on port 5432 and we are using the repuser userid with the password charlie to make a connection to the master. Remember that repuser was the userid which was created in the initial step of the preceding section for this purpose, that is, connecting to the master from the slave.
trigger_file: When a slave is configured as a standby, it will continue to restore the XLOG records from the master. The trigger_file parameter specifies what is used to trigger a slave, in order to switch over its duties from standby and take over as the master or primary server.

At this stage, the slave is fully configured and we can start the slave server; then, the replication process begins. This is shown in step 8 of the preceding section. In steps 9 and 10 of the preceding section, we are simply testing our replication. We first begin by creating a test database, then we log in to the test database and create a table by the name testtable, and then we begin inserting some records into the testtable table. Now, our purpose is to see whether these changes are replicated across the slave. To test this, we log in to the slave on the test database and then query the records from the testtable table, as seen in step 10 of the preceding section. The final result that we see is that all the records that are changed/inserted on the primary server are visible on the slave. This completes our streaming replication setup and configuration.

Replication using Slony-I

Here, we are going to set up replication using Slony-I. We will be setting up the replication of table data between two databases on the same server.

Getting ready

The steps performed in this recipe are carried out on a CentOS Version 6 machine. It is also important to remove the directives related to hot streaming replication prior to setting up replication using Slony-I. We will first need to install Slony-I. The following steps need to be performed in order to install Slony-I:

First, go to http://slony.info/downloads/2.2/source/ and download the given software. Once you have downloaded the Slony-I software, the next step is to unzip the .tar file and then go to the newly created directory.
Before doing this, please ensure that you have the postgresql-devel package for the corresponding PostgreSQL version installed before you install Slony-I: tar xvfj slony1-2.2.3.tar.bz2  cd slony1-2.2.3 In the next step, we are going to configure, compile, and build the software: ./configure --with-pgconfigdir=/usr/pgsql-9.3/bin/ make make install How to do it... You need to perform the following sequence of steps, in order to replicate data between two tables using Slony-I replication: First, start the PostgreSQL server if you have not already started it: pg_ctl -D $PGDATA start In the next step, we will be creating two databases, test1 and test2, which will be used as the source and target databases respectively: createdb test1 createdb test2 In the next step, we will create the t_test table on the source database, test1, and insert some records into it: psql -d test1 test1=# create table t_test (id numeric primary key, name varchar);   test1=# insert into t_test values(1,'A'),(2,'B'), (3,'C'); We will now set up the target database by copying the table definitions from the test1 source database: pg_dump -s -p 5432 -h localhost test1 | psql -h localhost -p 5432 test2 We will now connect to the target database, test2, and verify that there is no data in the tables of the test2 database: psql -d test2 test2=# select * from t_test; We will now set up a slonik script for the master-slave, that is source/target, setup. In this scenario, since we are replicating between two different databases on the same server, the only different connection string option will be the database name: cd /usr/pgsql-9.3/bin vi init_master.slonik   #!/bin/sh cluster name = mycluster; node 1 admin conninfo = 'dbname=test1 host=localhost port=5432 user=postgres password=postgres'; node 2 admin conninfo = 'dbname=test2 host=localhost port=5432 user=postgres password=postgres'; init cluster ( id=1); create set (id=1, origin=1); set add table(set id=1, origin=1, id=1, fully qualified name = 'public.t_test'); store node (id=2, event node = 1); store path (server=1, client=2, conninfo='dbname=test1 host=localhost port=5432 user=postgres password=postgres'); store path (server=2, client=1, conninfo='dbname=test2 host=localhost port=5432 user=postgres password=postgres'); store listen (origin=1, provider = 1, receiver = 2);  store listen (origin=2, provider = 2, receiver = 1); We will now create a slonik script for subscription to the slave, that is, target: cd /usr/pgsql-9.3/bin vi init_slave.slonik #!/bin/sh cluster name = mycluster; node 1 admin conninfo = 'dbname=test1 host=localhost port=5432 user=postgres password=postgres'; node 2 admin conninfo = 'dbname=test2 host=localhost port=5432 user=postgres password=postgres'; subscribe set ( id = 1, provider = 1, receiver = 2, forward = no); We will now run the init_master.slonik script created in step 6 and run this on the master, as follows: cd /usr/pgsql-9.3/bin   slonik init_master.slonik We will now run the init_slave.slonik script created in step 7 and run this on the slave, that is, target: cd /usr/pgsql-9.3/bin   slonik init_slave.slonik In the next step, we will start the master slon daemon: nohup slon mycluster "dbname=test1 host=localhost port=5432 user=postgres password=postgres" & In the next step, we will start the slave slon daemon: nohup slon mycluster "dbname=test2 host=localhost port=5432 user=postgres password=postgres" & Next, we will connect to the master, that is, the test1 source database, and insert some records in the t_test table: psql -d test1 
test1=# insert into t_test values (5,'E'); We will now test for the replication by logging on to the slave, that is, the test2 target database, and see whether the inserted records in the t_test table are visible: psql -d test2   test2=# select * from t_test; id | name ----+------ 1 | A 2 | B 3 | C 5 | E (4 rows) How it works... We will now discuss the steps performed in the preceding section: In step 1, we first start the PostgreSQL server if it is not already started. In step 2, we create two databases, namely test1 and test2, that will serve as our source (master) and target (slave) databases. In step 3, we log in to the test1 source database, create a t_test table, and insert some records into the table. In step 4, we set up the target database, test2, by copying the table definitions present in the source database and loading them into test2 using the pg_dump utility. In step 5, we log in to the target database, test2, and verify that there are no records present in the t_test table because in step 4, we only extracted the table definitions into the test2 database from the test1 database. In step 6, we set up a slonik script for the master-slave replication setup. In the init_master.slonik file, we first define the cluster name as mycluster. We then define the nodes in the cluster. Each node will have a number associated with a connection string, which contains database connection information. The node entry is defined both for the source and target databases. The store_path commands are necessary, so that each node knows how to communicate with the other. In step 7, we set up a slonik script for the subscription of the slave, that is, the test2 target database. Once again, the script contains information such as the cluster name and the node entries that are designated a unique number related to connection string information. It also contains a subscriber set. In step 8, we run the init_master.slonik file on the master. Similarly, in step 9, we run the init_slave.slonik file on the slave. In step 10, we start the master slon daemon. In step 11, we start the slave slon daemon. The subsequent steps, 12 and 13, are used to test for replication. For this purpose, in step 12 of the preceding section, we first log in to the test1 source database and insert some records into the t_test table. To check whether the newly inserted records have been replicated in the target database, test2, we log in to the test2 database in step 13. The result set obtained from the output of the query confirms that the changed/inserted records on the t_test table in the test1 database are successfully replicated across the target database, test2. For more information on Slony-I replication, go to http://slony.info/documentation/tutorial.html. There's more... If you are using Slony-I for replication between two different servers, in addition to the steps mentioned in the How to do it… section, you will also have to enable authentication information in the pg_hba.conf file existing on both the source and target servers. For example, let's assume that the source server's IP is 192.168.16.44 and the target server's IP is 192.168.16.56 and we are using a user named super to replicate the data. 
If this is the situation, then in the source server's pg_hba.conf file, we will have to enter the information, as follows:

host         postgres         super     192.168.16.44/32           md5

Similarly, in the target server's pg_hba.conf file, we will have to enter the authentication information, as follows:

host         postgres         super     192.168.16.56/32           md5

Also, in the shell scripts that were used for Slony-I, wherever the connection information for the host is localhost, that entry will need to be replaced by the source and target server's IP addresses.

Replication using Londiste

In this recipe, we are going to show you how to replicate data using Londiste.

Getting ready

For this setup, we are using the same host CentOS Linux machine to replicate data between two databases. This can also be set up using two separate Linux machines running on VMware, VirtualBox, or any other virtualization software. It is assumed that the latest version of PostgreSQL, version 9.3, is installed. We used CentOS Version 6 as the Linux operating system for this exercise. To set up Londiste replication on the Linux machine, perform the following steps:

Go to http://pgfoundry.org/projects/skytools/ and download the latest version of Skytools 3.2, that is, the tarball skytools-3.2.tar.gz. Extract the tarball file, as follows:

tar -xvzf skytools-3.2.tar.gz

Go to the new location and build and compile the software:

cd skytools-3.2
./configure --prefix=/var/lib/pgsql/9.3/Sky --with-pgconfig=/usr/pgsql-9.3/bin/pg_config
make
make install

Also, set the PYTHONPATH environment variable, as shown here. Alternatively, you can also set it in the .bash_profile script:

export PYTHONPATH=/opt/PostgreSQL/9.2/Sky/lib64/python2.6/site-packages/

How to do it...

We are going to perform the following sequence of steps to set up replication between two different databases using Londiste. First, create the two databases between which replication has to occur:

createdb node1
createdb node2

Populate the node1 database with data using the pgbench utility:

pgbench -i -s 2 -F 80 node1

Add any primary key and foreign keys to the tables in the node1 database that are needed for replication. Create the following .sql file and add the following lines to it:

vi /tmp/prepare_pgbenchdb_for_londiste.sql
-- add primary key to history table
ALTER TABLE pgbench_history ADD COLUMN hid SERIAL PRIMARY KEY;

-- add foreign keys
ALTER TABLE pgbench_tellers ADD CONSTRAINT pgbench_tellers_branches_fk FOREIGN KEY(bid) REFERENCES pgbench_branches;
ALTER TABLE pgbench_accounts ADD CONSTRAINT pgbench_accounts_branches_fk FOREIGN KEY(bid) REFERENCES pgbench_branches;
ALTER TABLE pgbench_history ADD CONSTRAINT pgbench_history_branches_fk FOREIGN KEY(bid) REFERENCES pgbench_branches;
ALTER TABLE pgbench_history ADD CONSTRAINT pgbench_history_tellers_fk FOREIGN KEY(tid) REFERENCES pgbench_tellers;
ALTER TABLE pgbench_history ADD CONSTRAINT pgbench_history_accounts_fk FOREIGN KEY(aid) REFERENCES pgbench_accounts;

We will now load the .sql file created in the previous step into the node1 database:

psql node1 -f /tmp/prepare_pgbenchdb_for_londiste.sql

We will now populate the node2 database with table definitions from the tables in the node1 database:

pg_dump -s -t 'pgbench*' node1 > /tmp/tables.sql
psql -f /tmp/tables.sql node2

Now starts the process of replication.
We will first create the londiste3.ini configuration file with the following parameters in order to set up the root node for the source database, node1:

vi londiste3.ini

[londiste3]
job_name = first_table
db = dbname=node1
queue_name = replication_queue
logfile = /home/postgres/log/londiste.log
pidfile = /home/postgres/pid/londiste.pid

In the next step, we are going to use the londiste3.ini configuration file created in the previous step to set up the root node for the node1 database, as shown here:

[postgres@localhost bin]$ ./londiste3 londiste3.ini create-root node1 dbname=node1
2014-12-09 18:54:34,723 2335 WARNING No host= in public connect string, bad idea
2014-12-09 18:54:35,210 2335 INFO plpgsql is installed
2014-12-09 18:54:35,217 2335 INFO pgq is installed
2014-12-09 18:54:35,225 2335 INFO pgq.get_batch_cursor is installed
2014-12-09 18:54:35,227 2335 INFO pgq_ext is installed
2014-12-09 18:54:35,228 2335 INFO pgq_node is installed
2014-12-09 18:54:35,230 2335 INFO londiste is installed
2014-12-09 18:54:35,232 2335 INFO londiste.global_add_table is installed
2014-12-09 18:54:35,281 2335 INFO Initializing node
2014-12-09 18:54:35,285 2335 INFO Location registered
2014-12-09 18:54:35,447 2335 INFO Node "node1" initialized for queue "replication_queue" with type "root"
2014-12-09 18:54:35,465 2335 INFO Done

We will now run the worker daemon for the root node:

[postgres@localhost bin]$ ./londiste3 londiste3.ini worker
2014-12-09 18:55:17,008 2342 INFO Consumer uptodate = 1

In the next step, we will create a slave.ini configuration file in order to make a leaf node for the node2 target database:

vi slave.ini

[londiste3]
job_name = first_table_slave
db = dbname=node2
queue_name = replication_queue
logfile = /home/postgres/log/londiste_slave.log
pidfile = /home/postgres/pid/londiste_slave.pid

We will now initialize the node in the target database:

./londiste3 slave.ini create-leaf node2 dbname=node2 --provider=dbname=node1
2014-12-09 18:57:22,769 2408 WARNING No host= in public connect string, bad idea
2014-12-09 18:57:22,778 2408 INFO plpgsql is installed
2014-12-09 18:57:22,778 2408 INFO Installing pgq
2014-12-09 18:57:22,778 2408 INFO   Reading from /var/lib/pgsql/9.3/Sky/share/skytools3/pgq.sql
2014-12-09 18:57:23,211 2408 INFO pgq.get_batch_cursor is installed
2014-12-09 18:57:23,212 2408 INFO Installing pgq_ext
2014-12-09 18:57:23,213 2408 INFO   Reading from /var/lib/pgsql/9.3/Sky/share/skytools3/pgq_ext.sql
2014-12-09 18:57:23,454 2408 INFO Installing pgq_node
2014-12-09 18:57:23,455 2408 INFO   Reading from /var/lib/pgsql/9.3/Sky/share/skytools3/pgq_node.sql
2014-12-09 18:57:23,729 2408 INFO Installing londiste
2014-12-09 18:57:23,730 2408 INFO   Reading from /var/lib/pgsql/9.3/Sky/share/skytools3/londiste.sql
2014-12-09 18:57:24,391 2408 INFO londiste.global_add_table is installed
2014-12-09 18:57:24,575 2408 INFO Initializing node
2014-12-09 18:57:24,705 2408 INFO Location registered
2014-12-09 18:57:24,715 2408 INFO Location registered
2014-12-09 18:57:24,744 2408 INFO Subscriber registered: node2
2014-12-09 18:57:24,748 2408 INFO Location registered
2014-12-09 18:57:24,750 2408 INFO Location registered
2014-12-09 18:57:24,757 2408 INFO Node "node2" initialized for queue "replication_queue" with type "leaf"
2014-12-09 18:57:24,761 2408 INFO Done

We will now launch the worker daemon for the target database, that is, node2:

[postgres@localhost bin]$ ./londiste3 slave.ini worker
2014-12-09 18:58:53,411 2423 INFO Consumer uptodate = 1

We will now create the configuration file, that
is pgqd.ini, for the ticker daemon:

vi pgqd.ini

[pgqd]
logfile = /home/postgres/log/pgqd.log
pidfile = /home/postgres/pid/pgqd.pid

Using the configuration file created in the previous step, we will launch the ticker daemon:

[postgres@localhost bin]$ ./pgqd pgqd.ini
2014-12-09 19:05:56.843 2542 LOG Starting pgqd 3.2
2014-12-09 19:05:56.844 2542 LOG auto-detecting dbs ...
2014-12-09 19:05:57.257 2542 LOG node1: pgq version ok: 3.2
2014-12-09 19:05:58.130 2542 LOG node2: pgq version ok: 3.2

We will now add all the tables to the replication on the root node:

[postgres@localhost bin]$ ./londiste3 londiste3.ini add-table --all
2014-12-09 19:07:26,064 2614 INFO Table added: public.pgbench_accounts
2014-12-09 19:07:26,161 2614 INFO Table added: public.pgbench_branches
2014-12-09 19:07:26,238 2614 INFO Table added: public.pgbench_history
2014-12-09 19:07:26,287 2614 INFO Table added: public.pgbench_tellers

Similarly, add all the tables to the replication on the leaf node:

[postgres@localhost bin]$ ./londiste3 slave.ini add-table --all

We will now generate some traffic on the node1 source database:

pgbench -T 10 -c 5 node1

We will now use the compare utility available with the londiste3 command to check that the tables in both the nodes, that is, the source database (node1) and the destination database (node2), have the same data:

[postgres@localhost bin]$ ./londiste3 slave.ini compare
2014-12-09 19:26:16,421 2982 INFO Checking if node1 can be used for copy
2014-12-09 19:26:16,424 2982 INFO Node node1 seems good source, using it
2014-12-09 19:26:16,425 2982 INFO public.pgbench_accounts: Using node node1 as provider
2014-12-09 19:26:16,441 2982 INFO Provider: node1 (root)
2014-12-09 19:26:16,446 2982 INFO Locking public.pgbench_accounts
2014-12-09 19:26:16,447 2982 INFO Syncing public.pgbench_accounts
2014-12-09 19:26:18,975 2982 INFO Counting public.pgbench_accounts
2014-12-09 19:26:19,401 2982 INFO srcdb: 200000 rows, checksum=167607238449
2014-12-09 19:26:19,706 2982 INFO dstdb: 200000 rows, checksum=167607238449
2014-12-09 19:26:19,715 2982 INFO Checking if node1 can be used for copy
2014-12-09 19:26:19,716 2982 INFO Node node1 seems good source, using it
2014-12-09 19:26:19,716 2982 INFO public.pgbench_branches: Using node node1 as provider
2014-12-09 19:26:19,730 2982 INFO Provider: node1 (root)
2014-12-09 19:26:19,734 2982 INFO Locking public.pgbench_branches
2014-12-09 19:26:19,734 2982 INFO Syncing public.pgbench_branches
2014-12-09 19:26:22,772 2982 INFO Counting public.pgbench_branches
2014-12-09 19:26:22,804 2982 INFO srcdb: 2 rows, checksum=-3078609798
2014-12-09 19:26:22,812 2982 INFO dstdb: 2 rows, checksum=-3078609798
2014-12-09 19:26:22,866 2982 INFO Checking if node1 can be used for copy
2014-12-09 19:26:22,877 2982 INFO Node node1 seems good source, using it
2014-12-09 19:26:22,878 2982 INFO public.pgbench_history: Using node node1 as provider
2014-12-09 19:26:22,919 2982 INFO Provider: node1 (root)
2014-12-09 19:26:22,931 2982 INFO Locking public.pgbench_history
2014-12-09 19:26:22,932 2982 INFO Syncing public.pgbench_history
2014-12-09 19:26:25,963 2982 INFO Counting public.pgbench_history
2014-12-09 19:26:26,008 2982 INFO srcdb: 715 rows, checksum=9467587272
2014-12-09 19:26:26,020 2982 INFO dstdb: 715 rows, checksum=9467587272
2014-12-09 19:26:26,056 2982 INFO Checking if node1 can be used for copy
2014-12-09 19:26:26,063 2982 INFO Node node1 seems good source, using it
2014-12-09 19:26:26,064 2982 INFO public.pgbench_tellers: Using node node1 as provider
2014-12-09 19:26:26,100 2982 INFO Provider: node1 (root)
2014-12-09 19:26:26,108 2982 INFO Locking public.pgbench_tellers
2014-12-09 19:26:26,109 2982 INFO Syncing public.pgbench_tellers
2014-12-09 19:26:29,144 2982 INFO Counting public.pgbench_tellers
2014-12-09 19:26:29,176 2982 INFO srcdb: 20 rows, checksum=4814381032
2014-12-09 19:26:29,182 2982 INFO dstdb: 20 rows, checksum=4814381032

How it works...

The following is an explanation of the steps performed in the preceding section:

Initially, in step 1, we create two databases, that is, node1 and node2, which are used as the source and target databases, respectively, from a replication perspective. In step 2, we populate the node1 database using the pgbench utility. In step 3 of the preceding section, we add and define the respective primary key and foreign key relationships on different tables and put these DDL commands in a .sql file. In step 4, we execute these DDL commands stated in step 3 on the node1 database; in this way, we force the primary key and foreign key definitions on the tables in the pgbench schema in the node1 database. In step 5, we extract the table definitions from the tables in the pgbench schema in the node1 database and load these definitions into the node2 database.

We will now discuss steps 6 to 8 of the preceding section. In step 6, we create the configuration file, which is then used in step 7 to create the root node for the node1 source database. In step 8, we launch the worker daemon for the root node. Regarding the entries mentioned in the configuration file in step 6, we first define a job that must have a name so that the process can be easily identified. Then, we define a connect string with information to connect to the source database, that is, node1, and then we define the name of the replication queue involved. Finally, we define the location of the log and pid files.

We will now discuss steps 9 to 11 of the preceding section. In step 9, we define the configuration file, which is then used in step 10 to create the leaf node for the target database, that is, node2. In step 11, we launch the worker daemon for the leaf node. The entries in the configuration file in step 9 contain the job_name, the connect string used to connect to the target database (node2), the name of the replication queue involved, and the location of the log and pid files. The key part played in step 11 is for the slave, that is, the target database, to find the master or provider, that is, the source database node1.

We will now talk about steps 12 and 13 of the preceding section. In step 12, we define the ticker configuration, with the help of which we launch the ticker process mentioned in step 13. Once the ticker daemon has started successfully, we have all the components and processes set up and needed for replication; however, we have not yet defined what the system needs to replicate. In steps 14 and 15, we add the tables to the replication set on both the source and target databases, that is, node1 and node2, respectively.

Finally, we will talk about steps 16 and 17 of the preceding section. At this stage, we are testing the replication that was set up between the node1 source database and the node2 target database. In step 16, we generate some traffic on the node1 source database by running pgbench with five parallel database connections and generating traffic for 10 seconds. In step 17, we check whether the tables on both the source and target databases have the same data.
For this purpose, we use the compare command on the provider and subscriber nodes and then count and checksum the rows on both sides. A partial output from the preceding section tells you that the data has been successfully replicated between all the tables that are part of the replication set up between the node1 source database and the node2 destination database, as the count and checksum of rows for all the tables on the source and destination databases are matching:

2014-12-09 19:26:18,975 2982 INFO Counting public.pgbench_accounts
2014-12-09 19:26:19,401 2982 INFO srcdb: 200000 rows, checksum=167607238449
2014-12-09 19:26:19,706 2982 INFO dstdb: 200000 rows, checksum=167607238449

2014-12-09 19:26:22,772 2982 INFO Counting public.pgbench_branches
2014-12-09 19:26:22,804 2982 INFO srcdb: 2 rows, checksum=-3078609798
2014-12-09 19:26:22,812 2982 INFO dstdb: 2 rows, checksum=-3078609798

2014-12-09 19:26:25,963 2982 INFO Counting public.pgbench_history
2014-12-09 19:26:26,008 2982 INFO srcdb: 715 rows, checksum=9467587272
2014-12-09 19:26:26,020 2982 INFO dstdb: 715 rows, checksum=9467587272

2014-12-09 19:26:29,144 2982 INFO Counting public.pgbench_tellers
2014-12-09 19:26:29,176 2982 INFO srcdb: 20 rows, checksum=4814381032
2014-12-09 19:26:29,182 2982 INFO dstdb: 20 rows, checksum=4814381032

Summary

This article demonstrates the high availability and replication concepts in PostgreSQL. After reading this article, you will be able to implement high availability and replication options using different techniques, including streaming replication, Slony-I replication, and replication using Londiste.

Resources for Article:
Further resources on this subject:
Running a PostgreSQL Database Server [article]
Securing the WAL Stream [article]
Recursive queries [article]
Warming Up

Packt
06 Feb 2015
11 min read
In this article by Bater Makhabel, author of Learning Data Mining with R, you will learn basic data mining terms such as data definition, preprocessing, and so on. (For more resources related to this topic, see here.)

The most important data mining algorithms will be illustrated with R to help you grasp the principles quickly, including, but not limited to, classification, clustering, and outlier detection. Before diving right into data mining, let's have a look at the topics we'll cover:

Data mining
Social network mining

Throughout the history of humankind, data has accumulated from every aspect of our lives, for example, websites, social networks (by a user's e-mail, name, or account), search terms, locations on a map, companies, IP addresses, books, films, music, and products. Data mining techniques can be applied to any kind of old or emerging data; each data type can be best dealt with using certain, but not all, techniques. In other words, the data mining techniques are constrained by the data type, the size of the dataset, the context of the tasks applied, and so on. Every dataset has its own appropriate data mining solutions.

New data mining techniques always need to be researched along with new data types once the old techniques cannot be applied to them or the new data type cannot be transformed into one of the traditional data types. The evolution of stream mining algorithms applied to Twitter's huge source set is one typical example. The graph mining algorithms developed for social networks are another example. The most popular and basic forms of data are from databases, data warehouses, ordered/sequence data, graph data, text data, and so on. In other words, they are federated data, high dimensional data, longitudinal data, streaming data, web data, numeric, categorical, or text data.

Big data

Big data is a large amount of data that does not fit in the memory of a single machine. In other words, the size of the data itself becomes a part of the issue when studying it. Besides volume, two other major characteristics of big data are variety and velocity; these are the famous three Vs of big data. Velocity means the data processing rate or how fast the data is being processed. Variety denotes various data source types. Noise arises more frequently in big data source sets and affects the mining results, which requires efficient data preprocessing algorithms. As a result, distributed filesystems are used as tools for the successful implementation of parallel algorithms on large amounts of data; it is a certainty that we will get even more data with each passing second. Data analytics and visualization techniques are the primary factors of the data mining tasks related to massive data. Some data types that are important to big data are as follows:

The data from camera video, which includes more metadata for analysis to expedite crime investigations, enhanced retail analysis, military intelligence, and so on.
The second data type is from embedded sensors, such as medical sensors, to monitor any potential outbreaks of a virus.
The third data type is from entertainment, that is, information freely published through social media by anyone.
The last data type is consumer images, aggregated from social media; the tags and likes attached to these images are important.

Here is a table illustrating the history of data size growth. It shows that information will more than double every two years, changing the way researchers or companies manage and extract value through data mining techniques from data, revealing new data mining studies.
Year      | Data size | Comments
N/A       |           | 1 MB (Megabyte) = 2^20 bytes. The human brain holds about 200 MB of information.
N/A       |           | 1 PB (Petabyte) = 2^50 bytes. It is similar to the size of 3 years' observation data for Earth by NASA and is equivalent to 70.8 times the books in America's Library of Congress.
1999      | 1 EB      | 1 EB (Exabyte) = 2^60 bytes. The world produced 1.5 EB of unique information.
2007      | 281 EB    | The world produced about 281 EB of unique information.
2011      | 1.8 ZB    | 1 ZB (Zettabyte) = 2^70 bytes. This is all the data gathered by human beings in 2011.
Very soon |           | 1 YB (Yottabyte) = 2^80 bytes.

Scalability and efficiency

Efficiency, scalability, performance, optimization, and the ability to perform in real time are important issues for almost any algorithm, and it is the same for data mining. These are always necessary metrics or benchmark factors for data mining algorithms. As the amount of data continues to grow, keeping data mining algorithms effective and scalable is necessary to effectively extract information from massive datasets in many data repositories or data streams. The move of data storage from a single machine to wide distribution, the huge size of many datasets, and the computational complexity of the data mining methods are all factors that drive the development of parallel and distributed data-intensive mining algorithms.

Data source

Data serves as the input for the data mining system, and data repositories are important. In an enterprise environment, databases and logfiles are common sources. In web data mining, web pages are the source of data. The data continuously fetched from various sensors is also a typical data source. Here are some free online data sources particularly helpful to learn about data mining:

Frequent Itemset Mining Dataset Repository: A repository with datasets for methods to find frequent itemsets (http://fimi.ua.ac.be/data/).
UCI Machine Learning Repository: This is a collection of datasets, suitable for classification tasks (http://archive.ics.uci.edu/ml/).
The Data and Story Library at statlib: DASL (pronounced "dazzle") is an online library of data files and stories that illustrate the use of basic statistics methods. We hope to provide data from a wide variety of topics so that statistics teachers can find real-world examples that will be interesting to their students. Use DASL's powerful search engine to locate the story or data file of interest. (http://lib.stat.cmu.edu/DASL/)
WordNet: This is a lexical database for English (http://wordnet.princeton.edu)

Data mining

Data mining is the discovery of a model in data; it's also called exploratory data analysis, and discovers useful, valid, unexpected, and understandable knowledge from the data. Some goals are shared with other sciences, such as statistics, artificial intelligence, machine learning, and pattern recognition. Data mining has been frequently treated as an algorithmic problem in most cases. Clustering, classification, association rule learning, anomaly detection, regression, and summarization are all part of the tasks belonging to data mining. The data mining methods can be summarized into two main categories of data mining problems: feature extraction and summarization.

Feature extraction

This is to extract the most prominent features of the data and ignore the rest. Here are some examples:

Frequent itemsets: This model makes sense for data that consists of baskets of small sets of items.
Similar items: Sometimes your data looks like a collection of sets and the objective is to find pairs of sets that have a relatively large fraction of their elements in common. It's a fundamental problem of data mining. Summarization The target is to summarize the dataset succinctly and approximately, such as clustering, which is the process of examining a collection of points (data) and grouping the points into clusters according to some measure. The goal is that points in the same cluster have a small distance from one another, while points in different clusters are at a large distance from one another. The data mining process There are two popular processes to define the data mining process in different perspectives, and the more widely adopted one is CRISP-DM: Cross-Industry Standard Process for Data Mining (CRISP-DM) Sample, Explore, Modify, Model, Assess (SEMMA), which was developed by the SAS Institute, USA CRISP-DM There are six phases in this process that are shown in the following figure; it is not rigid, but often has a great deal of backtracking: Let's look at the phases in detail: Business understanding: This task includes determining business objectives, assessing the current situation, establishing data mining goals, and developing a plan. Data understanding: This task evaluates data requirements and includes initial data collection, data description, data exploration, and the verification of data quality. Data preparation: Once available, data resources are identified in the last step. Then, the data needs to be selected, cleaned, and then built into the desired form and format. Modeling: Visualization and cluster analysis are useful for initial analysis. The initial association rules can be developed by applying tools such as generalized rule induction. This is a data mining technique to discover knowledge represented as rules to illustrate the data in the view of causal relationship between conditional factors and a given decision/outcome. The models appropriate to the data type can also be applied. Evaluation :The results should be evaluated in the context specified by the business objectives in the first step. This leads to the identification of new needs and in turn reverts to the prior phases in most cases. Deployment: Data mining can be used to both verify previously held hypotheses or for knowledge. SEMMA Here is an overview of the process for SEMMA: Let's look at these processes in detail: Sample: In this step, a portion of a large dataset is extracted Explore: To gain a better understanding of the dataset, unanticipated trends and anomalies are searched in this step Modify: The variables are created, selected, and transformed to focus on the model construction process Model: A variable combination of models is searched to predict a desired outcome Assess: The findings from the data mining process are evaluated by its usefulness and reliability Social network mining As we mentioned before, data mining finds a model on data and the mining of social network finds the model on graph data in which the social network is represented. Social network mining is one application of web data mining; the popular applications are social sciences and bibliometry, PageRank and HITS, shortcomings of the coarse-grained graph model, enhanced models and techniques, evaluation of topic distillation, and measuring and modeling the Web. Social network When it comes to the discussion of social networks, you will think of Facebook, Google+, LinkedIn, and so on. 
The essential characteristics of a social network are as follows:

There is a collection of entities that participate in the network. Typically, these entities are people, but they could be something else entirely.
There is at least one relationship between the entities of the network. On Facebook, this relationship is called friends. Sometimes, the relationship is all-or-nothing; two people are either friends or they are not. However, in other examples of social networks, the relationship has a degree. This degree could be discrete, for example, friends, family, acquaintances, or none as in Google+. It could be a real number; an example would be the fraction of the average day that two people spend talking to each other.
There is an assumption of nonrandomness or locality. This condition is the hardest to formalize, but the intuition is that relationships tend to cluster. That is, if entity A is related to both B and C, then there is a higher probability than average that B and C are related.

Here are some varieties of social networks:

Telephone networks: The nodes in this network are phone numbers and represent individuals
E-mail networks: The nodes represent e-mail addresses, which represent individuals
Collaboration networks: The nodes here represent individuals who published research papers; the edge connecting two nodes represents two individuals who published one or more papers jointly

Social networks are modeled as undirected graphs. The entities are the nodes, and an edge connects two nodes if the nodes are related by the relationship that characterizes the network. If there is a degree associated with the relationship, this degree is represented by labeling the edges.

Here is an example in which Coleman's High School Friendship Data from the sna R package is used for analysis. The data is from research on friendship ties between 73 boys in a high school in one chosen academic year; reported ties for all informants are provided for two time points (fall and spring). The dataset's name is coleman, and it is stored as an array in R. A node denotes a specific student and a line represents the tie between two students. A short R sketch showing how to load and plot this dataset is given at the end of this article.

Summary

The book has, as showcased in this article, a lot more interesting coverage with regard to data mining and R, diving deep into the algorithms associated with data mining and efficient methods to implement them using R.

Resources for Article:
Further resources on this subject:
Multiplying Performance with Parallel Computing [article]
Supervised learning [article]
Using R for Statistics, Research, and Graphics [article]
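As mentioned above, here is a minimal R sketch of the coleman friendship data example. It is not taken from the book; it only assumes that the sna package is installed:

library(sna)              # provides the coleman dataset and the gplot() function
data(coleman)             # a 2 x 73 x 73 array: fall and spring friendship ties
dim(coleman)              # prints 2 73 73
gplot(coleman[1, , ])     # plot the fall (first) wave of reported ties
apply(coleman, 1, sum)    # the number of reported ties in each wave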
Structural Equation Modeling and Confirmatory Factor Analysis

Packt
06 Feb 2015
30 min read
In this article by Paul Gerrard and Radia M. Johnson, the authors of Mastering Scientific Computation with R, we'll discuss the fundamental ideas underlying structural equation modeling, which are often overlooked in other books discussing structural equation modeling (SEM) in R, and then delve into how SEM is done in R. We will then discuss two R packages, OpenMx and lavaan. We can directly apply our discussion of the linear algebra underlying SEM using OpenMx. Because of this, we will go over OpenMx first. We will then discuss lavaan, which is probably more user friendly because it sweeps the matrices and linear algebra representations under the rug so that they are invisible unless the user really goes looking for them. Both packages continue to be developed and there will always be some features better supported in one of these packages than in the other. (For more resources related to this topic, see here.)

SEM model fitting and estimation methods

To ultimately find a good solution, software has to use trial and error to come up with an implied covariance matrix that matches the observed covariance matrix as well as possible. The question is what does "as well as possible" mean? The answer to this is that the software must try to minimize some particular criterion, usually some sort of discrepancy function. Just what that criterion is depends on the estimation method used. The most commonly used estimation methods in SEM include:

Ordinary least squares (OLS), also called unweighted least squares
Generalized least squares (GLS)
Maximum likelihood (ML)

There are a number of other estimation methods as well, some of which can be done in R, but here we will stick with describing the most common ones. In general, OLS is the simplest and computationally cheapest estimation method. GLS is computationally more demanding, and ML is computationally more intensive still. We will see why this is, as we discuss the details of these estimation methods.

Any SEM estimation method seeks to estimate model parameters that recreate the observed covariance matrix as well as possible. To evaluate how closely an implied covariance matrix matches an observed covariance matrix, we need a discrepancy function. If we assume multivariate normality of the observed variables, the following function can be used to assess discrepancy. Here, R is the observed covariance matrix, C is the implied covariance matrix, and V is a weight matrix. The tr function refers to the trace function, which sums the elements of the main diagonal. The choice of V varies based on the SEM estimation method:

For OLS, V = I
For GLS, V = R^-1

In the case of ML estimation, we seek to minimize one of a number of similar criteria; a commonly used ML criterion is described next. Here, n is the number of variables. There are a couple of points worth noting here. GLS estimation inverts the observed covariance matrix, something computationally demanding with large matrices, but something that must only be done once. Alternatively, ML requires inversion of the implied covariance matrix, which changes with each iteration. Thus, each iteration requires the computationally demanding step of matrix inversion. With modern fast computers, this difference may not be noticeable, but with large SEM models, this might start to be quite time-consuming.

Assessing SEM model fit

The final question in an SEM model is how well the model explains the data. This is answered with the use of SEM measures of fit.
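For reference, the discrepancy functions described in the preceding section can be written out explicitly. The two formulas below are the standard textbook forms (with R, C, V, and n as defined above) rather than reproductions of the original figures:

\[
F_{LS} \;=\; \tfrac{1}{2}\,\operatorname{tr}\!\left[\bigl((R - C)\,V\bigr)^{2}\right],
\qquad
F_{ML} \;=\; \ln\lvert C\rvert \;-\; \ln\lvert R\rvert \;+\; \operatorname{tr}\!\left(R\,C^{-1}\right) \;-\; n
\]

Setting V = I in the first expression gives OLS, and V = R^-1 gives GLS; the C^-1 term in the second expression is what forces a fresh matrix inversion at every ML iteration, as noted above. With that notation in place, we can turn to the measures used to judge model fit.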
Most of these measures are based on a chi-squared distribution. The fit criteria for GLS and ML (as well as a number of other estimation procedures such as asymptotic distribution-free methods), when multiplied by N - 1, are approximately chi-square distributed. Here, the capital N represents the number of observations in the dataset, as opposed to the lowercase n, which gives the number of variables. We compute degrees of freedom as the difference between the number of known covariances (that is, the total number of values in one triangle of an observed covariance matrix) and the number of estimated parameters. This gives rise to the first test statistic for SEM models: a chi-square significance test comparing our chi-square value to the minimum chi-square threshold needed to achieve statistical significance. As with conventional chi-square testing, a chi-square value that is higher than some minimal threshold will reject the null hypothesis. In most experimental science, such a rejection supports the hypothesis of the experiment. This is not the case in SEM, where the null hypothesis is that the model fits the data. Thus, a non-significant chi-square is an indicator of model fit, whereas a significant chi-square rejects model fit. A notable limitation of this is that a greater sample size, that is, a greater N, will increase the chi-square value and will therefore increase the power to reject model fit. Thus, using conventional chi-squared testing will tend to support models developed in small samples and reject models developed in large samples.

The choice and interpretation of fit measures is a contentious one in the SEM literature. However, as can be seen, chi-square has limitations. As such, other model fit criteria were developed that do not penalize models that fit in large samples (some may penalize models fit to small samples though). There are over a dozen indices, but the most common fit indices and their interpretation are as follows:

Comparative fit index: In this index, a higher value is better. Conventionally, a value of greater than 0.9 was considered an indicator of good model fit, but some might argue that a value of at least 0.95 is needed. This is relatively sample size insensitive.
Root mean square error of approximation: A value of under 0.08 (smaller is better) is often considered necessary to achieve model fit. However, this fit measure is quite sample size sensitive, penalizing small sample studies.
Tucker-Lewis index (non-normed fit index): This is interpreted in a similar manner to the comparative fit index. Also, this is not very sample size sensitive.
Standardized root mean square residual: In this index, a lower value is better. A value of 0.06 or less is considered needed for model fit. Also, this may penalize small samples.

In the next section, we will show you how to actually fit SEM models in R and how to evaluate fit using fit measures.

Using OpenMx and matrix specification of an SEM

We went through the basic principles of SEM and discussed the basic computational approach by which this can be achieved. SEM remains an active area of research (with an entire journal devoted to it, Structural Equation Modeling), so there are many additional peculiarities, but rather than delving into all of them, we will start by delving into actually fitting an SEM model in R.
OpenMx is not in the CRAN repository, but it is easily obtainable from the OpenMx website by typing the following in R:

source('http://openmx.psyc.virginia.edu/getOpenMx.R')

Summarizing the OpenMx approach

In this example, we will use OpenMx by specifying matrices as mentioned earlier. To fit an OpenMx model, we need to first specify the model and then tell the software to attempt to fit the model. Model specification involves four components:

Specifying the model matrices; this has two parts: declaring starting values for the estimation, and declaring which values can be estimated and which are fixed
Telling OpenMx the algebraic relationship of the matrices that should produce an implied covariance matrix
Giving an instruction for the model fitting criterion
Providing a source of data

The R commands that correspond to each of these steps are:

mxMatrix
mxAlgebra
mxMLObjective
mxData

We will then pass the objects created with each of these commands to create an SEM model using mxModel.

Explaining an entire example

First, to make things simple, we will store the FALSE and TRUE logical values in single letter variables, which will be convenient when we have matrices full of TRUE and FALSE values, as follows:

F <- FALSE
T <- TRUE

Specifying the model matrices

Specifying matrices is done with the mxMatrix function, which returns an MxMatrix object. (Note that the object starts with a capital "M" while the function starts with a lowercase "m.") Specifying an MxMatrix is much like specifying a regular R matrix, but MxMatrices have some additional components. The most notable difference is that there are actually two different matrices used to create an MxMatrix. The first is a matrix of starting values, and the second is a matrix that tells which starting values are free to be estimated and which are not. If a starting value is not freely estimable, then it is a fixed constant. Since the actual starting values that we choose do not really matter too much in this case, we will just pick one as a starting value for all parameters that we would like to be estimated.
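Before looking at the full 14 x 14 matrices used by the model, it may help to see the values/free pairing on a toy scale. The following snippet is purely illustrative and is not part of the political democracy model; it defines a hypothetical 2 x 2 matrix in which only one element is freely estimated:

toy <- mxMatrix(
  type = "Full", nrow = 2, ncol = 2,
  # starting values; 0.5 is just an arbitrary starting guess
  values = c(1, 0,
             0.5, 1),
  # only the [2,1] element is free to be estimated; the rest are fixed constants
  free = c(F, F,
           T, F),
  byrow = TRUE,
  name = "Toy"
)

The full model matrices that follow use exactly the same pattern, just at a 14 x 14 scale.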
Let's take a look at the following example: mx.A <- mxMatrix( type = "Full", nrow=14, ncol=14, #Provide the Starting Values values = c(    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0 ), #Tell R which values are free to be estimated    free = c(    F, F, F, F, F, F, F, F, F, F, F, F, F, F,    F, F, F, F, F, F, F, F, F, F, F, F, T, F,    F, F, F, F, F, F, F, F, F, F, F, F, T, F,    F, F, F, F, F, F, F, F, F, F, F, F, T, F,    F, F, F, F, F, F, F, F, F, F, F, F, F, F,    F, F, F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, F, F,    F, F, F, F, F, F, F, F, F, F, F, T, F, F,    F, F, F, F, F, F, F, F, F, F, F, T, F, F,    F, F, F, F, F, F, F, F, F, F, F, F, F, F,    F, F, F, F, F, F, F, F, F, F, F, T, F, F,    F, F, F, F, F, F, F, F, F, F, F, T, T, F ), byrow=TRUE,   #Provide a matrix name that will be used in model fitting name="A", ) We will now apply this same technique to the S matrix. Here, we will create two S matrices, S1 and S2. They differ simply in the starting values that they supply. We will later try to fit an SEM model using one matrix, and then the other to address problems with the first one. The difference is that S1 uses starting variances of 1 in the diagonal, and S2 uses starting variances of 5. Here, we will use the "symm" matrix type, which is a symmetric matrix. We could use the "full" matrix type, but by using "symm", we are saved from typing all of the symmetric values in the upper half of the matrix. 
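To make the "symm" convention concrete before the real matrices, note that only the lower triangle (including the diagonal) is supplied, row by row, and OpenMx fills in the upper half symmetrically. A minimal, hypothetical 3 x 3 illustration (not part of the model) would be:

symToy <- mxMatrix(
  type = "Symm", nrow = 3, ncol = 3,
  # lower triangle only, row by row; variances sit on the diagonal
  values = c(1,
             0, 1,
             0, 0, 1),
  # here only the diagonal (variance) elements are free
  free = c(T,
           F, T,
           F, F, T),
  byrow = TRUE,
  name = "SymToy"
)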
Let's take a look at the following matrix: mx.S1 <- mxMatrix("Symm", nrow=14, ncol=14, values = c(    1,    0, 1,    0, 0, 1,    0, 1, 0, 1,    1, 0, 0, 0, 1,    0, 1, 0, 0, 0, 1,    0, 0, 1, 0, 0, 0, 1,    0, 0, 0, 1, 0, 1, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ),      free = c(    T,    F, T,    F, F, T,    F, T, F, T,    T, F, F, F, T,    F, T, F, F, F, T,    F, F, T, F, F, F, T,    F, F, F, T, F, T, F, T,    F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, F, T ), byrow=TRUE, name="S" )   #The alternative, S2 matrix: mx.S2 <- mxMatrix("Symm", nrow=14, ncol=14, values = c(    5,    0, 5,    0, 0, 5,    0, 1, 0, 5,    1, 0, 0, 0, 5,    0, 1, 0, 0, 0, 5,    0, 0, 1, 0, 0, 0, 5,    0, 0, 0, 1, 0, 1, 0, 5,    0, 0, 0, 0, 0, 0, 0, 0, 5,    0, 0, 0, 0, 0, 0, 0, 0, 0, 5,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5 ),         free = c(    T,    F, T,    F, F, T,    F, T, F, T,    T, F, F, F, T,    F, T, F, F, F, T,    F, F, T, F, F, F, T,    F, F, F, T, F, T, F, T,    F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, F, T ), byrow=TRUE, name="S" ) mx.Filter <- mxMatrix("Full", nrow=11, ncol=14, values= c(        1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,      0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0    ),    free=FALSE,    name="Filter",    byrow = TRUE ) And finally, we will create our identity and filter matrices the same way, as follows: mx.I <- mxMatrix("Full", nrow=14, ncol=14,    values= c(        1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1    ),    free=FALSE,    byrow = TRUE,    name="I" ) Fitting the model Now, it is time to declare the model that we would like to fit using the mxModel command. This part includes steps 2 through step 4 mentioned earlier. Here, we will tell mxModel which matrices to use. 
We will then use the mxAlgebra command to tell R how the matrices should be combined to reproduce the implied covariance matrix. We will tell R to use ML estimation with the mxMLObjective command, and we will tell it to apply the estimation to a particular matrix algebra, which we named "C". This is simply the right-hand side of the McArdle-McDonald equation. Finally, we will tell R where to get the data to use in model fitting using the following code:
factorModel.1 <- mxModel("Political Democracy Model",
#Model Matrices
mx.A, mx.S1, mx.Filter, mx.I,
#Model Fitting Instructions
mxAlgebra(Filter %*% solve(I-A) %*% S %*% t(solve(I - A)) %*% t(Filter), name="C"),
mxMLObjective("C", dimnames = names(PoliticalDemocracy)),
#Data to fit
mxData(cov(PoliticalDemocracy), type="cov", numObs=75)
)
Now, let's tell R to fit the model and summarize the results using mxRun, as follows:
summary(mxRun(factorModel.1))
Running Political Democracy Model
Error in summary(mxRun(factorModel.1)) : error in evaluating the argument 'object' in selecting a method for function 'summary': Error: The job for model 'Political Democracy Model' exited abnormally with the error message: Expected covariance matrix is non-positive-definite.
Uh oh! We got an error message telling us that the expected covariance matrix is not positive definite. Our observed covariance matrix is positive definite, but the implied covariance matrix (at least at first) is not. This happens because multiplying our starting value matrices together as specified by the McArdle-McDonald equation produces a starting implied covariance matrix. If we perform an eigenvalue decomposition of this starting implied covariance matrix, we will find that the last eigenvalue is negative. That implies a negative variance, which does not make much sense, and this is what "not positive definite" refers to. The good news is that the problem lies only in our starting values, so we can fix it by modifying them. In this case, we can choose values of five along the diagonal of the S matrix and get a positive definite starting implied covariance matrix. We can rerun this using the mx.S2 matrix specified earlier and the software will proceed as follows:
#Rerun with a positive definite matrix
factorModel.2 <- mxModel("Political Democracy Model",
#Model Matrices
mx.A, mx.S2, mx.Filter, mx.I,
#Model Fitting Instructions
mxAlgebra(Filter %*% solve(I-A) %*% S %*% t(solve(I - A)) %*% t(Filter), name="C"),
mxMLObjective("C", dimnames = names(PoliticalDemocracy)),
#Data to fit
mxData(cov(PoliticalDemocracy), type="cov", numObs=75)
)
summary(mxRun(factorModel.2))
This should provide a solution. As can be seen from the previous code, the parameters solved in the model are returned as matrix components. Just like we had to figure out how to go from paths to matrices, we now have to figure out how to go from matrices to paths (the reverse problem). In the following screenshot, we show just the first few free parameters: The preceding screenshot tells us that the parameter estimated in the position of the tenth row and twelfth column of the matrix A is 2.18. This corresponds to a path from the twelfth variable in the A matrix, ind60, to the tenth variable, x2. Thus, the path coefficient from ind60 to x2 is 2.18. There are a few other pieces of information here. The first one tells us that the model has not converged but is "Mx status Green." 
This means that the model was still converging when it stopped running (that is, it did not converge), but an optimal solution was still found and therefore, the results are likely reliable. Model fit information is also provided suggesting a pretty good model fit with CFI of 0.99 and RMSEA of 0.032. This was a fair amount of work, and creating model matrices by hand from path diagrams can be quite tedious. For this reason, SEM fitting programs have generally adopted the ability to fit SEM by declaring paths rather than model matrices. OpenMx has the ability to allow declaration by paths, but applying model matrices has a few advantages. Principally, we get under the hood of SEM fitting. If we step back, we can see that OpenMx actually did very little for us that is specific to SEM. We told OpenMx how we wanted matrices multiplied together and which parameters of the matrix were free to be estimated. Instead of using the RAM specification, we could have passed the matrices of the LISREL or Bentler-Weeks models with the corresponding algebra methods to recreate an implied covariance matrix. This means that if we are trying to come up with our matrix specification, reproduce prior research, or apply a new SEM matrix specification method published in the literature, OpenMx gives us the power to do it. Also, for educators wishing to teach the underlying mathematical ideas of SEM, OpenMx is a very powerful tool. Fitting SEM models using lavaan If we were to describe OpenMx as the SEM equivalent of having a well-stocked pantry and full kitchen to create whatever you want, and you have the time and know how to do it, we might regard lavaan as a large freezer full of prepackaged microwavable dinners. It does not allow quite as much flexibility as OpenMx because it sweeps much of the work that we did by hand in OpenMx under the rug. Lavaan does use an internal matrix representation, but the user never has to see it. It is this sweeping under the rug that makes lavaan generally much easier to use. It is worth adding that the list of prepackaged features that are built into lavaan with minimal additional programming challenge many commercial SEM packages. The lavaan syntax The key to describing lavaan models is the model syntax, as follows: X =~ Y: Y is a manifestation of the latent variable X Y ~ X: Y is regressed on X Y ~~ X: The covariance between Y and X can be estimated Y ~ 1: This estimates the intercept for Y (implicitly requires mean structure) Y | a*t1 + b*t2: Y has two thresholds that is a and b Y ~ a * X: Y is regressed on X with coefficient a Y ~ start(a) * X: Y is regressed on X; the starting value used for estimation is a It may not be evident at first, but this model description language actually makes lavaan quite powerful. Wherever you have seen a or b in the previous examples, a variable or constant can be used in their place. The beauty of this is that multiple parameters can be constrained to be equal simply by assigning a single parameter name to them. Using lavaan, we can fit a factor analysis model to our physical functioning dataset with only a few lines of code: phys.func.data <- read.csv('phys_func.csv')[-1] names(phys.func.data) <- LETTERS[1:20] R has a built-in vector named LETTERS, which contains all of the capital letters of the English alphabet. The lower case vector letters contains the lowercase alphabet. We will then describe our model using the lavaan syntax. Here, we have a model of three latent variables, our factors, and each of them has manifest variables. 
Let's take a look at the following example:
model.definition.1 <- '
#Factors
   Cognitive =~ A + Q + R + S
   Legs =~ B + C + D + H + I + J + M + N
   Arms =~ E + F + G + K + L + O + P + T
   #Correlations Between Factors
   Cognitive ~~ Legs
   Cognitive ~~ Arms
   Legs ~~ Arms
'
We then tell lavaan to fit the model as follows:
fit.phys.func <- cfa(model.definition.1, data=phys.func.data, ordered = c('A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T'))
In the previous code, we add an ordered = argument, which tells lavaan that some variables are ordinal in nature. In response, lavaan estimates polychoric correlations for these variables. Polychoric correlations assume that we binned a continuous variable into discrete categories, and attempt to explicitly model correlations assuming that there is some continuous underlying variable. Part of this requires finding thresholds (placed on an arbitrary scale) between each categorical response (for example, threshold 1 falls between responses 1 and 2, and so on). By telling lavaan to treat some variables as categorical, lavaan will also know to use a special estimation method. Lavaan will use diagonally weighted least squares, which does not assume normality and uses the diagonals of the polychoric correlation matrix for weights in the discrepancy function. With five response options, it is questionable as to whether polychoric correlations are truly needed. Some analysts might argue that with many response options, the data can be treated as continuous, but here we use this method to show off lavaan's capabilities. All SEM models in lavaan use the lavaan command. Here, we use the cfa command, which is one of a number of wrapper functions for the lavaan command. Others include sem and growth. These commands differ in the default options passed to the lavaan command. (For full details, see the package documentation.) Summarizing the data, we can see the loadings of each item on the factor as well as the factor intercorrelations. We can also see the thresholds between each category from the polychoric correlations as follows:
summary(fit.phys.func)
We can also assess things such as model fit using the fitMeasures command, which has most of the popularly used fit measures and even a few obscure ones. Here, we tell lavaan to simply extract three measures of model fit as follows:
fitMeasures(fit.phys.func, c('rmsea', 'cfi', 'srmr'))
Collectively, these measures suggest adequate model fit. It is worth noting here that the interpretation of fit measures largely comes from studies using maximum likelihood estimation, and there is some debate as to how well these generalize to other fitting methods. The lavaan package also has the capability to use other estimators that treat the data as truly continuous in nature. For this particular dataset, however, the data are far from multivariate normal, so an estimator such as ML is not really appropriate. Still, if we wanted to use it, the syntax would be as follows:
fit.phys.func.ML <- cfa(model.definition.1, data=phys.func.data, estimator = 'ML')
Comparing OpenMx to lavaan
It can be seen that lavaan has a much simpler syntax that allows you to rapidly fit basic SEM models. However, we were a bit unfair to OpenMx because we used a path model specification for lavaan and a matrix specification for OpenMx. The truth is that OpenMx is still probably a bit wordier than lavaan, but let's apply a path model specification in each to do a fair head-to-head comparison. 
We will use the famous Holzinger-Swineford 1939 dataset here from the lavaan package to do our modeling, as follows: hs.dat <- HolzingerSwineford1939 We will create a new dataset with a shorter name so that we don't have to keep typing HozlingerSwineford1939. Explaining an example in lavaan We will learn to fit the Holzinger-Swineford model in this section. We will start by specifying the SEM model using the lavaan model syntax: hs.model.lavaan <- ' visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed   =~ x7 + x8 + x9   visual ~~ textual visual ~~ speed textual ~~ speed '   fit.hs.lavaan <- cfa(hs.model.lavaan, data=hs.dat, std.lv = TRUE) summary(fit.hs.lavaan) Here, we add the std.lv argument to the fit function, which fixes the variance of the latent variables to 1. We do this instead of constraining the first factor loading on each variable to 1. Only the model coefficients are included for ease of viewing in this book. The result is shown in the following model: > summary(fit.hs.lavaan) …                      Estimate Std.err Z-value P(>|z|) Latent variables: visual =~    x1               0.900   0.081   11.127   0.000    x2               0.498   0.077   6.429   0.000    x3              0.656   0.074   8.817   0.000 textual =~    x4               0.990   0.057   17.474   0.000    x5               1.102   0.063   17.576   0.000    x6               0.917   0.054   17.082   0.000 speed =~    x7               0.619   0.070   8.903   0.000    x8               0.731   0.066   11.090   0.000    x9               0.670   0.065   10.305   0.000   Covariances: visual ~~    textual           0.459   0.064   7.189   0.000    speed             0.471   0.073   6.461   0.000 textual ~~    speed             0.283   0.069   4.117   0.000 Let's compare these results with a model fit in OpenMx using the same dataset and SEM model. Explaining an example in OpenMx The OpenMx syntax for path specification is substantially longer and more explicit. Let's take a look at the following model: hs.model.open.mx <- mxModel("Holzinger Swineford", type="RAM",      manifestVars = names(hs.dat)[7:15], latentVars = c('visual', 'textual', 'speed'),    # Create paths from latent to observed variables mxPath(        from = 'visual',        to = c('x1', 'x2', 'x3'),    free = c(TRUE, TRUE, TRUE),    values = 1          ), mxPath(        from = 'textual',        to = c('x4', 'x5', 'x6'),        free = c(TRUE, TRUE, TRUE),        values = 1      ), mxPath(    from = 'speed',    to = c('x7', 'x8', 'x9'),    free = c(TRUE, TRUE, TRUE),    values = 1      ), # Create covariances among latent variables mxPath(    from = 'visual',    to = 'textual',    arrows=2,    free=TRUE      ), mxPath(        from = 'visual',        to = 'speed',        arrows=2,        free=TRUE      ), mxPath(        from = 'textual',        to = 'speed',        arrows=2,        free=TRUE      ), #Create residual variance terms for the latent variables mxPath(    from= c('visual', 'textual', 'speed'),    arrows=2, #Here we are fixing the latent variances to 1 #These two lines are like st.lv = TRUE in lavaan    free=c(FALSE,FALSE,FALSE),    values=1 ), #Create residual variance terms mxPath( from= c('x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8', 'x9'),    arrows=2, ),    mxData(        observed=cov(hs.dat[,c(7:15)]),        type="cov",        numObs=301    ) )     fit.hs.open.mx <- mxRun(hs.model.open.mx) summary(fit.hs.open.mx) Here are the results of the OpenMx model fit, which look very similar to lavaan's. This gives a long output. 
For ease of viewing, only the most relevant parts of the output are included in the following model (the last column that R prints giving the standard error of estimates is also not shown here): > summary(fit.hs.open.mx) …   free parameters:                            name matrix     row     col Estimate Std.Error 1   Holzinger Swineford.A[1,10]     A     x1 visual 0.9011177 2   Holzinger Swineford.A[2,10]     A     x2 visual 0.4987688 3   Holzinger Swineford.A[3,10]     A     x3 visual 0.6572487 4   Holzinger Swineford.A[4,11]     A     x4 textual 0.9913408 5   Holzinger Swineford.A[5,11]     A     x5 textual 1.1034381 6   Holzinger Swineford.A[6,11]     A     x6 textual 0.9181265 7   Holzinger Swineford.A[7,12]     A     x7   speed 0.6205055 8   Holzinger Swineford.A[8,12]     A     x8 speed 0.7321655 9   Holzinger Swineford.A[9,12]     A     x9   speed 0.6710954 10   Holzinger Swineford.S[1,1]     S     x1     x1 0.5508846 11   Holzinger Swineford.S[2,2]     S     x2     x2 1.1376195 12   Holzinger Swineford.S[3,3]     S    x3     x3 0.8471385 13   Holzinger Swineford.S[4,4]     S     x4     x4 0.3724102 14   Holzinger Swineford.S[5,5]     S     x5     x5 0.4477426 15   Holzinger Swineford.S[6,6]     S     x6     x6 0.3573899 16   Holzinger Swineford.S[7,7]      S     x7     x7 0.8020562 17   Holzinger Swineford.S[8,8]     S     x8     x8 0.4893230 18   Holzinger Swineford.S[9,9]     S     x9     x9 0.5680182 19 Holzinger Swineford.S[10,11]     S visual textual 0.4585093 20 Holzinger Swineford.S[10,12]     S visual   speed 0.4705348 21 Holzinger Swineford.S[11,12]     S textual   speed 0.2829848 In summary, the results agree quite closely. For example, looking at the coefficient for the path going from the latent variable visual to the observed variable x1, lavaan gives an estimate of 0.900 while OpenMx computes a value of 0.901. Summary The lavaan package is user friendly, pretty powerful, and constantly adding new features. Alternatively, OpenMx has a steeper learning curve but tremendous flexibility in what it can do. Thus, lavaan is a bit like a large freezer full of prepackaged microwavable dinners, whereas OpenMx is like a well-stocked pantry with no prepared foods but a full kitchen that will let you prepare it if you have the time and the know-how. To run a quick analysis, it is tough to beat the simplicity of lavaan, especially given its wide range of capabilities. For large complex models, OpenMx may be a better choice. The methods covered here are useful to analyze statistical relationships when one has all of the data from events that have already occurred. Resources for Article: Further resources on this subject: Creating your first heat map in R [article] Going Viral [article] Introduction to S4 Classes [article]

article-image-extending-elasticsearch-scripting
Packt
06 Feb 2015
21 min read
Save for later

Extending ElasticSearch with Scripting

Packt
06 Feb 2015
21 min read
In this article by Alberto Paro, the author of ElasticSearch Cookbook Second Edition, we will cover the following recipes: (For more resources related to this topic, see here.)
Installing additional script plugins
Managing scripts
Sorting data using scripts
Computing return fields with scripting
Filtering a search via scripting
Introduction
ElasticSearch has a powerful way of extending its capabilities with custom scripts, which can be written in several programming languages. The most common ones are Groovy, MVEL, JavaScript, and Python. In this article, we will see how it's possible to create custom scoring algorithms, specially processed return fields, custom sorting, and complex update operations on records. The scripting concept of ElasticSearch can be seen as an advanced stored procedures system in the NoSQL world, so for advanced usage of ElasticSearch, it is very important to master it.
Installing additional script plugins
ElasticSearch provides native scripting (Java code compiled into a JAR) and Groovy, but a lot of interesting languages are also available, such as JavaScript and Python. In older ElasticSearch releases, prior to version 1.4, the official scripting language was MVEL; however, because it was not well maintained by its developers and could not be sandboxed to prevent security issues, it was replaced with Groovy. Groovy scripting is now provided by default in ElasticSearch. The other scripting languages can be installed as plugins.
Getting ready
You will need a working ElasticSearch cluster.
How to do it...
In order to install JavaScript language support for ElasticSearch (1.3.x), perform the following steps: From the command line, simply enter the following command:
bin/plugin --install elasticsearch/elasticsearch-lang-javascript/2.3.0
This will print the following result:
-> Installing elasticsearch/elasticsearch-lang-javascript/2.3.0... Trying http://download.elasticsearch.org/elasticsearch/elasticsearch-lang-javascript/elasticsearch-lang-javascript-2.3.0.zip... Downloading ....DONE Installed lang-javascript
If the installation is successful, the output will end with Installed; otherwise, an error is returned. To install Python language support for ElasticSearch, just enter the following command:
bin/plugin -install elasticsearch/elasticsearch-lang-python/2.3.0
The version number depends on the ElasticSearch version. Take a look at the plugin's web page to choose the correct version.
How it works...
Language plugins allow you to extend the number of supported languages to be used in scripting. During the ElasticSearch startup, an internal ElasticSearch service called PluginService loads all the installed language plugins. In order to install or upgrade a plugin, you need to restart the node. The ElasticSearch community provides common scripting languages (a list of the supported scripting languages is available on the ElasticSearch site plugin page at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-plugins.html), and others are available in GitHub repositories (a simple search on GitHub allows you to find them). The following are the most commonly used languages for scripting: Groovy (http://groovy.codehaus.org/): This language is embedded in ElasticSearch by default. It is a simple language that provides scripting functionalities. This is one of the fastest available language extensions. 
Groovy is a dynamic, object-oriented programming language with features similar to those of Python, Ruby, Perl, and Smalltalk. It also provides support to write a functional code. JavaScript (https://github.com/elasticsearch/elasticsearch-lang-javascript): This is available as an external plugin. The JavaScript implementation is based on Java Rhino (https://developer.mozilla.org/en-US/docs/Rhino) and is really fast. Python (https://github.com/elasticsearch/elasticsearch-lang-python): This is available as an external plugin, based on Jython (http://jython.org). It allows Python to be used as a script engine. Considering several benchmark results, it's slower than other languages. There's more... Groovy is preferred if the script is not too complex; otherwise, a native plugin provides a better environment to implement complex logic and data management. The performance of every language is different; the fastest one is the native Java. In the case of dynamic scripting languages, Groovy is faster, as compared to JavaScript and Python. In order to access document properties in Groovy scripts, the same approach will work as in other scripting languages: doc.score: This stores the document's score. doc['field_name'].value: This extracts the value of the field_name field from the document. If the value is an array or if you want to extract the value as an array, you can use doc['field_name'].values. doc['field_name'].empty: This returns true if the field_name field has no value in the document. doc['field_name'].multivalue: This returns true if the field_name field contains multiple values. If the field contains a geopoint value, additional methods are available, as follows: doc['field_name'].lat: This returns the latitude of a geopoint. If you need the value as an array, you can use the doc['field_name'].lats method. doc['field_name'].lon: This returns the longitude of a geopoint. If you need the value as an array, you can use the doc['field_name'].lons method. doc['field_name'].distance(lat,lon): This returns the plane distance, in miles, from a latitude/longitude point. If you need to calculate the distance in kilometers, you should use the doc['field_name'].distanceInKm(lat,lon) method. doc['field_name'].arcDistance(lat,lon): This returns the arc distance, in miles, from a latitude/longitude point. If you need to calculate the distance in kilometers, you should use the doc['field_name'].arcDistanceInKm(lat,lon) method. doc['field_name'].geohashDistance(geohash): This returns the distance, in miles, from a geohash value. If you need to calculate the same distance in kilometers, you should use doc['field_name'] and the geohashDistanceInKm(lat,lon) method. By using these helper methods, it is possible to create advanced scripts in order to boost a document by a distance that can be very handy in developing geolocalized centered applications. Managing scripts Depending on your scripting usage, there are several ways to customize ElasticSearch to use your script extensions. In this recipe, we will see how to provide scripts to ElasticSearch via files, indexes, or inline. Getting ready You will need a working ElasticSearch cluster populated with the populate script (chapter_06/populate_aggregations.sh), available at https://github.com/aparo/ elasticsearch-cookbook-second-edition. How to do it... 
To manage scripting, perform the following steps: Dynamic scripting is disabled by default for security reasons; we need to activate it in order to use dynamic scripting languages such as JavaScript or Python. To do this, we need to turn off the disable flag (script.disable_dynamic: false) in the ElasticSearch configuration file (config/elasticseach.yml) and restart the cluster. To increase security, ElasticSearch does not allow you to specify scripts for non-sandbox languages. Scripts can be placed in the scripts directory inside the configuration directory. To provide a script in a file, we'll put a my_script.groovy script in the config/scripts location with the following code content: doc["price"].value * factor If the dynamic script is enabled (as done in the first step), ElasticSearch allows you to store the scripts in a special index, .scripts. To put my_script in the index, execute the following command in the command terminal: curl -XPOST localhost:9200/_scripts/groovy/my_script -d '{ "script":"doc["price"].value * factor" }' The script can be used by simply referencing it in the script_id field; use the following command: curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{ "query": {    "match_all": {} }, "sort": {    "_script" : {      "script_id" : "my_script",      "lang" : "groovy",      "type" : "number",      "ignore_unmapped" : true,      "params" : {        "factor" : 1.1      },      "order" : "asc"    } } }' How it works... ElasticSearch allows you to load your script in different ways; each one of these methods has their pros and cons. The most secure way to load or import scripts is to provide them as files in the config/scripts directory. This directory is continuously scanned for new files (by default, every 60 seconds). The scripting language is automatically detected by the file extension, and the script name depends on the filename. If the file is put in subdirectories, the directory path becomes part of the filename; for example, if it is config/scripts/mysub1/mysub2/my_script.groovy, the script name will be mysub1_mysub2_my_script. If the script is provided via a filesystem, it can be referenced in the code via the "script": "script_name" parameter. Scripts can also be available in the special .script index. These are the REST end points: To retrieve a script, use the following code: GET http://<server>/_scripts/<language>/<id"> To store a script use the following code: PUT http://<server>/_scripts/<language>/<id> To delete a script use the following code: DELETE http://<server>/_scripts/<language>/<id> The indexed script can be referenced in the code via the "script_id": "id_of_the_script" parameter. The recipes that follow will use inline scripting because it's easier to use it during the development and testing phases. Generally, a good practice is to develop using the inline dynamic scripting in a request, because it's faster to prototype. Once the script is ready and no changes are needed, it can be stored in the index since it is simpler to call and manage. In production, a best practice is to disable dynamic scripting and store the script on the disk (generally, dumping the indexed script to disk). See also The scripting page on the ElasticSearch website at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html Sorting data using script ElasticSearch provides scripting support for the sorting functionality. 
In real world applications, there is often a need to modify the default sort by the match score using an algorithm that depends on the context and some external variables. Some common scenarios are given as follows: Sorting places near a point Sorting by most-read articles Sorting items by custom user logic Sorting items by revenue Getting ready You will need a working ElasticSearch cluster and an index populated with the script, which is available at https://github.com/aparo/ elasticsearch-cookbook-second-edition. How to do it... In order to sort using scripting, perform the following steps: If you want to order your documents by the price field multiplied by a factor parameter (that is, sales tax), the search will be as shown in the following code: curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{ "query": {    "match_all": {} }, "sort": {    "_script" : {      "script" : "doc["price"].value * factor",      "lang" : "groovy",      "type" : "number",      "ignore_unmapped" : true,    "params" : {        "factor" : 1.1      },            "order" : "asc"        }    } }' In this case, we have used a match_all query and a sort script. If everything is correct, the result returned by ElasticSearch should be as shown in the following code: { "took" : 7, "timed_out" : false, "_shards" : {    "total" : 5,    "successful" : 5,    "failed" : 0 }, "hits" : {    "total" : 1000,    "max_score" : null,    "hits" : [ {      "_index" : "test-index",      "_type" : "test-type",      "_id" : "161",      "_score" : null, "_source" : … truncated …,      "sort" : [ 0.0278578661440021 ]    }, {      "_index" : "test-index",      "_type" : "test-type",      "_id" : "634",      "_score" : null, "_source" : … truncated …,     "sort" : [ 0.08131364254827411 ]    }, {      "_index" : "test-index",      "_type" : "test-type",      "_id" : "465",      "_score" : null, "_source" : … truncated …,      "sort" : [ 0.1094966959069832 ]    } ] } } How it works... The sort scripting allows you to define several parameters, as follows: order (default "asc") ("asc" or "desc"): This determines whether the order must be ascending or descending. script: This contains the code to be executed. type: This defines the type to convert the value. params (optional, a JSON object): This defines the parameters that need to be passed. lang (by default, groovy): This defines the scripting language to be used. ignore_unmapped (optional): This ignores unmapped fields in a sort. This flag allows you to avoid errors due to missing fields in shards. Extending the sort with scripting allows the use of a broader approach to score your hits. ElasticSearch scripting permits the use of every code that you want. You can create custom complex algorithms to score your documents. There's more... 
Groovy provides a lot of built-in functions (mainly taken from Java's Math class) that can be used in scripts, as shown in the following list:
time(): The current time in milliseconds
sin(a): Returns the trigonometric sine of an angle
cos(a): Returns the trigonometric cosine of an angle
tan(a): Returns the trigonometric tangent of an angle
asin(a): Returns the arc sine of a value
acos(a): Returns the arc cosine of a value
atan(a): Returns the arc tangent of a value
toRadians(angdeg): Converts an angle measured in degrees to an approximately equivalent angle measured in radians
toDegrees(angrad): Converts an angle measured in radians to an approximately equivalent angle measured in degrees
exp(a): Returns Euler's number raised to the power of a value
log(a): Returns the natural logarithm (base e) of a value
log10(a): Returns the base 10 logarithm of a value
sqrt(a): Returns the correctly rounded positive square root of a value
cbrt(a): Returns the cube root of a double value
IEEEremainder(f1, f2): Computes the remainder operation on two arguments, as prescribed by the IEEE 754 standard
ceil(a): Returns the smallest (closest to negative infinity) value that is greater than or equal to the argument and is equal to a mathematical integer
floor(a): Returns the largest (closest to positive infinity) value that is less than or equal to the argument and is equal to a mathematical integer
rint(a): Returns the value that is closest in value to the argument and is equal to a mathematical integer
atan2(y, x): Returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta)
pow(a, b): Returns the value of the first argument raised to the power of the second argument
round(a): Returns the closest integer to the argument
random(): Returns a random double value
abs(a): Returns the absolute value of a value
max(a, b): Returns the greater of the two values
min(a, b): Returns the smaller of the two values
ulp(d): Returns the size of the unit in the last place of the argument
signum(d): Returns the signum function of the argument
sinh(x): Returns the hyperbolic sine of a value
cosh(x): Returns the hyperbolic cosine of a value
tanh(x): Returns the hyperbolic tangent of a value
hypot(x,y): Returns sqrt(x^2+y^2) without an intermediate overflow or underflow
If you want to retrieve records in a random order, you can use a script with a random method, as shown in the following code:
curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{
"query": {
   "match_all": {}
},
"sort": {
   "_script" : {
     "script" : "Math.random()",
     "lang" : "groovy",
     "type" : "number",
     "params" : {}
   }
}
}'
In this example, for every hit, the new sort value is computed by executing the Math.random() scripting function.
See also
The official ElasticSearch documentation at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html
Computing return fields with scripting
ElasticSearch allows you to define complex expressions that can be used to return a new calculated field value. These special fields are called script_fields, and they can be expressed with a script in every available ElasticSearch scripting language. 
Getting ready You will need a working ElasticSearch cluster and an index populated with the script (chapter_06/populate_aggregations.sh), which is available at https://github.com/aparo/ elasticsearch-cookbook-second-edition. How to do it... In order to compute return fields with scripting, perform the following steps: Return the following script fields: "my_calc_field": This concatenates the text of the "name" and "description" fields "my_calc_field2": This multiplies the "price" value by the "discount" parameter From the command line, execute the following code: curl -XGET 'http://127.0.0.1:9200/test-index/test-type/ _search?&pretty=true&size=3' -d '{ "query": {    "match_all": {} }, "script_fields" : {    "my_calc_field" : {      "script" : "doc["name"].value + " -- " + doc["description"].value"    },    "my_calc_field2" : {      "script" : "doc["price"].value * discount",      "params" : {       "discount" : 0.8      }    } } }' If everything works all right, this is how the result returned by ElasticSearch should be: { "took" : 4, "timed_out" : false, "_shards" : {    "total" : 5,    "successful" : 5,    "failed" : 0 }, "hits" : {    "total" : 1000,    "max_score" : 1.0,    "hits" : [ {      "_index" : "test-index",      "_type" : "test-type",      "_id" : "4",      "_score" : 1.0,      "fields" : {        "my_calc_field" : "entropic -- accusantium",        "my_calc_field2" : 5.480038242170081      }    }, {      "_index" : "test-index",      "_type" : "test-type",      "_id" : "9",      "_score" : 1.0,      "fields" : {        "my_calc_field" : "frankie -- accusantium",        "my_calc_field2" : 34.79852410178313      }    }, {      "_index" : "test-index",      "_type" : "test-type",      "_id" : "11",      "_score" : 1.0,      "fields" : {        "my_calc_field" : "johansson -- accusamus",        "my_calc_field2" : 11.824173084636591      }    } ] } } How it works... The scripting fields are similar to executing an SQL function on a field during a select operation. In ElasticSearch, after a search phase is executed and the hits to be returned are calculated, if some fields (standard or script) are defined, they are calculated and returned. The script field, which can be defined with all the supported languages, is processed by passing a value to the source of the document and, if some other parameters are defined in the script (in the discount factor example), they are passed to the script function. The script function is a code snippet; it can contain everything that the language allows you to write, but it must be evaluated to a value (or a list of values). See also The Installing additional script plugins recipe in this article to install additional languages for scripting The Sorting using script recipe to have a reference of the extra built-in functions in Groovy scripts Filtering a search via scripting ElasticSearch scripting allows you to extend the traditional filter with custom scripts. Using scripting to create a custom filter is a convenient way to write scripting rules that are not provided by Lucene or ElasticSearch, and to implement business logic that is not available in the query DSL. Getting ready You will need a working ElasticSearch cluster and an index populated with the (chapter_06/populate_aggregations.sh) script, which is available at https://github.com/aparo/ elasticsearch-cookbook-second-edition. How to do it... 
In order to filter a search using a script, perform the following steps: Write a search with a filter that filters out a document with the value of age less than the parameter value: curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{ "query": {    "filtered": {      "filter": {        "script": {          "script": "doc["age"].value > param1",          "params" : {            "param1" : 80          }        }      },      "query": {        "match_all": {}      }    } } }' In this example, all the documents in which the value of age is greater than param1 are qualified to be returned. If everything works correctly, the result returned by ElasticSearch should be as shown here: { "took" : 30, "timed_out" : false, "_shards" : {    "total" : 5,    "successful" : 5,    "failed" : 0 }, "hits" : {    "total" : 237,    "max_score" : 1.0,    "hits" : [ {      "_index" : "test-index",      "_type" : "test-type",      "_id" : "9",      "_score" : 1.0, "_source" :{ … "age": 83, … }    }, {      "_index" : "test-index",      "_type" : "test-type",      "_id" : "23",      "_score" : 1.0, "_source" : { … "age": 87, … }    }, {      "_index" : "test-index",      "_type" : "test-type",      "_id" : "47",      "_score" : 1.0, "_source" : {…. "age": 98, …}    } ] } } How it works... The script filter is a language script that returns a Boolean value (true/false). For every hit, the script is evaluated, and if it returns true, the hit passes the filter. This type of scripting can only be used as Lucene filters, not as queries, because it doesn't affect the search (the exceptions are constant_score and custom_filters_score). These are the scripting fields: script: This contains the code to be executed params: These are optional parameters to be passed to the script lang (defaults to groovy): This defines the language of the script The script code can be any code in your preferred and supported scripting language that returns a Boolean value. There's more... Other languages are used in the same way as Groovy. For the current example, I have chosen a standard comparison that works in several languages. To execute the same script using the JavaScript language, use the following code: curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{ "query": {    "filtered": {      "filter": {        "script": {          "script": "doc["age"].value > param1",          "lang":"javascript",          "params" : {            "param1" : 80          }        }      },      "query": {        "match_all": {}      }    } } }' For Python, use the following code: curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{ "query": {    "filtered": {      "filter": {        "script": {          "script": "doc["age"].value > param1",          "lang":"python",          "params" : {            "param1" : 80          }        }      },      "query": {        "match_all": {}      }    } } }' See also The Installing additional script plugins recipe in this article to install additional languages for scripting The Sorting data using script recipe in this article to get a reference of the extra built-in functions in Groovy scripts Summary In this article you have learnt the ways you can use scripting to extend the ElasticSearch functional capabilities using different programming languages. 
Resources for Article: Further resources on this subject: Indexing the Data [Article] Low-Level Index Control [Article] Designing Puppet Architectures [Article]

article-image-basic-and-interactive-plots
Packt
06 Feb 2015
19 min read
Save for later

Basic and Interactive Plots

Packt
06 Feb 2015
19 min read
In this article by Atmajitsinh Gohil, author of the book R Data Visualization Cookbook, we will cover the following topics: A simple bar plot A simple line plot Line plot to tell an effective story Merging histograms Making an interactive bubble plot (For more resources related to this topic, see here.) The main motivation behind this article is to introduce the basics of plotting in R and an element of interactivity via the googleVis package. The basic plots are important as many packages developed in R use basic plot arguments and hence understanding them creates a good foundation for new R users. We will start by exploring the scatter plots in R, which are the most basic plots for exploratory data analysis, and then delve into interactive plots. Every section will start with an introduction to basic R plots and we will build interactive plots thereafter. We will utilize the power of R analytics and implement them using the googleVis package to introduce the element of interactivity. The googleVis package is developed by Google and it uses the Google Chart API to create interactive plots. There are a range of plots available with the googleVis package and this provides us with an advantage to plot the same data on various plots and select the one that delivers an effective message. The package undergoes regular updates and releases, and new charts are implemented with every release. The readers should note that there are other alternatives available to create interactive plots in R, but it is not possible to explore all of them and hence I have selected googleVis to display interactive elements in a chart. I have selected these purely based on my experience with interactivity in plots. The other good interactive package is offered by GGobi. A simple bar plot A bar plot can often be confused with histograms. Histograms are used to study the distribution of data whereas bar plots are used to study categorical data. Both the plots may look similar to the naked eye but the main difference is that the width of a bar plot is not of significance, whereas in histograms the width of the bars signifies the frequency of data. In this recipe, I have made use of the infant mortality rate in India. The data is made available by the Government of India. The main objective is to study the basics of a bar plot in R as shown in the following screenshot: How to do it… We start the recipe by importing our data in R using the read.csv() function. R will search for the data under the current directory, and hence we use the setwd() function to set our working directory: setwd("D:/book/scatter_Area/chapter2") data = read.csv("infant.csv", header = TRUE) Once we import the data, we would like to process the data by ordering it. We order the data using the order() function in R. We would like R to order the column Total2011 in a decreasing order: data = data[order(data$Total2011, decreasing = TRUE),] We use the ifelse() function to create a new column. We would utilize this new column to add different colors to bars in our plot. We could also write a loop in R to do this task but we will keep this for later. The ifelse() function is quick and easy. We instruct R to assign yes if values in the column Total2011 are more than 12.2 and no otherwise. The 12.2 value is not randomly chosen but is the average infant mortality rate of India: new = ifelse(data$Total2011>12.2,"yes","no") Next, we would like to join the vector of yes and no to our original dataset. In R, we can join columns using the cbind() function. 
Rows can be combined using rbind(): data = cbind(data,new) When we initially plot the bar plot, we observe that we need more space at the bottom of the plot. We adjust the margins of a plot in R by passing the mar() argument within the par() function. The mar() function uses four arguments: bottom, left, top, and right spacing: par(mar = c(10,5,5,5)) Next, we generate a bar plot in R using the barplot() function. The abline() function is used to add a horizontal line on the bar plot: barplot(data$Total2011, las = 2, names.arg= data$India,width =0.80, border = NA,ylim=c(0,20), col = "#e34a33", main = "InfantMortality Rate of India in 2011")abline(h = 12.2, lwd =2, col = "white", lty =2) How it works… The order() function uses permutation to rearrange (decreasing or increasing) the rows based on the variable. We would like to plot the bars from highest to lowest, and hence we require to arrange the data. The ifelse() function is used to generate a new column. We would use this column under the There's more… section of this recipe. The first argument under the ifelse() function is the logical test to be performed. The second argument is the value to be assigned if the test is true, and the third argument is the value to be assigned if the logical test fails. The first argument in the barplot() function defines the height of the bars and horiz = TRUE (not used in our code) instructs R to plot the bars horizontally. The default setting in R will plot the bars vertically. The names.arg argument is used to label the bars. We also specify border = NA to remove the borders and las = 2 is specified to apply the direction to our labels. Try replacing the las values with 1,2,3, or 4 and observe how the orientation of our labels change.. The first argument in the abline() function assigns the position where the line is drawn, that is, vertical or horizontal. The lwd, lty, and col arguments are used to define the width, line type, and color of the line. There's more… While plotting a bar plot, it's a good practice to order the data in ascending or descending order. An unordered bar plot does not convey the right message and the plot is hard to read when there are more bars involved. When we observe a plot, we are interested to get the most information out, and ordering the data is the first step toward achieving this objective. We have not specified how we can use the ifelse() and cbind() functions in the plot. If we would like to color the plot with different colors to let the readers know which states have high infant mortality above the country level, we can do this by pasting col = (data$new) in place of col = "#e34a33". See also New York Times has a very interesting implementation of an interactive bar chart and can be accessed at http://www.nytimes.com/interactive/2007/09/28/business/20070930_SAFETY_GRAPHIC.html A simple line plot Line plots are simply lines connecting all the x and y dots. They are very easy to interpret and are widely used to display an upward or downward trend in data. In this recipe, we will use the googleVis package and create an interactive R line plot. We will learn how we can emphasize on certain variables in our data. The following line plot shows fertility rate: Getting ready We will use the googleVis package to generate a line plot. How to do it… In order to construct a line chart, we will install and load the googleVis package in R. 
We would also import the fertility data using the read.csv() function: install.packages("googleVis") library(googleVis) frt = read.csv("fertility.csv", header = TRUE, sep =",") The fertility data is downloaded from the OECD website. We can construct our line object using the gvisLineChart() function: gvisLineChart(frt, xvar = "Year","yvar=c("Australia","Austria","Belgium","Canada","Chile","OECD34"), options = list( width = 1100, height= 500, backgroundColor = " "#FFFF99",title ="Fertility Rate in OECD countries" , vAxis = "{title : 'Total Fertility " Rate',gridlines:{color:'#DEDECE',count : 4}, ticks : "   [0,1,2,3,4]}", series = "{0:{color:'black', visibleInLegend :false},        1:{color:'BDBD9D', visibleInLegend :false},        2:{color:'BDBD9D', visibleInLegend :false},            3:{color:'BDBD9D', visibleInLegend :false},           4:{color:'BDBD9D', visibleInLegend :false},          34:{color:'3333FF', visibleInLegend :true}}")) We can construct the visualization using the plot() function in R: plot(line) How it works… The first three arguments of the gvisLineChart() function are the data and the name of the columns to be plotted on the x-axis and y-axis. The options argument lists the chart API options to add and modify elements of a chart. For the purpose of this recipe, we will use part of the dataset. Hence, while we assign the series to be plotted under yvar = c(), we will specify the column names that we would like to be plotted in our chart. Note that the series starts at 0, and hence Australia, which is the first column, is in fact series 0 and not 1. For the purpose of this exercise, let's assume that we would like to demonstrate the mean fertility rate among all OECD economies to our audience. We can achieve this using series {} under option = list(). The series argument will allow us to specify or customize a specific series in our dataset. Under the gvisLineChart() function, we instruct the Google Chart API to color OECD series (series 34) and Australia (series 0) with a different color and also make the legend visible only for OECD and not the entire series. It would be best to display all the legends but we use this to show the flexibility that comes with the Google Chart API. Finally, we can use the plot() function to plot the chart in a browser. The following screenshot displays a part of the data. The dim() function gives us a general idea about the dimensions of the fertility data: New York Times Visualization often combines line plots with bar chart and pie charts. Readers should try constructing such visualization. We can use the gvisMerge() function to merge plots. The function allows merging of just two plots and hence the readers would have to use multiple gvisMerge() functions to create a very similar visualization. The same can also be constructed in R but we will lose the interactive element. See also The OECD website provides economic data related to OECD member countries. The data can be freely downloaded from the website http://www.oecd.org/statistics/. New York Times Visualization combines bar charts and line charts and can be accessed at http://www.nytimes.com/imagepages/2009/10/16/business/20091017_CHARTS_GRAPHIC.html. Line plot to tell an effective story In the previous recipe, we learned how to plot a very basic line plot and use some of the options. In this recipe, we will go a step further and make use of specific visual cues such as color and line width for easy interpretation. Line charts are a great tool to visualize time series data. 
The fertility data is discrete but connecting points over time provides our audience with a direction. The visualization shows the amazing progress countries such as Mexico and Turkey have achieved in reducing their fertility rate. OECD defines fertility rate as Refers to the number of children that would be born per woman, assuming no female mortality at child-bearing ages and the age-specific fertility rates of a specified country and reference period. Line plots have been widely used by New York Times to create very interesting infographics. This recipe is inspired by one of the New York Times visualizations. It is very important to understand that many of the infographics created by professionals are created using D3.js or Processing. We will not go into the detail of the same but it is good to know the working of these softwares and how they can be used to create visualizations. Getting ready We would need to install and load the googleVis package to construct a line chart. How to do it… To generate an interactive plot, we will load the fertility data in R using the read.csv() function. To generate a line chart that plots the entire dataset, we will use the gvisLineChart() function: line = gvisLineChart(frt, xvar = "Year", yvar=c("Australia",""Austria","Belgium","Canada","Chile","Czech.Republic", "Denmark","Estonia","Finland","France","Germany","Greece","Hungary"", "Iceland","Ireland","Israel","Italy","Japan","Korea","Luxembourg",""Mexico", "Netherlands","New.Zealand","Norway","Poland","Portugal","Slovakia"","Slovenia", "Spain","Sweden","Switzerland","Turkey","United.Kingdom","United."States","OECD34"), options = list( width = 1200, backgroundColor = "#ADAD85",title " ="Fertility Rate in OECD countries" , vAxis = "{gridlines:{color:'#DEDECE',count : 3}, ticks : " [0,1,2,3,4]}", series = "{0:{color:'BDBD9D', visibleInLegend :false}, 20:{color:'009933', visibleInLegend :true}, 31:{color:'996600', visibleInLegend :true}, 34:{color:'3333FF', visibleInLegend :true}}")) To display our visualization in a new browser, we use the generic R plot() function: plot(line) How it works… The arguments passed in the gvisLineChart() function, are exactly the same as discussed under the simple line plot with some minor changes. We would like to plot the entire data for this exercise, and hence we have to state all the column names in yvar =c(). Also, we would like to color all the series with the same color but highlight Mexico, Turkey, and OECD average. We have achieved this in the previous code using series {}, and further specify and customize colors and legend visibility for specific countries. In this particular plot, we have made use of the same color for all the economies but have highlighted Mexico and Turkey to signify the development and growth that took place in the 5-year period. It would also be effective if our audience could compare the OECD average with Mexico and Turkey. This provides the audience with a benchmark they can compare with. If we plot all the legends, it may make the plot too crowded and 34 legends may not make a very attractive plot. We could avoid this by only making specific legends visible. See also D3 is a great tool to develop interactive visualization and this can be accessed at http://d3js.org/. Processing is an open source software developed by MIT and can be downloaded from https://processing.org/. A good resource to pick colors and use them in our plots is the following link: http://www.w3schools.com/tags/ref_colorpicker.asp. 
I have used New York Times infographics as an inspiration for this plot. You can find a collection of visualizations put out by the New York Times in 2011 at http://www.smallmeans.com/new-york-times-infographics/.

Merging histograms
Histograms help in studying the underlying distribution. They are even more useful when we compare more than one histogram on the same page; this gives us greater insight into the skewness and overall shape of each distribution. In this recipe, we will study how to plot a histogram using the googleVis package and how to merge more than one histogram on the same page. We will merge only two plots, but we can merge more and adjust the width of each plot, which makes it easier to compare all the plots on the same page. The following plot shows two merged histograms:

How to do it…
In order to generate a histogram, we will install the googleVis package as well as load it in R:

install.packages("googleVis")
library(googleVis)

We have downloaded the prices of two different stocks and calculated their daily returns over the entire period. We can load the data in R using the read.csv() function. Our main aim in this recipe is to plot two different histograms side by side in a browser. Hence, we need to divide our data into three different data frames. For the purpose of this recipe, we will plot the aapl and msft data frames:

stk = read.csv("stock_cor.csv", header = TRUE, sep = ",")
aapl = data.frame(stk$AAPL)
msft = data.frame(stk$MSFT)
googl = data.frame(stk$GOOGL)

To generate the histograms, we implement the gvisHistogram() function:

al = gvisHistogram(aapl, options = list(
  histogram = "{bucketSize : 1}",
  legend = "none",
  title = 'Distribution of AAPL Returns',
  width = 500,
  hAxis = "{showTextEvery: 5, title: 'Returns'}",
  vAxis = "{gridlines : {count:4}, title : 'Frequency'}"))

mft = gvisHistogram(msft, options = list(
  histogram = "{bucketSize : 1}",
  legend = "none",
  title = 'Distribution of MSFT Returns',
  width = 500,
  hAxis = "{showTextEvery: 5, title: 'Returns'}",
  vAxis = "{gridlines : {count:4}, title : 'Frequency'}"))

We combine the two gvis objects in one browser using the gvisMerge() function:

mrg = gvisMerge(al, mft, horizontal = TRUE)
plot(mrg)

How it works…
The data.frame() function is used to construct a data frame in R. We need this step because we do not want to plot all three histograms on the same plot. Note the use of the $ notation in the data.frame() function. The first argument in the gvisHistogram() function is our data stored as a data frame. We can display the individual histograms using the plot(al) and plot(mft) functions, but in this recipe we plot only the final, merged output. Most of the attributes of the histogram function are the same as those discussed in previous recipes. The histogram functionality uses an algorithm to create buckets, but we can control this using bucketSize, as in histogram = "{bucketSize : 1}". Try using different bucket sizes and observe how the buckets in the histograms change. More options related to histograms can be found under the Controlling Buckets section at https://developers.google.com/chart/interactive/docs/gallery/histogram#Buckets. We have also utilized showTextEvery, which is specific to histograms. This option lets us specify how many horizontal axis labels to show; we have used 5 to make the histogram more compact.
Our main objective is to observe the distribution, and the plot serves this purpose. Finally, we implement plot() to display the chart in our favorite browser. We follow the same steps to plot the return distribution of Microsoft (MSFT). Now, we would like to place both plots side by side and view the differences in the distributions. We use the gvisMerge() function to generate the histograms side by side. In our recipe, we have two plots, for AAPL and MSFT. The default setting stacks the charts vertically, but we can specify horizontal = TRUE to place them side by side.

Making an interactive bubble plot
My first encounter with a bubble plot was while watching a TED video by Hans Rosling. The video led me to search for ways of creating bubble plots in R; a very good introduction is available on the Flowing Data website. The advantage of a bubble plot is that it allows us to visualize a third variable, which in our case is encoded in the size of the bubble. In this recipe, I have made use of the googleVis package to draw the bubble plot, but you can also implement this in base R. The advantage of the Google Chart API is its interactivity and the ease with which charts can be attached to a web page. Also note that we could use squares instead of circles, but this is not implemented in the Google Chart API yet. In order to implement a bubble plot, I have downloaded the crime dataset by state. The details regarding the source and definition of the crime data are available in the crime.txt file and are shown in the following screenshot:

How to do it…
As with all the plots in this article, we will install and load the googleVis package. We will also import our data file in R using the read.csv() function:

crm = read.csv("crimeusa.csv", header = TRUE, sep = ",")

We can construct our bubble chart using the gvisBubbleChart() function in R:

bub1 = gvisBubbleChart(crm, idvar = "States", xvar = "Robbery",
  yvar = "Burglary", sizevar = "Population", colorvar = "Year",
  options = list(legend = "none", width = 900, height = 600,
    title = "Crime per State in 2012",
    sizeAxis = "{maxSize : 40, minSize : 0.5}",
    vAxis = "{title : 'Burglary'}",
    hAxis = "{title : 'Robbery'}"))

bub2 = gvisBubbleChart(crm, idvar = "States", xvar = "Robbery",
  yvar = "Burglary", sizevar = "Population",
  options = list(legend = "none", width = 900, height = 600,
    title = "Crime per State in 2012",
    sizeAxis = "{maxSize : 40, minSize : 0.5}",
    vAxis = "{title : 'Burglary'}",
    hAxis = "{title : 'Robbery'}"))

How it works…
The gvisBubbleChart() function uses six attributes to create a bubble chart, which are as follows:
data: This is the data defined as a data frame, in our example, crm
idvar: This is the vector that is used to assign IDs to the bubbles, in our example, States
xvar: This is the column in the data to plot on the x-axis, in our example, Robbery
yvar: This is the column in the data to plot on the y-axis, in our example, Burglary
sizevar: This is the column used to define the size of the bubbles
colorvar: This is the column used to define the color of the bubbles
We can define the minimum and maximum sizes of each bubble using minSize and maxSize, respectively, under options(). Note that we have used gvisMerge to portray the differences between the two bubble plots. In the plot on the right (bub2), we have not made use of colorvar, and hence all the bubbles are of the same color.

There's more…
The Google Chart API makes it easy for us to plot a bubble chart, but the same can be achieved using the basic R plot function. We can make use of the symbols() function to create such a plot.
The symbols need not be bubbles; they can be squares as well. By this time, you should have watched Hans Rosling's TED lecture and might be wondering how you could create a motion chart with bubbles floating around. The Google Chart API has the ability to create motion charts, and readers can use the googleVis reference manual to learn about this.

See also
The TED video by Hans Rosling can be accessed at http://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen
The Flowing Data website generates bubble charts using the basic R plot function and can be accessed at http://flowingdata.com/2010/11/23/how-to-make-bubble-charts/
The animated bubble chart by the New York Times can be accessed at http://2010games.nytimes.com/medals/map.html

Summary
This article introduced some of the basic R plots, such as line and bar charts. It also discussed the basic elements of interactive plots using the googleVis package in R and is a useful resource for understanding basic R plotting techniques.

Resources for Article:
Further resources on this subject:
Using R for Statistics, Research, and Graphics [article]
Data visualization [article]
Visualization as a Tool to Understand Data [article]
Three.js - Materials and Texture

In this article by Jos Dirksen author of the book Three.js Cookbook, we will learn how Three.js offers a large number of different materials and supports many different types of textures. These textures provide a great way to create interesting effects and graphics. In this article, we'll show you recipes that allow you to get the most out of these components provided by Three.js. (For more resources related to this topic, see here.) Using HTML canvas as a texture Most often when you use textures, you use static images. With Three.js, however, it is also possible to create interactive textures. In this recipe, we will show you how you can use an HTML5 canvas element as an input for your texture. Any change to this canvas is automatically reflected after you inform Three.js about this change in the texture used on the geometry. Getting ready For this recipe, we need an HTML5 canvas element that can be displayed as a texture. We can create one ourselves and add some output, but for this recipe, we've chosen something else. We will use a simple JavaScript library, which outputs a clock to a canvas element. The resulting mesh will look like this (see the 04.03-use-html-canvas-as-texture.html example): The JavaScript used to render the clock was based on the code from this site: http://saturnboy.com/2013/10/html5-canvas-clock/. To include the code that renders the clock in our page, we need to add the following to the head element: <script src="../libs/clock.js"></script> How to do it... To use a canvas as a texture, we need to perform a couple of steps: The first thing we need to do is create the canvas element: var canvas = document.createElement('canvas'); canvas.width=512; canvas.height=512; Here, we create an HTML canvas element programmatically and define a fixed width. Now that we've got a canvas, we need to render the clock that we use as the input for this recipe on it. The library is very easy to use; all you have to do is pass in the canvas element we just created: clock(canvas); At this point, we've got a canvas that renders and updates an image of a clock. What we need to do now is create a geometry and a material and use this canvas element as a texture for this material: var cubeGeometry = new THREE.BoxGeometry(10, 10, 10); var cubeMaterial = new THREE.MeshLambertMaterial(); cubeMaterial.map = new THREE.Texture(canvas); var cube = new THREE.Mesh(cubeGeometry, cubeMaterial); To create a texture from a canvas element, all we need to do is create a new instance of THREE.Texture and pass in the canvas element we created in step 1. We assign this texture to the cubeMaterial.map property, and that's it. If you run the recipe at this step, you might see the clock rendered on the sides of the cubes. However, the clock won't update itself. We need to tell Three.js that the canvas element has been changed. We do this by adding the following to the rendering loop: cubeMaterial.map.needsUpdate = true; This informs Three.js that our canvas texture has changed and needs to be updated the next time the scene is rendered. With these four simple steps, you can easily create interactive textures and use everything you can create on a canvas element as a texture in Three.js. How it works... How this works is actually pretty simple. Three.js uses WebGL to render scenes and apply textures. WebGL has native support for using HTML canvas element as textures, so Three.js just passes on the provided canvas element to WebGL and it is processed as any other texture. 
Making part of an object transparent You can create a lot of interesting visualizations using the various materials available with Three.js. In this recipe, we'll look at how you can use the materials available with Three.js to make part of an object transparent. This will allow you to create complex-looking geometries with relative ease. Getting ready Before we dive into the required steps in Three.js, we first need to have the texture that we will use to make an object partially transparent. For this recipe, we will use the following texture, which was created in Photoshop: You don't have to use Photoshop; the only thing you need to keep in mind is that you use an image with a transparent background. Using this texture, in this recipe, we'll show you how you can create the following (04.08-make-part-of-object-transparent.html): As you can see in the preceeding, only part of the sphere is visible, and you can look through the sphere to see the back at the other side of the sphere. How to do it... Let's look at the steps you need to take to accomplish this: The first thing we do is create the geometry. For this recipe, we use THREE.SphereGeometry: var sphereGeometry = new THREE.SphereGeometry(6, 20, 20); Just like all the other recipes, you can use whatever geometry you want. In the second step, we create the material: var mat = new THREE.MeshPhongMaterial(); mat.map = new THREE.ImageUtils.loadTexture( "../assets/textures/partial-transparency.png"); mat.transparent = true; mat.side = THREE.DoubleSide; mat.depthWrite = false; mat.color = new THREE.Color(0xff0000); As you can see in this fragment, we create THREE.MeshPhongMaterial and load the texture we saw in the Getting ready section of this recipe. To render this correctly, we also need to set the side property to THREE.DoubleSide so that the inside of the sphere is also rendered, and we need to set the depthWrite property to false. This will tell WebGL that we still want to test our vertices against the WebGL depth buffer, but we don't write to it. Often, you need to set this to false when working with more complex transparent objects or particles. Finally, add the sphere to the scene: var sphere = new THREE.Mesh(sphereGeometry, mat); scene.add(sphere); With these simple steps, you can create really interesting effects by just experimenting with textures and geometries. There's more With Three.js, it is possible to repeat textures (refer to the Setup repeating textures recipe). You can use this to create interesting-looking objects such as this: The code required to set a texture to repeat is the following: var mat = new THREE.MeshPhongMaterial(); mat.map = new THREE.ImageUtils.loadTexture( "../assets/textures/partial-transparency.png"); mat.transparent = true; mat.map.wrapS = mat.map.wrapT = THREE.RepeatWrapping; mat.map.repeat.set( 4, 4 ); mat.depthWrite = false; mat.color = new THREE.Color(0x00ff00); By changing the mat.map.repeat.set values, you define how often the texture is repeated. Using a cubemap to create reflective materials With the approach Three.js uses to render scenes in real time, it is difficult and very computationally intensive to create reflective materials. Three.js, however, provides a way you can cheat and approximate reflectivity. For this, Three.js uses cubemaps. In this recipe, we'll explain how to create cubemaps and use them to create reflective materials. Getting ready A cubemap is a set of six images that can be mapped to the inside of a cube. 
They can be created from a panorama picture and look something like this: In Three.js, we map such a map on the inside of a cube or sphere and use that information to calculate reflections. The following screenshot (example 04.10-use-reflections.html) shows what this looks like when rendered in Three.js: As you can see in the preceeding screenshot, the objects in the center of the scene reflect the environment they are in. This is something often called a skybox. To get ready, the first thing we need to do is get a cubemap. If you search on the Internet, you can find some ready-to-use cubemaps, but it is also very easy to create one yourself. For this, go to http://gonchar.me/panorama/. On this page, you can upload a panoramic picture and it will be converted to a set of pictures you can use as a cubemap. For this, perform the following steps: First, get a 360 degrees panoramic picture. Once you have one, upload it to the http://gonchar.me/panorama/ website by clicking on the large OPEN button:  Once uploaded, the tool will convert the panorama picture to a cubemap as shown in the following screenshot:  When the conversion is done, you can download the various cube map sites. The recipe in this book uses the naming convention provided by Cube map sides option, so download them. You'll end up with six images with names such as right.png, left.png, top.png, bottom.png, front.png, and back.png. Once you've got the sides of the cubemap, you're ready to perform the steps in the recipe. How to do it... To use the cubemap we created in the previous section and create reflecting material,we need to perform a fair number of steps, but it isn't that complex: The first thing you need to do is create an array from the cubemap images you downloaded: var urls = [ '../assets/cubemap/flowers/right.png', '../assets/cubemap/flowers/left.png', '../assets/cubemap/flowers/top.png', '../assets/cubemap/flowers/bottom.png', '../assets/cubemap/flowers/front.png', '../assets/cubemap/flowers/back.png' ]; With this array, we can create a cubemap texture like this: var cubemap = THREE.ImageUtils.loadTextureCube(urls); cubemap.format = THREE.RGBFormat; From this cubemap, we can use THREE.BoxGeometry and a custom THREE.ShaderMaterial object to create a skybox (the environment surrounding our meshes): var shader = THREE.ShaderLib[ "cube" ]; shader.uniforms[ "tCube" ].value = cubemap; var material = new THREE.ShaderMaterial( { fragmentShader: shader.fragmentShader, vertexShader: shader.vertexShader, uniforms: shader.uniforms, depthWrite: false, side: THREE.DoubleSide }); // create the skybox var skybox = new THREE.Mesh( new THREE.BoxGeometry( 10000, 10000, 10000 ), material ); scene.add(skybox); Three.js provides a custom shader (a piece of WebGL code) that we can use for this. As you can see in the code snippet, to use this WebGL code, we need to define a THREE.ShaderMaterial object. With this material, we create a giant THREE.BoxGeometry object that we add to scene. Now that we've created the skybox, we can define the reflecting objects: var sphereGeometry = new THREE.SphereGeometry(4,15,15); var envMaterial = new THREE.MeshBasicMaterial( {envMap:cubemap}); var sphere = new THREE.Mesh(sphereGeometry, envMaterial); As you can see, we also pass in the cubemap we created as a property (envmap) to the material. This informs Three.js that this object is positioned inside a skybox, defined by the images that make up cubemap. 
The last step is to add the object to the scene, and that's it: scene.add(sphere); In the example in the beginning of this recipe, you saw three geometries. You can use this approach with all different types of geometries. Three.js will determine how to render the reflective area. How it works... Three.js itself doesn't really do that much to render the cubemap object. It relies on a standard functionality provided by WebGL. In WebGL, there is a construct called samplerCube. With samplerCube, you can sample, based on a specific direction, which color matches the cubemap object. Three.js uses this to determine the color value for each part of the geometry. The result is that on each mesh, you can see a reflection of the surrounding cubemap using the WebGL textureCube function. In Three.js, this results in the following call (taken from the WebGL shader in GLSL): vec4 cubeColor = textureCube( tCube, vec3( -vReflect.x, vReflect.yz ) ); A more in-depth explanation on how this works can be found at http://codeflow.org/entries/2011/apr/18/advanced-webgl-part-3-irradiance-environment-map/#cubemap-lookup. There's more... In this recipe, we created the cubemap object by providing six separate images. There is, however, an alternative way to create the cubemap object. If you've got a 360 degrees panoramic image, you can use the following code to directly create a cubemap object from that image: var texture = THREE.ImageUtils.loadTexture( 360-degrees.png', new THREE.UVMapping()); Normally when you create a cubemap object, you use the code shown in this recipe to map it to a skybox. This usually gives the best results but requires some extra code. You can also use THREE.SphereGeometry to create a skybox like this: var mesh = new THREE.Mesh( new THREE.SphereGeometry( 500, 60, 40 ), new THREE.MeshBasicMaterial( { map: texture })); mesh.scale.x = -1; This applies the texture to a sphere and with mesh.scale, turns this sphere inside out. Besides reflection, you can also use a cubemap object for refraction (think about light bending through water drops or glass objects): All you have to do to make a refractive material is load the cubemap object like this: var cubemap = THREE.ImageUtils.loadTextureCube(urls, new THREE.CubeRefractionMapping()); And define the material in the following way: var envMaterial = new THREE.MeshBasicMaterial({envMap:cubemap}); envMaterial.refractionRatio = 0.95; Summary In this article, we learned about the different textures and materials supported by Three.js Resources for Article:  Further resources on this subject: Creating the maze and animating the cube [article] Working with the Basic Components That Make Up a Three.js Scene [article] Mesh animation [article]
Seeing a Heartbeat with a Motion Amplifying Camera

Remove everything that has no relevance to the story. If you say in the first chapter that there is a rifle hanging on the wall, in the second or third chapter it absolutely must go off. If it's not going to be fired, it shouldn't be hanging there.                                                                                              —Anton Chekhov King Julian: I don't know why the sacrifice didn't work. The science seemed so solid.                                                                                             —Madagascar: Escape 2 Africa (2008) Despite their strange design and mysterious engineering, Q's gadgets always prove useful and reliable. Bond has such faith in the technology that he never even asks how to charge the batteries. One of the inventive ideas in the Bond franchise is that even a lightly equipped spy should be able to see and photograph concealed objects, anyplace, anytime. Let's consider a timeline of a few relevant gadgets in the movies: 1967 (You Only Live Twice): An X-ray desk scans guests for hidden firearms. 1979 (Moonraker): A cigarette case contains an X-ray imaging system that is used to reveal the tumblers of a safe's combination lock. 1989 (License to Kill): A Polaroid camera takes X-ray photos. Oddly enough, its flash is a visible, red laser. 1995 (GoldenEye): A tea tray contains an X-ray scanner that can photograph documents beneath the tray. 1999 (The World is Not Enough): Bond wears a stylish pair of blue-lensed glasses that can see through one layer of clothing to reveal concealed weapons. According to the James Bond Encyclopedia (2007), which is an official guide to the movies, the glasses display infrared video after applying special processing to it. Despite using infrared, they are commonly called X-ray specs, a misnomer. These gadgets deal with unseen wavelengths of light (or radiation) and are broadly comparable to real-world devices such as airport security scanners and night vision goggles. However, it remains difficult to explain how Bond's equipment is so compact and how it takes such clear pictures in diverse lighting conditions and through diverse materials. Moreover, if Bond's devices are active scanners (meaning they emit X-ray radiation or infrared light), they will be clearly visible to other spies using similar hardware. To take another approach, what if we avoid unseen wavelengths of light but instead focus on unseen frequencies of motion? Many things move in a pattern that is too fast or too slow for us to easily notice. Suppose that a man is standing in one place. If he shifts one leg more than the other, perhaps he is concealing a heavy object, such as a gun, on the side that he shifts more. We also might fail to notice deviations from a pattern. Suppose the same man has been looking straight ahead but suddenly, when he believes no one is looking, his eyes dart to one side. Is he watching someone? We can make motions of a certain frequency more visible by repeating them, like a delayed afterimage or a ghost, with each repetition being more faint (less opaque) than the last. The effect is analogous to an echo or a ripple, and it is achieved using an algorithm called Eulerian video magnification. By applying this technique in this article by Joseph Howse, author of the book OpenCV for Secret Agents, we will build a desktop app that allows us to simultaneously see the present and selected slices of the past. 
The idea of experiencing multiple images simultaneously is, to me, quite natural because for the first 26 years of my life, I had strabismus—commonly called a lazy eye—that caused double vision. A surgeon corrected my eyesight and gave me depth perception but, in memory of strabismus, I would like to name this application Lazy Eyes. (For more resources related to this topic, see here.) Let's take a closer look—or two or more closer looks—at the fast-paced, moving world that we share with all the other secret agents. Planning the Lazy Eyes app Of all our apps, Lazy Eyes has the simplest user interface. It just shows a live video feed with a special effect that highlights motion. The parameters of the effect are quite complex and, moreover, modifying them at runtime would have a big effect on performance. Thus, we do not provide a user interface to reconfigure the effect, but we do provide many parameters in code to allow a programmer to create many variants of the effect and the app. The following is a screenshot illustrating one configuration of the app. This image shows me eating cake. My hands and face are moving often and we see an effect that looks like light and dark waves rippling around the places where moving edges have been. (The effect is more graceful in a live video than in a screenshot.) For more screenshots and an in-depth discussion of the parameters, refer to the section Configuring and testing the app for various motions, later in this article. Regardless of how it is configured, the app loops through the following actions: Capturing an image. Copying and downsampling the image while applying a blur filter and optionally an edge finding filter. We will downsample using so-called image pyramids, which will be discussed in Compositing two images using image pyramids, later in this article. The purpose of downsampling is to achieve a higher frame rate by reducing the amount of image data used in subsequent operations. The purpose of applying a blur filter and optionally an edge finding filter is to create haloes that are useful in amplifying motion. Storing the downsampled copy in a history of frames, with a timestamp. The history has a fixed capacity and once it is full, the oldest frame is overwritten to make room for the new one. If the history is not yet full, we continue to the next iteration of the loop. Decomposing the history into a list of frequencies describing fluctuations (motion) at each pixel. The decomposition function is called a Fast Fourier Transform. We will discuss this in the Extracting repeating signals from video using the Fast Fourier Transform section, later in this article. Setting all frequencies to zero except a certain chosen range that interests us. In other words, filter out the data on motions that are faster or slower than certain thresholds. Recomposing the filtered frequencies into a series of images that are motion maps. Areas that are still (with respect to our chosen range of frequencies) become dark, and areas that are moving might remain bright. The recomposition function is called an Inverse Fast Fourier Transform (IFFT), and we will discuss it later alongside the FFT. Upsampling the latest motion map (again using image pyramids), intensifying it, and overlaying it additively atop the original camera image. Showing the resulting composite image. There it is—a simple plan that requires a rather nuanced implementation and configuration. Let's prepare ourselves by doing a little background research. 
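To make this plan concrete before we dig into the background, here is a heavily simplified, single-threaded sketch of the loop just described. It is not the final app: it omits the wxPython GUI, the Laplacian option, and the multithreaded PyFFTW-based FFT that we add later, and it uses NumPy's FFT instead. All constants and variable names here are only illustrative.

# A simplified, unoptimized sketch of the Lazy Eyes loop (not the final app).
# Assumes the capture dimensions are divisible by 4 so that two pyrDown calls
# followed by two pyrUp calls restore the original size.
import collections
import time

import cv2
import numpy

HISTORY_LENGTH = 120          # downsampled frames kept for analysis
MIN_HZ, MAX_HZ = 0.833, 1.0   # band of motion frequencies to keep
AMPLIFICATION = 32.0

history = collections.deque(maxlen=HISTORY_LENGTH)
timestamps = collections.deque(maxlen=HISTORY_LENGTH)

capture = cv2.VideoCapture(0)
while True:
    success, image = capture.read()
    if not success:
        break
    # Downsample by a factor of 4 using a Gaussian pyramid.
    small = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(numpy.float32)
    small = cv2.pyrDown(cv2.pyrDown(small))
    history.append(small)
    timestamps.append(time.time())
    if len(history) < HISTORY_LENGTH:
        continue  # wait until the history is full
    # Decompose the history along its time axis and keep only the band of interest.
    data = numpy.array(history)
    fps = HISTORY_LENGTH / (timestamps[-1] - timestamps[0])
    freqs = numpy.fft.fftfreq(HISTORY_LENGTH, d=1.0 / fps)
    spectrum = numpy.fft.fft(data, axis=0)
    spectrum[(numpy.abs(freqs) < MIN_HZ) | (numpy.abs(freqs) > MAX_HZ)] = 0
    filtered = numpy.real(numpy.fft.ifft(spectrum, axis=0))
    motion = (filtered[-1] * AMPLIFICATION).astype(numpy.float32)
    # Upsample the latest motion map and composite it onto the camera image.
    overlay = cv2.cvtColor(cv2.pyrUp(cv2.pyrUp(motion)), cv2.COLOR_GRAY2BGR)
    cv2.convertScaleAbs(image + overlay, image)
    cv2.imshow('Lazy Eyes (sketch)', image)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
capture.release()
cv2.destroyAllWindows()

Running the full FFT on every frame like this is far too slow for a polished app, which is precisely why the real implementation will lean on the optimizations discussed next.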
Understanding what Eulerian video magnification can do Eulerian video magnification is inspired by a model in fluid mechanics called Eulerian specification of the flow field. Let's consider a moving, fluid body, such as a river. The Eulerian specification describes the river's velocity at a given position and time. The velocity would be fast in the mountains in springtime and slow at the river's mouth in winter. Also, the velocity would be slower at a silt-saturated point at the river's bottom, compared to a point where the river's surface hits a rock and sprays. An alternative to the Eulerian specification is the Lagrangian specification, which describes the position of a given particle at a given time. A given bit of silt might make its way down from the mountains to the river's mouth over a period of many years and then spend eons drifting around a tidal basin. For a more formal description of the Eulerian specification, the Lagrangian specification, and their relationship, refer to this Wikipedia article http://en.wikipedia.org/wiki/Lagrangian_and_Eulerian_specification_of_the_flow_field. The Lagrangian specification is analogous to many computer vision tasks, in which we model the motion of a particular object or feature over time. However, the Eulerian specification is analogous to our current task, in which we model any motion occurring in a particular position and a particular window of time. Having modeled a motion from an Eulerian perspective, we can visually exaggerate the motion by overlaying the model's results for a blend of positions and times. Let's set a baseline for our expectations of Eulerian video magnification by studying other people's projects: Michael Rubenstein's webpage at MIT (http://people.csail.mit.edu/mrub/vidmag/) gives an abstract and demo videos of his team's pioneering work on Eulerian video magnification. Bryce Drennan's Eulerian-magnification library (https://github.com/brycedrennan/eulerian-magnification) implements the algorithm using NumPy, SciPy, and OpenCV. This implementation is good inspiration for us, but it is designed to process prerecorded videos and is not sufficiently optimized for real-time input. Now, let's discuss the functions that are building blocks of these projects and ours. Extracting repeating signals from video using the Fast Fourier Transform (FFT) An audio signal is typically visualized as a bar chart or wave. The bar chart or wave is high when the sound is loud and low when it is soft. We recognize that a repetitive sound, such as a metronome's beat, makes repetitive peaks and valleys in the visualization. When audio has multiple channels (being a stereo or surround sound recording), each channel can be considered as a separate signal and can be visualized as a separate bar chart or wave. Similarly, in a video, every channel of every pixel can be considered as a separate signal, rising and falling (becoming brighter and dimmer) over time. Imagine that we use a stationary camera to capture a video of a metronome. Then, certain pixel values rise and fall at a regular interval as they capture the passage of the metronome's needle. If the camera has an attached microphone, its signal values rise and fall at the same interval. Based on either the audio or the video, we can measure the metronome's frequency—its beats per minute (bpm) or its beats per second (Hertz or Hz). Conversely, if we change the metronome's bpm setting, the effect on both the audio and the video is predictable. 
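As a quick, self-contained illustration of the metronome thought experiment, the following snippet synthesizes one pixel's brightness signal (the numbers are invented for demonstration, not taken from a real camera) and recovers the metronome's frequency from the frequency domain using NumPy's FFT:

# A made-up pixel signal: slow lighting drift at 0.1 Hz plus a 2 Hz flicker
# caused by a metronome needle passing through the pixel.
import numpy

fps = 30.0                      # assumed capture rate
t = numpy.arange(300) / fps     # 10 seconds of samples
signal = 0.5 * numpy.sin(2 * numpy.pi * 0.1 * t) + \
         numpy.sin(2 * numpy.pi * 2.0 * t)

spectrum = numpy.fft.fft(signal)
freqs = numpy.fft.fftfreq(len(signal), d=1.0 / fps)

# Report the strongest positive frequency.
positive = freqs > 0
peak = freqs[positive][numpy.argmax(numpy.abs(spectrum[positive]))]
print('Dominant frequency: %.2f Hz' % peak)   # approximately 2.00 Hz

The spike at 2 Hz is exactly the kind of information that Lazy Eyes will isolate and amplify, except that the app computes it for every pixel of a downsampled video history at once.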
From this thought experiment, we can learn that a signal—be it audio, video, or any other kind—can be expressed as a function of time and, equivalently, a function of frequency. Consider the following pair of graphs. They express the same signal, first as a function of time and then as a function of frequency. Within the time domain, we see one wide peak and valley (in other words, a tapering effect) spanning many narrow peaks and valleys. Within the frequency domain, we see a low-frequency peak and a high-frequency peak. The transformation from the time domain to the frequency domain is called the Fourier transform. Conversely, the transformation from the frequency domain to the time domain is called the inverse Fourier transform. Within the digital world, signals are discrete, not continuous, and we use the terms discrete Fourier transform (DFT) and inverse discrete Fourier transform (IDFT). There is a variety of efficient algorithms to compute the DFT or IDFT and such an algorithm might be described as a Fast Fourier Transform or an Inverse Fast Fourier Transform. For algorithmic descriptions, refer to the following Wikipedia article: http://en.wikipedia.org/wiki/Fast_Fourier_transform. The result of the Fourier transform (including its discrete variants) is a function that maps a frequency to an amplitude and phase. The amplitude represents the magnitude of the frequency's contribution to the signal. The phase represents a temporal shift; it determines whether the frequency's contribution starts on a high or a low. Typically, amplitude and phase are encoded in a complex number, a+bi, where amplitude=sqrt(a^2+b^2) and phase=atan2(a,b). For an explanation of complex numbers, refer to the following Wikipedia article: http://en.wikipedia.org/wiki/Complex_number. The FFT and IFFT are fundamental to a field of computer science called digital signal processing. Many signal processing applications, including Lazy Eyes, involve taking the signal's FFT, modifying or removing certain frequencies in the FFT result, and then reconstructing the filtered signal in the time domain using the IFFT. For example, this approach enables us to amplify certain frequencies while leaving others unchanged. Now, where do we find this functionality? Choosing and setting up an FFT library Several Python libraries provide FFT and IFFT implementations that can process NumPy arrays (and thus OpenCV images). Here are the five major contenders: NumPy: This provides FFT and IFFT implementations in a module called numpy.fft (for more information, refer to http://docs.scipy.org/doc/numpy/reference/routines.fft.html). The module also offers other signal processing functions to work with the output of the FFT. SciPy: This provides FFT and IFFT implementations in a module called scipy.fftpack (for more information refer to http://docs.scipy.org/doc/scipy/reference/fftpack.html). This SciPy module is closely based on the numpy.fft module, but adds some optional arguments and dynamic optimizations based on the input format. The SciPy module also adds more signal processing functions to work with the output of the FFT. OpenCV: This has implementations of FFT (cv2.dft) and IFT (cv2.idft). An official tutorial provides examples and a comparison to NumPy's FFT implementation at http://docs.opencv.org/doc/tutorials/core/discrete_fourier_transform/discrete_fourier_transform.html. 
OpenCV's FFT and IFT interfaces are not directly interoperable with the numpy.fft and scipy.fftpack modules that offer a broader range of signal processing functionality. (The data is formatted very differently.) PyFFTW: This is a Python wrapper (https://hgomersall.github.io/pyFFTW/) around a C library called the Fastest Fourier Transform in the West (FFTW) (for more information, refer to http://www.fftw.org/). FFTW provides multiple implementations of FFT and IFFT. At runtime, it dynamically selects implementations that are well optimized for given input formats, output formats, and system capabilities. Optionally, it takes advantage of multithreading (and its threads might run on multiple CPU cores, as the implementation releases Python's Global Interpreter Lock or GIL). PyFFTW provides optional interfaces matching NumPy's and SciPy's FFT and IFFT functions. These interfaces have a low overhead cost (thanks to good caching options that are provided by PyFFTW) and they help to ensure that PyFFTW is interoperable with a broader range of signal processing functionality as implemented in numpy.fft and scipy.fftpack. Reinka: This is a Python library for GPU-accelerated computations using either PyCUDA (http://mathema.tician.de/software/pycuda/) or PyOpenCL (http://mathema.tician.de/software/pyopencl/) as a backend. Reinka (http://reikna.publicfields.net/en/latest/) provides FFT and IFFT implementations in a module called reikna.fft. Reinka internally uses PyCUDA or PyOpenCL arrays (not NumPy arrays), but it provides interfaces for conversion from NumPy arrays to these GPU arrays and back. The converted NumPy output is compatible with other signal processing functionality as implemented in numpy.fft and scipy.fftpack. However, this compatibility comes at a high overhead cost due to locking, reading, and converting the contents of the GPU memory. NumPy, SciPy, OpenCV, and PyFFTW are open-source libraries under the BSD license. Reinka is an open-source library under the MIT license. I recommend PyFFTW because of its optimizations and its interoperability (at a low overhead cost) with all the other functionality that interests us in NumPy, SciPy, and OpenCV. For a tour of PyFFTW's features, including its NumPy- and SciPy-compatible interfaces, refer to the official tutorial at https://hgomersall.github.io/pyFFTW/sphinx/tutorial.html. Depending on our platform, we can set up PyFFTW in one of the following ways: In Windows, download and run a binary installer from https://pypi.python.org/pypi/pyFFTW. Choose the installer for either a 32-bit Python 2.7 or 64-bit Python 2.7 (depending on whether your Python installation, not necessarily your system, is 64-bit). In Mac with MacPorts, run the following command in Terminal: $ sudo port install py27-pyfftw In Ubuntu 14.10 (Utopic) and its derivatives, including Linux Mint 14.10, run the following command in Terminal: $ sudo apt-get install python-fftw3 In Ubuntu 14.04 and earlier versions (and derivatives thereof), do not use this package, as its version is too old. Instead, use the PyFFTW source bundle, as described in the last bullet of this list. In Debian Jessie, Debian Sid, and their derivatives, run the following command in Terminal: $ sudo apt-get install python-pyfftw In Debian Wheezy and its derivatives, including Raspbian, this package does not exist. Instead, use the PyFFTW source bundle, as described in the next bullet. For any other system, download the PyFFTW source bundle from https://pypi.python.org/pypi/pyFFTW. 
Decompress it and run the setup.py script inside the decompressed folder. Some old versions of the library are called PyFFTW3. We do not want PyFFTW3. However, on Ubuntu 14.10 and its derivatives, the packages are misnamed such that python-fftw3 is really the most recent packaged version (whereas python-fftw is an older PyFFTW3 version). We have our FFT and IFFT needs covered by FFTW (and if we were cowboys instead of secret agents, we could say, "Cover me!"). For additional signal processing functionality, we will use SciPy. Signal processing is not the only new material that we must learn for Lazy Eyes, so let's now look at other functionality that is provided by OpenCV. Compositing two images using image pyramids Running an FFT on a full-resolution video feed would be slow. Also, the resulting frequencies would reflect localized phenomena at each captured pixel, such that the motion map (the result of filtering the frequencies and then applying the IFFT) might appear noisy and overly sharpened. To address these problems, we want a cheap, blurry downsampling technique. However, we also want the option to enhance edges, which are important to our perception of motion. Our need for a blurry downsampling technique is fulfilled by a Gaussian image pyramid. A Gaussian filter blurs an image by making each output pixel a weighted average of many input pixels in the neighborhood. An image pyramid is a series in which each image is half the width and height of the previous image. The halving of image dimensions is achieved by decimation, meaning that every other pixel is simply omitted. A Gaussian image pyramid is constructed by applying a Gaussian filter before each decimation operation. Our need to enhance edges in downsampled images is fulfilled by a Laplacian image pyramid, which is constructed in the following manner. Suppose we have already constructed a Gaussian image pyramid. We take the image at level i+1 in the Gaussian pyramid, upsample it by duplicating the pixels, and apply a Gaussian filter to it again. We then subtract the result from the image at level i in the Gaussian pyramid to produce the corresponding image at level i of the Laplacian pyramid. Thus, the Laplacian image is the difference between a blurry, downsampled image and an even blurrier image that was downsampled, downsampled again, and upsampled. You might wonder how such an algorithm is a form of edge finding. Consider that edges are areas of local contrast, while non-edges are areas of local uniformity. If we blur a uniform area, it is still uniform—zero difference. If we blur a contrasting area, it becomes more uniform—nonzero difference. Thus, the difference can be used to find edges. The Gaussian and Laplacian image pyramids are described in detail in the journal article downloadable from http://web.mit.edu/persci/people/adelson/pub_pdfs/RCA84.pdf. This article is written by E. H. Adelson, C. H. Anderson, J. R. Bergen, P. J. Burt, and J. M. Ogden on Pyramid methods in image processing, RCA Engineer, vol. 29, no. 6, November/December 1984. Besides using image pyramids to downsample the FFT's input, we also use them to upsample the most recent frame of the IFFT's output. This upsampling step is necessary to create an overlay that matches the size of the original camera image so that we can composite the two. Like in the construction of the Laplacian pyramid, upsampling consists of duplicating pixels and applying a Gaussian filter. 
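The next paragraph introduces the OpenCV functions that implement these pyramid operations. As a small preview, here is a minimal sketch of one Gaussian level and one Laplacian level; the synthetic test image (a filled circle) is only for illustration, and any grayscale image with even dimensions would do.

# One Gaussian level and one Laplacian level, built with OpenCV's pyramid functions.
import cv2
import numpy

image = numpy.zeros((256, 256), numpy.float32)
cv2.circle(image, (128, 128), 60, 255, -1)   # a bright disc on a dark background

# Gaussian level: apply a Gaussian filter, then decimate to half the size.
gaussian = cv2.pyrDown(image)                # 128 x 128

# Laplacian level: subtract an even blurrier version (downsampled again and
# then upsampled) from the Gaussian level. Uniform areas become roughly zero,
# while edges keep large positive or negative values.
laplacian = gaussian - cv2.pyrUp(cv2.pyrDown(gaussian))

print(gaussian.shape, laplacian.shape)       # both (128, 128)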
OpenCV implements the relevant downsizing and upsizing functions as cv2.pyrDown and cv2.pyrUp. These functions are useful in compositing two images in general (whether or not signal processing is involved) because they enable us to soften differences while preserving edges. The OpenCV documentation includes a good tutorial on this topic at http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_pyramids/py_pyramids.html. Now, we are armed with the knowledge to implement Lazy Eyes!

Implementing the Lazy Eyes app
Let's create a new folder for Lazy Eyes and, in this folder, create copies of or links to the ResizeUtils.py and WxUtils.py files from any of our previous Python projects. Alongside the copies or links, let's create a new file, LazyEyes.py. Edit it and enter the following import statements:

import collections
import numpy
import cv2
import threading
import timeit
import wx

import pyfftw.interfaces.cache
from pyfftw.interfaces.scipy_fftpack import fft
from pyfftw.interfaces.scipy_fftpack import ifft
from scipy.fftpack import fftfreq

import ResizeUtils
import WxUtils

Besides the modules that we have used in previous projects, we are now using the standard library's collections module for efficient collections and the timeit module for precise timing. Also, for the first time, we are using signal processing functionality from PyFFTW and SciPy. Like our other Python applications, Lazy Eyes is implemented as a class that extends wx.Frame. Here are the declarations of the class and its initializer:

class LazyEyes(wx.Frame):

  def __init__(self, maxHistoryLength=360,
               minHz=5.0/6.0, maxHz=1.0,
               amplification=32.0, numPyramidLevels=2,
               useLaplacianPyramid=True,
               useGrayOverlay=True,
               numFFTThreads=4, numIFFTThreads=4,
               cameraDeviceID=0, imageSize=(480, 360),
               title='Lazy Eyes'):

The initializer's arguments affect the app's frame rate and the manner in which motion is amplified. These effects are discussed in detail in the section Configuring and testing the app for various motions, later in this article. The following is just a brief description of the arguments:
maxHistoryLength is the number of frames (including the current frame and preceding frames) that are analyzed for motion.
minHz and maxHz, respectively, define the slowest and fastest motions that are amplified.
amplification is the scale of the visual effect. A higher value means motion is highlighted more brightly.
numPyramidLevels is the number of pyramid levels by which frames are downsampled before signal processing is done. Remember that each level corresponds to downsampling by a factor of 2. Our implementation assumes numPyramidLevels > 0.
If useLaplacianPyramid is True, frames are downsampled using a Laplacian pyramid before the signal processing is done. The implication is that only edge motion is highlighted. Alternatively, if useLaplacianPyramid is False, a Gaussian pyramid is used, and the motion in all areas is highlighted.
If useGrayOverlay is True, frames are converted to grayscale before the signal processing is done. The implication is that motion is only highlighted in areas of grayscale contrast. Alternatively, if useGrayOverlay is False, motion is highlighted in areas that have contrast in any color channel.
numFFTThreads and numIFFTThreads, respectively, are the numbers of threads used in FFT and IFFT computations.
cameraDeviceID and imageSize are our usual capture parameters.
The initializer's implementation begins in the same way as our other Python apps.
It sets flags to indicate that the app is running and (by default) should be mirrored. It creates the capture object and configures its resolution to match the requested width and height if possible. Failing that, the device's default capture resolution is used. The relevant code is as follows:    self.mirrored = True    self._running = True    self._capture = cv2.VideoCapture(cameraDeviceID)   size = ResizeUtils.cvResizeCapture(       self._capture, imageSize)   w, h = size   self._imageWidth = w   self._imageHeight = h Next, we will determine the shape of the history of frames. It has at least 3 dimensions: a number of frames, a width, and height for each frame. The width and height are downsampled from the capture width and height based on the number of pyramid levels. If we are concerned about the color motion and not just the grayscale motion, the history also has a fourth dimension, consisting of 3 color channels. Here is the code to calculate the history's shape:    self._useGrayOverlay = useGrayOverlay   if useGrayOverlay:     historyShape = (maxHistoryLength,             h >> numPyramidLevels,             w >> numPyramidLevels)   else:     historyShape = (maxHistoryLength,             h >> numPyramidLevels,             w >> numPyramidLevels, 3) Note the use of >>, the right bit shift operator, to reduce the dimensions by a power of 2. The power is equal to the number of pyramid levels. We will store the specified maximum history length. For the frames in the history, we will create a NumPy array of the shape that we just determined. For timestamps of the frames, we will create a deque (double-ended queue), a type of collection that allows us to cheaply add or remove elements from either end:    self._maxHistoryLength = maxHistoryLength   self._history = numpy.empty(historyShape,                 numpy.float32)   self._historyTimestamps = collections.deque() We will store the remaining arguments because later, in each frame, we will pass them to the pyramid functions and signal the processing functions:    self._numPyramidLevels = numPyramidLevels   self._useLaplacianPyramid = useLaplacianPyramid    self._minHz = minHz   self._maxHz = maxHz   self._amplification = amplification    self._numFFTThreads = numFFTThreads   self._numIFFTThreads = numIFFTThreads To ensure meaningful error messages and early termination in the case of invalid arguments, we could add code such as the following for each argument: assert numPyramidLevels > 0,      'numPyramidLevels must be positive.' For brevity, such assertions are omitted from our code samples. We call the following two functions to tell PyFFTW to cache its data structures (notably, its NumPy arrays) for a period of at least 1.0 second from their last use. (The default is 0.1 seconds.) 
Caching is a critical optimization for the PyFFTW interfaces that we are using, and we will choose a period that is more than long enough to keep the cache alive from frame to frame:    pyfftw.interfaces.cache.enable()   pyfftw.interfaces.cache.set_keepalive_time(1.0) The initializer's implementation ends with code to set up a window, event bindings, a bitmap, layout, and background thread—all familiar tasks from our previous Python projects:    style = wx.CLOSE_BOX | wx.MINIMIZE_BOX |        wx.CAPTION | wx.SYSTEM_MENU |        wx.CLIP_CHILDREN   wx.Frame.__init__(self, None, title=title,             style=style, size=size)    self.Bind(wx.EVT_CLOSE, self._onCloseWindow)    quitCommandID = wx.NewId()   self.Bind(wx.EVT_MENU, self._onQuitCommand,         id=quitCommandID)   acceleratorTable = wx.AcceleratorTable([     (wx.ACCEL_NORMAL, wx.WXK_ESCAPE,       quitCommandID)   ])   self.SetAcceleratorTable(acceleratorTable)    self._staticBitmap = wx.StaticBitmap(self,                       size=size)   self._showImage(None)    rootSizer = wx.BoxSizer(wx.VERTICAL)   rootSizer.Add(self._staticBitmap)   self.SetSizerAndFit(rootSizer)    self._captureThread = threading.Thread(       target=self._runCaptureLoop)   self._captureThread.start() We must modify our usual _onCloseWindow callback to disable PyFFTW's cache. Disabling the cache ensures that resources are freed and that PyFFTW's threads terminate normally. The callback's implementation is given in the following code:  def _onCloseWindow(self, event):   self._running = False   self._captureThread.join()   pyfftw.interfaces.cache.disable()   self.Destroy() The Esc key is bound to our usual _onQuitCommand callback, which just closes the app:  def _onQuitCommand(self, event):   self.Close() The loop running on our background thread is similar to the one in our other Python apps. In each frame, it calls a helper function, _applyEulerianVideoMagnification. Here is the loop's implementation.  def _runCaptureLoop(self):   while self._running:     success, image = self._capture.read()     if image is not None:        self._applyEulerianVideoMagnification(           image)       if (self.mirrored):         image[:] = numpy.fliplr(image)     wx.CallAfter(self._showImage, image) The _applyEulerianVideoMagnification helper function is quite long so we will consider its implementation in several chunks. First, we will create a timestamp for the frame and copy the frame to a format that is more suitable for processing. Specifically, we will use a floating point array with either one gray channel or 3 color channels, depending on the configuration:  def _applyEulerianVideoMagnification(self, image):    timestamp = timeit.default_timer()    if self._useGrayOverlay:     smallImage = cv2.cvtColor(         image, cv2.COLOR_BGR2GRAY).astype(             numpy.float32)   else:     smallImage = image.astype(numpy.float32) Using this copy, we will calculate the appropriate level in the Gaussian or Laplacian pyramid:    # Downsample the image using a pyramid technique.   i = 0   while i < self._numPyramidLevels:     smallImage = cv2.pyrDown(smallImage)     i += 1   if self._useLaplacianPyramid:     smallImage[:] -=        cv2.pyrUp(cv2.pyrDown(smallImage)) For the purposes of the history and signal processing functions, we will refer to this pyramid level as "the image" or "the frame". Next, we will check the number of history frames that have been filled so far. 
If the history has more than one unfilled frame (meaning the history will still not be full after adding this frame), we will append the new image and timestamp and then return early, such that no signal processing is done until a later frame:    historyLength = len(self._historyTimestamps)    if historyLength < self._maxHistoryLength - 1:      # Append the new image and timestamp to the     # history.     self._history[historyLength] = smallImage     self._historyTimestamps.append(timestamp)      # The history is still not full, so wait.     return If the history is just one frame short of being full (meaning the history will be full after adding this frame), we will append the new image and timestamp using the code given as follows:    if historyLength == self._maxHistoryLength - 1:     # Append the new image and timestamp to the     # history.     self._history[historyLength] = smallImage     self._historyTimestamps.append(timestamp) If the history is already full, we will drop the oldest image and timestamp and append the new image and timestamp using the code given as follows:    else:     # Drop the oldest image and timestamp from the     # history and append the new ones.     self._history[:-1] = self._history[1:]     self._historyTimestamps.popleft()     self._history[-1] = smallImage     self._historyTimestamps.append(timestamp)    # The history is full, so process it. The history of image data is a NumPy array and, as such, we are using the terms "append" and "drop" loosely. NumPy arrays are immutable, meaning they cannot grow or shrink. Moreover, we are not recreating this array because it is large and reallocating it each frame would be expensive. We are just overwriting data within the array by moving the old data leftward and copying the new data in. Based on the timestamps, we will calculate the average time per frame in the history, as seen in the following code:    # Find the average length of time per frame.   startTime = self._historyTimestamps[0]   endTime = self._historyTimestamps[-1]   timeElapsed = endTime - startTime   timePerFrame =        timeElapsed / self._maxHistoryLength   #print 'FPS:', 1.0 / timePerFrame We will proceed with a combination of signal processing functions, collectively called a temporal bandpass filter. This filter blocks (zeros out) some frequencies and allows others to pass (remain unchanged). Our first step in implementing this filter is to run the pyfftw.interfaces.scipy_fftpack.fft function using the history and a number of threads as arguments. Also, with the argument axis=0, we will specify that the history's first axis is its time axis:    # Apply the temporal bandpass filter.   fftResult = fft(self._history, axis=0,           threads=self._numFFTThreads) We will pass the FFT result and the time per frame to the scipy.fftpack.fftfreq function. This function returns an array of midpoint frequencies (Hz in our case) corresponding to the indices in the FFT result. (This array answers the question, "Which frequency is the midpoint of the bin of frequencies represented by index i in the FFT?".) We will find the indices whose midpoint frequencies lie closest (minimum absolute value difference) to our initializer's minHz and maxHz parameters. 
Then, we will modify the FFT result by setting the data to zero in all ranges that do not represent the frequencies of interest:    frequencies = fftfreq(        self._maxHistoryLength, d=timePerFrame)   lowBound = (numpy.abs(       frequencies - self._minHz)).argmin()   highBound = (numpy.abs(       frequencies - self._maxHz)).argmin()   fftResult[:lowBound] = 0j   fftResult[highBound:-highBound] = 0j   fftResult[-lowBound:] = 0j The FFT result is symmetrical: fftResult[i] and fftResult[-i] pertain to the same bin of frequencies. Thus, we will modify the FFT result symmetrically. Remember, the Fourier transform maps a frequency to a complex number that encodes an amplitude and phase. Thus, while the indices of the FFT result correspond to frequencies, the values contained at those indices are complex numbers. Zero as a complex number is written in Python as 0+0j or 0j. Having thus filtered out the frequencies that do not interest us, we will finish applying the temporal bandpass filter by passing the data to the pyfftw.interfaces.scipy_fftpack.ifft function:    ifftResult = ifft(fftResult, axis=0,           threads=self._numIFFTThreads) From the IFFT result, we will take the most recent frame. It should somewhat resemble the current camera frame, but should be black in areas that do not exhibit recent motion matching our parameters. We will multiply this filtered frame so that the non-black areas become bright. Then, we will upsample it (using a pyramid technique) and add the result to the current camera frame so that areas of motion are lit up. Here is the relevant code, which concludes the _applyEulerianVideoMagnification method:    # Amplify the result and overlay it on the   # original image.   overlay = numpy.real(ifftResult[-1]) *            self._amplification   i = 0   while i < self._numPyramidLevels:     overlay = cv2.pyrUp(overlay)     i += 1   if self._useGrayOverlay:    overlay = cv2.cvtColor(overlay,                 cv2.COLOR_GRAY2BGR   cv2.convertScaleAbs(image + overlay, image) To finish the implementation of the LazyEyes class, we will display the image in the same manner as we have done in our other Python apps. Here is the relevant method:  def _showImage(self, image):   if image is None:     # Provide a black bitmap.     bitmap = wx.EmptyBitmap(self._imageWidth,                 self._imageHeight)   else:     # Convert the image to bitmap format.     bitmap = WXUtils.wxBitmapFromCvImage(image)   # Show the bitmap.   self._staticBitmap.SetBitmap(bitmap) Our module's main function just instantiates and runs the app, as seen in the following code: def main(): app = wx.App() lazyEyes = LazyEyes() lazyEyes.Show() app.MainLoop() if __name__ == '__main__': main() That's all! Run the app and stay quite still while it builds up its history of frames. Until the history is full, the video feed will not show any special effect. At the history's default length of 360 frames, it fills in about 20 seconds on my machine. Once it is full, you should see ripples moving through the video feed in areas of recent motion—or perhaps all areas if the camera is moved or the lighting or exposure is changed. The ripples will gradually settle and disappear in areas of the scene that become still, while new ripples will appear in new areas of motion. Feel free to experiment on your own. Now, let's discuss a few recipes for configuring and testing the parameters of the LazyEyes class. 
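Before we do, it may help to see the signal processing core in isolation. The following is a condensed, standalone restatement of the temporal bandpass filter built in the preceding chunks. It uses numpy.fft rather than PyFFTW, and the function name temporal_bandpass is our own, not part of the app.

# A standalone summary of the temporal bandpass filter, for experimentation.
import numpy
from scipy.fftpack import fftfreq

def temporal_bandpass(history, timePerFrame, minHz, maxHz):
    """Zero out all temporal frequencies outside [minHz, maxHz] and return the
    filtered history. The history's first axis must be its time axis. As in
    the app's code, this assumes minHz maps above the lowest nonzero bin."""
    fftResult = numpy.fft.fft(history, axis=0)
    frequencies = fftfreq(history.shape[0], d=timePerFrame)
    lowBound = (numpy.abs(frequencies - minHz)).argmin()
    highBound = (numpy.abs(frequencies - maxHz)).argmin()
    # The FFT result is symmetrical, so zero the excluded ranges symmetrically.
    fftResult[:lowBound] = 0j
    fftResult[highBound:-highBound] = 0j
    fftResult[-lowBound:] = 0j
    return numpy.real(numpy.fft.ifft(fftResult, axis=0))

# For example, with a 360-frame history captured at roughly 15 FPS:
# filtered = temporal_bandpass(history, 1.0 / 15.0, 5.0 / 6.0, 1.0)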
Configuring and testing the app for various motions

Currently, our main function initializes the LazyEyes object with the default parameters. By filling in the same parameter values explicitly, we would have this statement:

    lazyEyes = LazyEyes(maxHistoryLength=360,
                        minHz=5.0/6.0, maxHz=1.0,
                        amplification=32.0,
                        numPyramidLevels=2,
                        useLaplacianPyramid=True,
                        useGrayOverlay=True,
                        numFFTThreads=4,
                        numIFFTThreads=4,
                        imageSize=(480, 360))

This recipe calls for a capture resolution of 480 x 360 and a signal processing resolution of 120 x 90 (as we are downsampling by 2 pyramid levels, or a factor of 4). We are amplifying the motion only at frequencies of 0.833 Hz to 1.0 Hz, only at edges (as we are using the Laplacian pyramid), only in grayscale, and only over a history of 360 frames (about 20 to 40 seconds, depending on the frame rate). Motion is exaggerated by a factor of 32. These settings are suitable for many subtle upper-body movements such as a person's head swaying side to side, shoulders heaving while breathing, nostrils flaring, eyebrows rising and falling, and eyes scanning to and fro. For performance, the FFT and IFFT are each using 4 threads.

Here is a screenshot of the app that runs with these default parameters. Moments before taking the screenshot, I smiled like a comic theater mask and then I recomposed my normal expression. Note that my eyebrows and moustache are visible in multiple positions, including their current, low positions and their previous, high positions. For the sake of capturing the motion amplification effect in a still image, this gesture is quite exaggerated. However, in a moving video, we can see the amplification of more subtle movements too.

Here is a less extreme example where my eyebrows appear very tall because I raised and lowered them:

The parameters interact with each other in complex ways. Consider the following relationships between these parameters:

- Frame rate is greatly affected by the size of the input data for the FFT and IFFT functions. The size of the input data is determined by maxHistoryLength (where a shorter length provides less input and thus a faster frame rate), numPyramidLevels (where a greater level implies less input), useGrayOverlay (where True means less input), and imageSize (where a smaller size implies less input).
- Frame rate is also greatly affected by the level of multithreading of the FFT and IFFT functions, as determined by numFFTThreads and numIFFTThreads (a greater number of threads is faster up to some point).
- Frame rate is slightly affected by useLaplacianPyramid (False implies a faster frame rate), as the Laplacian algorithm requires extra steps beyond the Gaussian image.
- Frame rate determines the amount of time that maxHistoryLength represents.
- Frame rate and maxHistoryLength determine how many repetitions of motion (if any) can be captured in the minHz to maxHz range.
- The number of captured repetitions, together with amplification, determines how greatly a motion or a deviation from the motion will be amplified.
- The inclusion or exclusion of noise is affected by minHz and maxHz (depending on which frequencies of noise are characteristic of the camera), numPyramidLevels (where more implies a less noisy image), useLaplacianPyramid (where True is less noisy), useGrayOverlay (where True is less noisy), and imageSize (where a smaller size implies a less noisy image).
- The inclusion or exclusion of motion is affected by numPyramidLevels (where fewer levels make the amplification more inclusive of small motions), useLaplacianPyramid (where False is more inclusive of motion in non-edge areas), useGrayOverlay (where False is more inclusive of motion in areas of color contrast), minHz (where a lower value is more inclusive of slow motion), maxHz (where a higher value is more inclusive of fast motion), and imageSize (where a bigger size is more inclusive of small motions).

Subjectively, the visual effect is always best when the frame rate is high, noise is excluded, and small motions are included. Again subjectively, the other conditions for including or excluding motion (edge versus non-edge, grayscale contrast versus color contrast, and fast versus slow) are application-dependent.

Let's try our hand at reconfiguring Lazy Eyes, starting with the numFFTThreads and numIFFTThreads parameters. We want to determine the numbers of threads that maximize Lazy Eyes' frame rate on your machine. The more CPU cores you have, the more threads you can gainfully use. However, experimentation is the best guide to picking a number.

To get a log of the frame rate in Lazy Eyes, uncomment the following line in the _applyEulerianVideoMagnification method:

    print 'FPS:', 1.0 / timePerFrame

Run LazyEyes.py from the command line. Once the history fills up, the history's average FPS will be printed to the command line on every frame. Wait until this average FPS value stabilizes. It might take a minute for the average to adjust to the effect of the FFT and IFFT functions, which begin running once the history is full. Take note of the FPS value, close the app, adjust the thread count parameters, and test again. Repeat until you feel that you have enough data to pick good numbers of threads to use on your hardware.

By activating additional CPU cores, multithreading can cause your system's temperature to rise. As you experiment, monitor your machine's temperature, fans, and CPU usage statistics. If you become concerned, reduce the number of FFT and IFFT threads. Having a suboptimal frame rate is better than overheating your machine.

Now, experiment with other parameters to see how they affect FPS. The numPyramidLevels, useGrayOverlay, and imageSize parameters should all have a large effect. For subjectively good visual results, try to maintain a frame rate above 10 FPS. A frame rate above 15 FPS is excellent. When you are satisfied that you understand the parameters' effects on frame rate, comment out the following line again, because the print statement is itself an expensive operation that can reduce the frame rate:

    #print 'FPS:', 1.0 / timePerFrame

Let's try another recipe. Although our default recipe accentuates motion at edges that have high grayscale contrast, this next recipe accentuates motion in all areas (edge or non-edge) that have high contrast (color or grayscale). By considering 3 color channels instead of one grayscale channel, we are tripling the amount of data being processed by the FFT and IFFT. To offset this change, we will cut each dimension of the capture resolution to two thirds of its default value, thus reducing the amount of data to 2/3 * 2/3 = 4/9 times the default amount. As a net change, the FFT and IFFT process 3 * 4/9 = 4/3 times the default amount of data, which is a relatively small change. The quick calculation after this paragraph confirms it.
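To make that arithmetic concrete, here is a quick sanity check. It is a throwaway snippet, not part of the app, and it assumes the shapes described earlier: 2 pyramid levels downsample each dimension by a factor of 4, and the history holds 360 frames:

    # Values per FFT call, assuming a factor-of-4 downsample
    # (2 pyramid levels) and a 360-frame history.
    historyLength = 360

    # Default recipe: 480 x 360 capture, grayscale.
    defaultSize = (480 // 4) * (360 // 4) * historyLength

    # New recipe: 320 x 240 capture, 3 color channels.
    colorSize = (320 // 4) * (240 // 4) * 3 * historyLength

    print(defaultSize)                      # 3888000
    print(colorSize)                        # 5184000
    print(float(colorSize) / defaultSize)   # 1.333... (4/3)

The color recipe processes about 5.2 million values per FFT versus about 3.9 million for the default recipe, which is the 4/3 ratio mentioned above.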
The following initialization statement shows our new recipe's parameters:

    lazyEyes = LazyEyes(useLaplacianPyramid=False,
                        useGrayOverlay=False,
                        imageSize=(320, 240))

Note that we are still using the default values for most parameters. If you have found non-default values that work well for numFFTThreads and numIFFTThreads on your machine, enter them as well.

Here are screenshots to show our new recipe's effect. This time, let's look at a non-extreme example first. I was typing on my laptop when this was taken. Note the haloes around my arms, which move a lot when I type, and a slight distortion and discoloration of my left cheek (the viewer's left in this mirrored image). My left cheek twitches a little when I think. Apparently, it is a tic, already known to my friends and family but rediscovered by me with the help of computer vision! If you are viewing the color version of this image in the e-book, you should see that the haloes around my arms take a green hue from my shirt and a red hue from the sofa. Similarly, the haloes on my cheek take a magenta hue from my skin and a brown hue from my hair.

Now, let's consider a more fanciful example. If we were Jedi instead of secret agents, we might wave a steel ruler in the air and pretend it was a lightsaber. While testing the theory that Lazy Eyes could make the ruler look like a real lightsaber, I took the following screenshot. The image shows two pairs of light and dark lines in two places where I was waving the lightsaber ruler. One of the pairs of lines passes through each of my shoulders. The Light Side (the light line) and the Dark Side (the dark line) show opposite ends of the ruler's path as I waved it. The lines are especially clear in the color version in the e-book.

Finally, the moment for which we have all been waiting is … a recipe for amplifying a heartbeat! If you have a heart rate monitor, start by measuring your heart rate. Mine is approximately 87 bpm as I type these words and listen to inspiring ballads by the Canadian folk singer Stan Rogers. To convert bpm to Hz, divide the bpm value by 60 (the number of seconds per minute), giving (87 / 60) Hz = 1.45 Hz in my case. The most visible effect of a heartbeat is that a person's skin changes color, becoming more red or purple when blood is pumped through an area. Thus, let's modify our second recipe, which is able to amplify color motions in non-edge areas. Choosing a frequency range centered on 1.45 Hz, we have the following initializer:

    lazyEyes = LazyEyes(minHz=1.4, maxHz=1.5,
                        useLaplacianPyramid=False,
                        useGrayOverlay=False,
                        imageSize=(320, 240))

Customize minHz and maxHz based on your own heart rate. Also, specify numFFTThreads and numIFFTThreads if non-default values work best for you on your machine.

Even amplified, a heartbeat is difficult to show in still images; it is much clearer in the live video while running the app. However, take a look at the following pair of screenshots. My skin in the left-hand screenshot is more yellow (and lighter), while in the right-hand screenshot it is more purple (and darker). For comparison, note that there is no change in the cream-colored curtains in the background.

Three recipes are a good start, certainly enough to fill an episode of a cooking show on TV. Go observe some other motions in your environment, try to estimate their frequencies, and then configure Lazy Eyes to amplify them. How do they look with grayscale amplification versus color amplification? Edge (Laplacian) versus area (Gaussian)? Different history lengths, pyramid levels, and amplification multipliers? As a starting point, the small helper below shows one way to turn a measurement into a passband.
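The helper below is hypothetical and is not part of LazyEyes.py; the function names and the 0.05 Hz tolerance are simply illustrative choices. It converts a measured heart rate, or any observed repetition period, into minHz and maxHz values for the initializer:

    def hzBandFromBpm(bpm, tolerance=0.05):
        # Hypothetical helper: convert beats per minute to a
        # (minHz, maxHz) band around the corresponding frequency.
        hz = bpm / 60.0
        return (hz - tolerance, hz + tolerance)

    def hzBandFromPeriod(seconds, tolerance=0.05):
        # Hypothetical helper: convert an observed period, in
        # seconds per repetition, to a (minHz, maxHz) band.
        hz = 1.0 / seconds
        return (hz - tolerance, hz + tolerance)

    minHz, maxHz = hzBandFromBpm(87)   # roughly (1.4, 1.5), as in the recipe above

A band like this can then be passed to LazyEyes(minHz=minHz, maxHz=maxHz, ...) along with whichever other parameters suit the scene.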
Check the book's support page, http://www.nummist.com/opencv, for additional recipes, and feel free to share your own by mailing me at [email protected].

Seeing things in another light

Although we began this article by presenting Eulerian video magnification as a useful technique for visible light, it is also applicable to other kinds of light or radiation. For example, a person's blood (in veins and bruises) is more visible when imaged in ultraviolet (UV) or near infrared (NIR) than in visible light. (Skin is more transparent to UV and NIR.) Thus, a UV or NIR video is likely to be a better input if we are trying to magnify a person's pulse. Here are some examples of NIR and UV cameras that might provide useful results, though I have not tested them:

- The Pi NoIR camera (http://www.raspberrypi.org/products/pi-noir-camera/) is a consumer-grade NIR camera with a MIPI interface. A time-lapse video showing how the Pi NoIR renders outdoor scenes is available at https://www.youtube.com/watch?v=LLA9KHNvUK8. The camera is designed for Raspberry Pi, and on Raspbian it has V4L-compatible drivers that are directly compatible with OpenCV's VideoCapture class. Some Raspberry Pi clones might have drivers for it too. Unfortunately, Raspberry Pi is too slow to run Eulerian video magnification in real time. However, streaming the Pi NoIR input from Raspberry Pi to a desktop, via Ethernet, might allow for a real-time solution.
- The Agama V-1325R (http://www.agamazone.com/products_v1325r.html) is a consumer-grade NIR camera with a USB interface. It is officially supported on Windows and Mac. Users report that it also works on Linux. It includes four NIR LEDs, which can be turned on and off via the vendor's proprietary software on Windows.
- Artray offers a series of industrial-grade NIR cameras called InGaAs (http://www.artray.us/ingaas.html), as well as a series of industrial-grade UV cameras (http://www.artray.us/usb2_uv.html). The cameras have USB interfaces. Windows drivers and an SDK are available from the vendor. A third-party project called OpenCV ARTRAY SDK (for more information, refer to https://github.com/eiichiromomma/CVMLAB/wiki/OpenCV-ARTRAY-SDK) aims to provide interoperability with at least OpenCV's C API.

Good luck and good hunting in the invisible light!

Summary

This article has introduced you to the relationship between computer vision and digital signal processing. We have considered a video feed as a collection of many signals, one for each channel value of each pixel, and we have understood that repetitive motions create wave patterns in some of these signals. We have used the Fast Fourier Transform and its inverse to create an alternative video stream that only sees certain frequencies of motion. Finally, we have superimposed this filtered video atop the original to amplify the selected frequencies of motion. There, we summarized Eulerian video magnification in 100 words!

Our implementation adapts Eulerian video magnification to real time by running the FFT repeatedly on a sliding window of recently captured frames rather than running it once on an entire prerecorded video. We have also considered optimizations such as limiting our signal processing to grayscale, recycling large data structures rather than recreating them, and using several threads.