
How-To Tutorials - Programming

1081 Articles
Working with AMQP

Packt
18 Dec 2013
5 min read
Broadcasting messages

In this example we will see how to send the same message to a possibly large number of consumers. This is a typical messaging application: broadcasting to a huge number of clients, for example when updating the scoreboard in a massive multiplayer game, or when publishing news in a social network application. In this article we discuss both the producer and the consumer implementation. Since it is very common to have consumers built with different technologies and programming languages, we use Java, Python, and Ruby to show interoperability with AMQP. Along the way we will appreciate the benefits of having separate exchanges and queues in AMQP.

Getting ready

To follow this article you will need to set up Java, Python, and Ruby environments as described.

How to do it…

For this article we prepare four different programs: the Java publisher, the Java consumer, the Python consumer, and the Ruby consumer.

To prepare the Java publisher:

1. Declare a fanout exchange:
channel.exchangeDeclare(myExchange, "fanout");

2. Send one message to the exchange:
channel.basicPublish(myExchange, "", null, jsonmessage.getBytes());

Then, to prepare the Java consumer:

3. Declare the same fanout exchange declared by the producer:
channel.exchangeDeclare(myExchange, "fanout");

4. Autocreate a new temporary queue:
String queueName = channel.queueDeclare().getQueue();

5. Bind the queue to the exchange:
channel.queueBind(queueName, myExchange, "");

6. Define a custom, non-blocking consumer.

7. Consume messages by invoking channel.basicConsume().

The source code of the Python consumer is very similar to the Java consumer, so there is no need to repeat the steps. In the Ruby consumer you need to use require "bunny" and then use the URI connection.

We are now ready to put it all together and see the article in action:

Start one instance of the Java producer; messages start getting published immediately.

Start one or more instances of the Java/Python/Ruby consumer; the consumers receive only the messages sent while they are running.

Stop and restart one of the consumers, and see that the consumer has lost the messages sent while it was down.

How it works…

Both the producer and the consumers are connected to RabbitMQ with a single connection, but the logical path of the messages is depicted in the following figure.

In step 1 we declared the exchange that we are using. The logic is the same as in the queue declaration: if the specified exchange doesn't exist, create it; otherwise, do nothing. The second argument of exchangeDeclare() is a string specifying the type of the exchange, fanout in this case.

In step 2 the producer sends one message to the exchange. You can view it, along with the other defined exchanges, by issuing the following command in the RabbitMQ command shell:

rabbitmqctl list_exchanges

The second argument in the call to channel.basicPublish() is the routing key, which is always ignored when used with a fanout exchange. The third argument, set to null, holds the optional message properties. The fourth argument is the message itself.

When we started one consumer, it created its own temporary queue (step 4). Using the empty channel.queueDeclare() overload, we create a nondurable, exclusive, autodelete queue with an autogenerated name. Launching a couple of consumers and issuing rabbitmqctl list_queues, we can see two queues, one per consumer, with their odd names, along with the persistent myFirstQueue, as shown in the following screenshot.

In step 5 we have bound the queues to myExchange.
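Putting the consumer-side steps (3 to 7) together, a minimal Java consumer might look like the following sketch. This is an illustration only, not code from the book: the exchange name reuses the myLastnews.fanout_6 exchange mentioned later in this section, the localhost connection settings are assumptions, and the RabbitMQ Java client library is assumed to be on the classpath.

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class FanoutConsumer {
    public static void main(String[] args) throws Exception {
        String myExchange = "myLastnews.fanout_6";           // assumption: the exchange name used in this article

        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                        // assumption: broker on localhost, default credentials
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Step 3: declare the same fanout exchange declared by the producer
        channel.exchangeDeclare(myExchange, "fanout");

        // Step 4: autocreate a temporary (nondurable, exclusive, autodelete) queue
        String queueName = channel.queueDeclare().getQueue();

        // Step 5: bind the queue to the exchange; the routing key is ignored by fanout exchanges
        channel.queueBind(queueName, myExchange, "");

        // Steps 6 and 7: define a non-blocking consumer and start consuming
        channel.basicConsume(queueName, true, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws IOException {
                System.out.println("Received: " + new String(body, StandardCharsets.UTF_8));
            }
        });
        System.out.println("Waiting for messages...");
    }
}
```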
It is possible to monitor these bindings too, by issuing the following command:

rabbitmqctl list_bindings

Monitoring is a very important aspect of AMQP: messages are routed by exchanges to the bound queues, and buffered in the queues. Exchanges do not buffer messages; they are just logical elements.

The fanout exchange routes messages by simply placing a copy of them in each bound queue. So, if there are no bound queues, the messages are received by no consumer at all. As soon as we close one consumer, we implicitly destroy its private temporary queue (that's why the queues are autodelete; otherwise, these queues would be left behind unused, and the number of queues on the broker would increase indefinitely), and messages are no longer buffered to it. When we restart the consumer, it creates a new, independent queue, and as soon as we bind it to myExchange, messages sent by the publisher are buffered into this queue and pulled by the consumer itself.

There's more…

When RabbitMQ is started for the first time, it creates some predefined exchanges. Issuing rabbitmqctl list_exchanges, we can observe many existing exchanges in addition to the one that we have defined in this article.

All the amq.* exchanges listed here are already defined by all AMQP-compliant brokers and can be used instead of defining your own exchanges; they do not need to be declared at all. We could have used amq.fanout in place of myLastnews.fanout_6, and this is a good choice for very simple applications. However, applications generally declare and use their own exchanges.

See also

With the overload used in this article, the exchange is non-autodelete (it won't be deleted as soon as the last client detaches) and non-durable (it won't survive server restarts). You can find more available options and overloads at http://www.rabbitmq.com/releases/rabbitmq-java-client/current-javadoc/.

Summary

In this article we mainly used Java, since this language is widely used in enterprise software development, integration, and distribution, and RabbitMQ is a perfect fit in this environment.

Further resources on this subject: RabbitMQ Acknowledgements [article], Getting Started with ZeroMQ [article], Setting up GlassFish for JMS and Working with Message Queues [article]

Hello OpenCL

Packt
18 Dec 2013
16 min read
The Wikipedia definition says that Parallel Computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (in parallel). There are many parallel computing programming standards and API specifications, such as OpenMP, OpenMPI, Pthreads, and so on. This book is all about OpenCL parallel programming. In this article we will start with a discussion of the different types of parallel programming. We will first introduce you to OpenCL and its different components. We will also take a look at the various hardware and software vendors of OpenCL and their OpenCL installation steps. Finally, at the end of the article, we will look at an OpenCL example program, SAXPY, in detail, along with its implementation.

Advances in computer architecture

Throughout the 20th century computer architectures advanced by multiple folds. The trend is continuing in the 21st century and will remain so for a long time to come. Some of these trends in architecture follow Moore's Law: "Moore's law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years." Many devices in the computer industry are linked to Moore's law, whether they are DSPs, memory devices, or digital cameras.

All the hardware advances would be of no use if there weren't any software advances. Algorithms and software applications grow in complexity as more and more user interaction comes into play. An algorithm can be highly sequential, or it may be parallelized by using a parallel computing framework.

Amdahl's Law is used to predict the speedup that can be obtained for an algorithm given n threads. This speedup depends on the amount of strictly serial, non-parallelizable code, B. The time T(n) an algorithm takes to finish when executed on n thread(s) of execution is:

T(n) = T(1) * (B + (1 - B)/n)

Therefore, the theoretical speedup that can be obtained for a given algorithm is:

Speedup(n) = 1 / (B + (1 - B)/n)

Amdahl's Law has a limitation: it does not fully exploit the computing power that becomes available as the number of processing cores increases. Gustafson's Law takes into account the scaling of the platform by adding more processing elements to it. This law assumes that the total amount of work that can be done in parallel varies linearly with the increase in the number of processing elements. Let an algorithm's execution time be decomposed into (a + b), where a is the serial execution time and b is the parallel execution time. Then the corresponding speedup for P parallel elements is given by:

Speedup(P) = (a + P*b) / (a + b)

Now, defining α as a/(a + b), the sequential execution component, gives the speedup for P processing elements:

Speedup(P) = P - α*(P - 1)

Given a problem that can be solved using OpenCL, the same problem can also be solved on different hardware with different capabilities. Gustafson's law suggests that with more computing units, the data set should also increase; that is, "fixed work per processor". Amdahl's law, on the other hand, gives the speedup that can be obtained for the existing data set if more computing units are added; that is, "fixed work for all processors".
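As a quick sanity check of the two formulas, the following minimal Java sketch (the class and method names are mine, not from the book) evaluates both laws; with a serial fraction of 0.5 it reproduces the worked example in the next paragraph, up to rounding.

```java
public class SpeedupLaws {

    // Amdahl's law: Speedup(n) = 1 / (B + (1 - B) / n), where B is the strictly serial fraction.
    static double amdahl(double b, int n) {
        return 1.0 / (b + (1.0 - b) / n);
    }

    // Gustafson's law: Speedup(P) = P - alpha * (P - 1), where alpha = a / (a + b) is the serial fraction.
    static double gustafson(double alpha, int p) {
        return p - alpha * (p - 1);
    }

    public static void main(String[] args) {
        double serialFraction = 0.5;   // one serial unit and one parallel unit, as in the example below
        for (int n : new int[] {2, 4, 8}) {
            System.out.printf("n=%d  Amdahl=%.2f  Gustafson=%.2f%n",
                    n, amdahl(serialFraction, n), gustafson(serialFraction, n));
        }
    }
}
```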
Let's take the following example: let the serial component and the parallel component of execution be one unit each.

In Amdahl's Law the strictly serial fraction of code is B (equal to 0.5). For two processors, the speedup T(2) is given by:

T(2) = 1 / (0.5 + (1 - 0.5) / 2) = 1.33

Similarly, for four and eight processors, the speedup is given by:

T(4) = 1.6 and T(8) = 1.77

Adding more processors, for example when n tends to infinity, the maximum speedup obtained is only 2.

On the other hand, in Gustafson's law, α = 1/(1+1) = 0.5 (which is also the serial fraction of the code). The speedup for two processors is given by:

Speedup(2) = 2 - 0.5*(2 - 1) = 1.5

Similarly, for four and eight processors, the speedup is given by:

Speedup(4) = 2.5 and Speedup(8) = 4.5

The following figure shows the workload scaling of Gustafson's law compared to Amdahl's law with a constant workload:

Comparison of Amdahl's and Gustafson's Law

OpenCL is all about parallel programming, and Gustafson's law fits this book very well, as we will be dealing with OpenCL for data-parallel applications. Workloads which are data parallel in nature can easily increase the data set and take advantage of scalable platforms by adding more compute units. For example, more pixels can be computed as more compute units are added.

Different parallel programming techniques

There are several different forms of parallel computing, such as bit-level, instruction-level, data, and task parallelism. This book will largely focus on data and task parallelism using heterogeneous devices. We just coined a term, heterogeneous devices. How do we tackle complex tasks "in parallel" using different types of computer architecture? Why do we need OpenCL when there are many (already defined) open standards for parallel computing? To answer this, let us discuss the pros and cons of the different parallel computing frameworks.

OpenMP

OpenMP is an API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran. It is prevalent only on multi-core computer platforms with a shared memory subsystem. A basic implementation of the OpenMP parallel directive is as follows:

#pragma omp parallel
{
  body;
}

When you build the preceding code using the OpenMP shared library, libgomp expands it to something similar to the following code:

void subfunction (void *data)
{
  use data;
  body;
}

setup data;
GOMP_parallel_start (subfunction, &data, num_threads);
subfunction (&data);
GOMP_parallel_end ();

The signature of GOMP_parallel_start is:

void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)

The OpenMP directives make it easy for the developer to modify existing code to exploit a multicore architecture. OpenMP, though a great parallel programming tool, does not support parallel execution on heterogeneous devices, and the requirement of a multicore architecture with a shared memory subsystem does not make it cost effective.

MPI

Message Passing Interface (MPI) has an advantage over OpenMP in that it can run on either shared or distributed memory architectures. Distributed memory computers are less expensive than large shared memory computers. But MPI has its own drawbacks, with inherent programming and debugging challenges. One major disadvantage of the MPI parallel framework is that performance is limited by the communication network between the nodes.
Supercomputers have a massive number of processors which are interconnected using a high-speed network connection, or are organized as computer clusters, where the processors are in close proximity to each other. In clusters, there is an expensive, dedicated data bus for data transfers across the computers. MPI is extensively used in most of these compute monsters called supercomputers.

OpenACC

The OpenACC Application Program Interface (API) describes a collection of compiler directives to specify loops and regions of code in standard C, C++, and Fortran to be offloaded from a host CPU to an attached accelerator, providing portability across operating systems, host CPUs, and accelerators. OpenACC is similar to OpenMP in terms of program annotation, but unlike OpenMP, which can only be accelerated on CPUs, OpenACC programs can also be accelerated on a GPU or on other accelerators. OpenACC aims to overcome the drawbacks of OpenMP by making parallel programming possible across heterogeneous devices. The OpenACC standard describes directives and APIs to accelerate applications. The ease of programming and the ability to scale existing code to use heterogeneous processors warrant a great future for OpenACC programming.

CUDA

Compute Unified Device Architecture (CUDA) is a parallel computing architecture developed by NVIDIA for graphics processing and GPGPU (general-purpose GPU) programming. There is a fairly good developer community following for the CUDA software framework. Unlike OpenCL, which is supported on GPUs from many vendors and even on many other devices such as IBM's Cell B.E. processor or TI's DSP processors, CUDA is supported only on NVIDIA GPUs. Due to this lack of generalization, and the focus on a very specific hardware platform from a single vendor, OpenCL is gaining traction.

CUDA or OpenCL?

CUDA is more proprietary and vendor specific, but it has its own advantages. It is easier to learn and to start writing code in CUDA than in OpenCL, due to its simplicity. Optimization of CUDA is more deterministic across a platform, since fewer platforms are supported, and by a single vendor only. It has simplified a few programming constructs and mechanisms. So, for a quick start, and if you are sure that you can stick to one device (GPU) from a single vendor, that is NVIDIA, CUDA can be a good choice.

OpenCL, on the other hand, is supported on hardware from several vendors, and that hardware varies extensively even in its basic architecture, which creates the requirement of understanding some slightly more complicated concepts before starting OpenCL programming. Also, due to the support for a huge range of hardware, although an OpenCL program is portable, it may lose optimization when ported from one platform to another.

The kernel development, where most of the effort goes, is practically identical between the two languages, so one should not worry too much about which one to choose; choose the language which is convenient. But remember that your OpenCL application will be vendor agnostic. This book aims at attracting more developers to OpenCL.

There are many libraries which use OpenCL programming for acceleration. Some of them are MAGMA, clAMDBLAS, clAMDFFT, the BOLT C++ Template library, and JACKET, which accelerates MATLAB on GPUs. Besides these, there are also C++ and Java bindings available for OpenCL. Once you have figured out how to write your important "kernels", it is trivial to port them to either OpenCL or CUDA. A kernel is a computation code which is executed by an array of threads.
CUDA also has a vast set of CUDA-accelerated libraries, such as CUBLAS, CUFFT, CUSPARSE, Thrust, and so on. But it may not take long to port these libraries to OpenCL.

RenderScript

RenderScript is also an API specification, targeted at 3D rendering and general-purpose compute operations on the Android platform. Android apps can accelerate their performance by using these APIs. It is also a cross-platform solution. When an app is run, the scripts are compiled into machine code for the device. This device can be a CPU, a GPU, or a DSP. The choice of which device to run it on is made at runtime. If a platform does not have a GPU, the code may fall back to the CPU. Only Android supports this API specification as of now. The execution model of RenderScript is similar to that of OpenCL.

Hybrid parallel computing model

Parallel programming models have their own advantages and disadvantages. With the advent of many different types of computer architectures, there is a need to use multiple programming models to achieve high performance. For example, one may want to use MPI as the message passing framework, and then at each node level use OpenCL, CUDA, OpenMP, or OpenACC.

Besides all the above programming models, many compilers such as Intel ICC, GCC, and Open64 provide auto-parallelization options, which make the programmer's job easier and exploit the underlying hardware architecture without the need to know any parallel computing framework. Compilers are known to be good at providing instruction-level parallelism, but tackling data-level or task-level auto-parallelism has its own limitations and complexities.

Introduction to OpenCL

The OpenCL standard was first introduced by Apple, and later on became part of the open standards organization "Khronos Group". It is a non-profit industry consortium creating open standards for the authoring and acceleration of parallel computing, graphics, dynamic media, computer vision, and sensor processing on a wide variety of platforms and devices. The goal of OpenCL is to make certain types of parallel programming easier, and to provide vendor-agnostic, hardware-accelerated parallel execution of code.

OpenCL (Open Computing Language) is the first open, royalty-free standard for general-purpose parallel programming of heterogeneous systems. It provides a uniform programming environment for software developers to write efficient, portable code for high-performance compute servers, desktop computer systems, and handheld devices using a diverse mix of multi-core CPUs, GPUs, and DSPs. OpenCL gives developers a common set of easy-to-use tools to take advantage of any device with an OpenCL driver (processors, graphics cards, and so on) for the processing of parallel code. By creating an efficient, close-to-the-metal programming interface, OpenCL forms the foundation layer of a parallel computing ecosystem of platform-independent tools, middleware, and applications.

We mentioned vendor agnostic; yes, that is what OpenCL is about. The different vendors here can be AMD, Intel, NVIDIA, ARM, TI, and so on. The following diagram shows the different vendors and hardware architectures which use the OpenCL specification to leverage the hardware capabilities:

The heterogeneous system

The OpenCL framework defines a language to write "kernels". These kernels are functions which are capable of running on different compute devices.
OpenCL defines an extended C language for writing compute kernels, and a set of APIs for creating and managing these kernels. The compute kernels are compiled with a runtime compiler, which compiles them on-the-fly during host application execution for the targeted device. This enables the host application to take advantage of all the compute devices in the system with a single set of portable compute kernels.

Based on your interest and hardware availability, you might want to do OpenCL programming with a "host and device" combination of "CPU and CPU" or "CPU and GPU". Both have their own programming strategy. On CPUs you can run very large kernels, as the CPU architecture supports out-of-order instruction-level parallelism and has large caches. For GPUs you will be better off writing small kernels for better performance.

Hardware and software vendors

There are various hardware vendors who support OpenCL. Every OpenCL vendor provides OpenCL runtime libraries. These runtimes are capable of running only on their specific hardware architectures. Not only across different vendors, but even within a single vendor, there may be different types of architectures which might need a different approach towards OpenCL programming. Now let's discuss the various hardware vendors who provide an implementation of OpenCL to exploit their underlying hardware.

Advanced Micro Devices, Inc. (AMD)

With the launch of the AMD A-Series APU, one of the industry's first Accelerated Processing Units (APU), AMD is leading the effort of integrating both the x86_64 CPU and GPU dies in one chip. It has four cores of CPU processing power, and also a four- or five-SIMD-engine graphics core, depending on the silicon part which you wish to buy. The following figure shows the block diagram of the AMD APU architecture:

AMD architecture diagram—© 2011, Advanced Micro Devices, Inc.

An AMD GPU consists of a number of Compute Units (CU), and each CU has 16 ALUs. Further, each ALU is a VLIW4 SIMD processor and can execute a bundle of four or five independent instructions. Each CU can be issued a group of 64 work items, which forms the work group (wavefront). AMD Radeon™ HD 6XXX graphics processors use this design. The following figure shows the HD 6XXX series compute unit, which has 16 SIMD engines, each of which has four processing elements:

AMD Radeon HD 6XXX Series SIMD Engine—© 2011, Advanced Micro Devices, Inc.

Starting with the AMD Radeon HD 7XXX series of graphics processors, there were significant architectural changes. AMD introduced the new Graphics Core Next (GCN) architecture. The following figure shows a GCN compute unit, which has four SIMD engines, each 16 lanes wide:

GCN Compute Unit—© 2011, Advanced Micro Devices, Inc.

A group of these compute units forms an AMD HD 7XXX graphics processor. In GCN, each CU includes four separate SIMD units for vector processing. Each of these SIMD units simultaneously executes a single operation across 16 work items, but each can be working on a separate wavefront.

Apart from the APUs, AMD also provides discrete graphics cards. The latest families of graphics cards, HD 7XXX and beyond, use the GCN architecture.

NVIDIA®

One of NVIDIA's GPU architectures is codenamed "Kepler". The GeForce® GTX 680 is one Kepler architectural silicon part. Each Kepler GPU consists of different configurations of Graphics Processing Clusters (GPC) and streaming multiprocessors.
The GTX 680 consists of four GPCs and eight SMXs, as shown in the following figure:

NVIDIA Kepler architecture—GTX 680, © NVIDIA®

The Kepler architecture is part of the GTX 6XX and GTX 7XX families of NVIDIA discrete cards. Prior to Kepler, NVIDIA had the Fermi architecture, which was part of the GTX 5XX family of discrete and mobile graphics processing units.

Intel®

Intel's OpenCL implementation is supported in the Sandy Bridge and Ivy Bridge processor families. The Sandy Bridge family architecture is analogous to AMD's APU: these processor architectures also integrate a GPU into the same silicon as the CPU. Intel changed the design of the L3 cache and allowed the graphics cores to access the L3, which is also called the last level cache. It is because of this L3 sharing that graphics performance is good on Intel. Each of the CPU cores, including the graphics execution units, is connected via a ring bus. Also, each execution unit is a true parallel scalar processor. Sandy Bridge provides the graphics engines HD 2000, with six Execution Units (EU), and HD 3000 (12 EUs), and Ivy Bridge provides HD 2500 (six EUs) and HD 4000 (16 EUs). The following figure shows the Sandy Bridge architecture with a ring bus, which acts as an interconnect between the cores and the HD graphics:

Intel Sandy Bridge architecture—© Intel®

ARM Mali™ GPUs

ARM also provides GPUs by the name of Mali graphics processors. The Mali T6XX series of processors comes with two, four, or eight graphics cores. These graphics engines deliver graphics compute capability to entry-level smartphones, tablets, and smart TVs. The following diagram shows the Mali T628 graphics processor:

ARM Mali—T628 graphics processor, © ARM

The Mali T628 has eight shader cores, or graphics cores. These cores also support the RenderScript APIs besides supporting OpenCL.

Besides the four key competitors, companies such as TI (DSP), Altera (FPGA), and Oracle are providing OpenCL implementations for their respective hardware. We suggest you get hold of the benchmark performance numbers of the different processor architectures we discussed, and try to compare the performance numbers of each of them. This is an important first step towards comparing different architectures, and in the future you might want to select a particular OpenCL platform based on your application workload.

Moving Your Application to Production

Packt
16 Dec 2013
9 min read
We will start by examining the Sencha Cmd compiler.

Compiling with Sencha Cmd

This section focuses on using Sencha Cmd to compile our Ext JS 4 application for deployment within a Web Archive (WAR) file. The goal of the compilation process is to create a single JavaScript file that contains all of the code needed for the application, including all the Ext JS 4 dependencies.

The index.html file that was created during the application skeleton generation is structured as follows:

<!DOCTYPE HTML>
<html>
<head>
    <meta charset="UTF-8">
    <title>TTT</title>
    <!-- <x-compile> -->
        <!-- <x-bootstrap> -->
            <link rel="stylesheet" href="bootstrap.css">
            <script src="ext/ext-dev.js"></script>
            <script src="bootstrap.js"></script>
        <!-- </x-bootstrap> -->
        <script src="app.js"></script>
    <!-- </x-compile> -->
</head>
<body></body>
</html>

The open and close tags of the x-compile directive enclose the part of the index.html file where the Sencha Cmd compiler will operate. The only declarations that should be contained in this block are script tags. The compiler processes all of the scripts within the x-compile directive, searching for dependencies based on the Ext.define, requires, or uses directives.

An exception to this is the ext-dev.js file. This file is considered a "bootstrap" file for the framework and is not processed in the same way. The compiler ignores the files in the x-bootstrap block, and their declarations are removed from the final compiler-generated page.

The first step in the compilation process is to examine and parse all the JavaScript source code and analyze any dependencies. To do this the compiler needs to identify all the source folders in the application. Our application has two source folders: the Ext JS 4 sources in webapp/ext/src and the 3T application sources in webapp/app. These folder locations are specified using the -sdk and -classpath arguments of the compile command:

sencha -sdk {path-to-sdk} compile -classpath={app-sources-folder} page -yui -in {index-page-to-compile} -out {output-file-location}

For our 3T application the compile command is as follows:

sencha -sdk ext compile -classpath=app page -yui -in index.html -out build/index.html

This command performs the following actions:

The Sencha Cmd compiler examines all the folders specified by the -classpath argument. The -sdk directory is automatically included for scanning.

The page command then includes all of the script tags in index.html that are contained in the x-compile block.

After identifying the content of the app directory and the index.html page, the compiler analyzes the JavaScript code and determines what is ultimately needed for inclusion in a single JavaScript file representing the application.

A modified version of the original index.html file is written to build/index.html.

All of the JavaScript files needed by the new index.html file are concatenated, compressed using the YUI Compressor, and written to the build/all-classes.js file.

The sencha compile command must be executed from within the webapp directory, which is the root of the application and the directory containing the index.html file. All the arguments supplied to the sencha compile command can then be relative to the webapp directory.

Open a command prompt (or terminal window on a Mac) and navigate to the webapp directory of the 3T project.
Executing the sencha compile command as shown earlier in this section will result in the following output:

Opening the webapp/build folder in NetBeans should now show the two newly generated files: index.html and all-classes.js. The all-classes.js file will contain all the required Ext JS 4 classes in addition to all the 3T application classes. Attempting to open this file in NetBeans will result in the warning "The file seems to be too large to open safely...", but you can open the file in a text editor to see the concatenated and minified content.

Opening the build/index.html page in NetBeans will display the following screenshot:

You can now open the build/index.html file in the browser after running the application, but the result may surprise you: the layout that is presented will depend on the browser, but regardless, you will see that the CSS styling is missing. The CSS files required by our application need to be moved outside the <!-- <x-compile> --> directive. But where are the styles coming from? It is now time to briefly delve into Ext JS 4 themes and the bootstrap.css file.

Ext JS 4 theming

Ext JS 4 themes leverage Syntactically Awesome StyleSheets (SASS) and Compass (http://compass-style.org/) to enable the use of variables and mixins in stylesheets. Almost all of the styles for Ext JS 4 components can be customized, including colors, fonts, borders, and backgrounds, by simply changing the SASS variables. SASS is an extension of CSS that allows you to keep large stylesheets well organized; a very good overview and reference can be found at http://sass-lang.com/documentation/file.SASS_REFERENCE.html.

Theming an Ext JS 4 application using Compass and SASS is beyond the scope of this book. Sencha Cmd allows easy integration with these technologies to build SASS projects; however, the SASS language and syntax is a steep learning curve in its own right. Ext JS 4 theming is very powerful, and minor changes to the existing themes can quickly change the appearance of your application. You can find out more about Ext JS 4 theming at http://docs.sencha.com/extjs/4.2.2/#!/guide/theming.

The bootstrap.css file was created with the default theme definition during the generation of the application skeleton. The content of the bootstrap.css file is as follows:

@import 'ext/packages/ext-theme-classic/build/resources/ext-theme-classic-all.css';

This file imports the ext-theme-classic-all.css stylesheet, which is the default "classic" Ext JS theme. All of the available themes can be found in the ext/packages directory of the Ext JS 4 SDK.

Changing to a different theme is as simple as changing the bootstrap.css import. Switching to the neptune theme would require the following bootstrap.css definition:

@import 'ext/packages/ext-theme-neptune/build/resources/ext-theme-neptune-all.css';

This modification will change the appearance of the application to the Ext JS "neptune" theme as shown in the following screenshot:

We will change the bootstrap.css file definition to use the gray theme:

@import 'ext/packages/ext-theme-gray/build/resources/ext-theme-gray-all.css';

This will result in the following appearance:

You may experiment with different themes, but should note that not all of the themes may be as complete as the classic theme; minor changes may be required to fully utilize the styling of some components. We will keep the gray theme for our index.html page.
This will allow us to differentiate the (original) index.html page from the new pages that will be created in the following section using the classic theme.

Compiling for production use

Until now we have only worked with the Sencha Cmd-generated index.html file. We will now create a new index-dev.html file for our development environment. The development file will be a copy of the index.html file without the bootstrap.css file. We will reference the default classic theme in the index-dev.html file as follows:

<!DOCTYPE HTML>
<html>
<head>
    <meta charset="UTF-8">
    <title>TTT</title>
    <link rel="stylesheet" href="ext/packages/ext-theme-classic/build/resources/ext-theme-classic-all.css">
    <link rel="stylesheet" href="resources/styles.css">
    <!-- <x-compile> -->
        <!-- <x-bootstrap> -->
            <script src="ext/ext-dev.js"></script>
            <script src="bootstrap.js"></script>
        <!-- </x-bootstrap> -->
        <script src="app.js"></script>
    <!-- </x-compile> -->
</head>
<body></body>
</html>

Note that we have moved the stylesheet definitions out of the <!-- <x-compile> --> directive. If you are using the downloaded source code for the book, you will have the resources/styles.css file and the resources directory structure available. The stylesheet and associated images in the resources directory contain the 3T logos and icons. We recommend you download the full source code now for completeness.

We can now modify the Sencha Cmd compile command to use the index-dev.html file and output the generated, compiled file to index-prod.html in the webapp directory:

sencha -sdk ext compile -classpath=app page -yui -in index-dev.html -out index-prod.html

This command will generate the index-prod.html file and the all-classes.js file in the webapp directory as shown in the following screenshot:

The index-prod.html file references the stylesheets directly and uses the single compiled and minified all-classes.js file. You can now run the application and browse the index-prod.html file as shown in the following screenshot:

You should notice a significant increase in the speed with which the logon window is displayed, as all the JavaScript classes are loaded from the single all-classes.js file. The index-prod.html file will be used by developers to test the compiled all-classes.js file.

Accessing the individual pages will now allow us to differentiate between environments:

The index.html page was generated by Sencha Cmd and has been configured to use the gray theme in bootstrap.css. This page is no longer needed for development; use index-dev.html instead. You can access this page at http://localhost:8080/index.html

The index-dev.html page uses the classic theme stylesheet included outside the <!-- <x-compile> --> directive. Use this file for application development. Ext JS 4 will dynamically load source files as required. You can access this page at http://localhost:8080/index-dev.html

The index-prod.html file is dynamically generated by the Sencha Cmd compile command. This page uses the all-classes.js all-in-one compiled JavaScript file with the classic theme stylesheet. You can access this page at http://localhost:8080/index-prod.html

Parallel Programming Patterns

Packt
25 Nov 2013
22 min read
Patterns in programming mean a concrete and standard solution to a given problem. Usually, programming patterns are the result of people gathering experience, analyzing common problems, and providing solutions to them. Since parallel programming has existed for quite a long time, there are many different patterns for programming parallel applications. There are even special programming languages to make programming of specific parallel algorithms easier. However, this is where things start to become increasingly complicated. In this article, I will provide a starting point from which you will be able to study parallel programming further. We will review very basic, yet very useful, patterns that are quite helpful for many common situations in parallel programming.

The first is about using a shared-state object from multiple threads. I would like to emphasize that you should avoid shared state as much as possible. A shared state is really bad when you write parallel algorithms, but on many occasions it is inevitable. We will find out how to delay the actual computation of an object until it is needed, and how to implement different scenarios to achieve thread safety.

The next two recipes will show how to create a structured parallel data flow. We will review a concrete case of a producer/consumer pattern, which is called a Parallel Pipeline. We are going to implement it using a blocking collection first, and then see how helpful another library from Microsoft for parallel programming, TPL DataFlow, can be.

The last pattern that we will study is the Map/Reduce pattern. In the modern world, this name can mean very different things. Some people consider map/reduce not as a common approach to any problem but as a concrete implementation for large, distributed cluster computations. We will find out the meaning behind the name of this pattern and review some examples of how it might work in the case of small parallel applications.

Implementing Lazy-evaluated shared states

This recipe shows how to program a lazy-evaluated, thread-safe shared state object.

Getting ready

To start with this recipe, you will need a running Visual Studio 2012. There are no other prerequisites. The source code for this recipe can be found at the Packt site.

How to do it...

For implementing lazy-evaluated shared states, perform the following steps:

Start Visual Studio 2012.

Create a new C# Console Application project.
In the Program.cs file, add the following using directives: using System; using System.Threading; using System.Threading.Tasks; Add the following code snippet below the Main method: static async Task ProcessAsynchronously() { var unsafeState = new UnsafeState(); Task[] tasks = new Task[4]; for (int i = 0; i < 4; i++) { tasks[i] = Task.Run(() => Worker(unsafeState)); } await Task.WhenAll(tasks); Console.WriteLine(" --------------------------- "); var firstState = new DoubleCheckedLocking(); for (int i = 0; i < 4; i++) { tasks[i] = Task.Run(() => Worker(firstState)); } await Task.WhenAll(tasks); Console.WriteLine(" --------------------------- "); var secondState = new BCLDoubleChecked(); for (int i = 0; i < 4; i++) { tasks[i] = Task.Run(() => Worker(secondState)); } await Task.WhenAll(tasks); Console.WriteLine(" --------------------------- "); var thirdState = new Lazy<ValueToAccess>(Compute); for (int i = 0; i < 4; i++) { tasks[i] = Task.Run(() => Worker(thirdState)); } await Task.WhenAll(tasks); Console.WriteLine(" --------------------------- "); var fourthState = new BCLThreadSafeFactory(); for (int i = 0; i < 4; i++) { tasks[i] = Task.Run(() => Worker(fourthState)); } await Task.WhenAll(tasks); Console.WriteLine(" --------------------------- "); } static void Worker(IHasValue state) { Console.WriteLine("Worker runs on thread id {0}",Thread .CurrentThread.ManagedThreadId); Console.WriteLine("State value: {0}", state.Value.Text); } static void Worker(Lazy<ValueToAccess> state) { Console.WriteLine("Worker runs on thread id {0}",Thread .CurrentThread.ManagedThreadId); Console.WriteLine("State value: {0}", state.Value.Text); } static ValueToAccess Compute() { Console.WriteLine("The value is being constructed on athread id {0}", Thread.CurrentThread.ManagedThreadId); Thread.Sleep(TimeSpan.FromSeconds(1)); return new ValueToAccess(string.Format("Constructed on thread id {0}", Thread.CurrentThread.ManagedThreadId)); } class ValueToAccess { private readonly string _text; public ValueToAccess(string text) { _text = text; } public string Text { get { return _text; } } } class UnsafeState : IHasValue { private ValueToAccess _value; public ValueToAccess Value { get { if (_value == null) { _value = Compute(); } return _value; } } } class DoubleCheckedLocking : IHasValue { private object _syncRoot = new object(); private volatile ValueToAccess _value; public ValueToAccess Value { get { if (_value == null) { lock (_syncRoot) { if (_value == null) _value = Compute(); } } return _value; } } } class BCLDoubleChecked : IHasValue { private object _syncRoot = new object(); private ValueToAccess _value; private bool _initialized = false; public ValueToAccess Value { get { return LazyInitializer.EnsureInitialized( ref _value, ref _initialized, ref _syncRoot,Compute); } } } class BCLThreadSafeFactory : IHasValue { private ValueToAccess _value; public ValueToAccess Value { get { return LazyInitializer.EnsureInitialized(ref _value,Compute); } } } interface IHasValue { ValueToAccess Value { get; } } Add the following code snippet inside the Main method: var t = ProcessAsynchronously(); t.GetAwaiter().GetResult(); Console.WriteLine("Press ENTER to exit"); Console.ReadLine(); Run the program. How it works... The first example show why it is not safe to use the UnsafeState object with multiple accessing threads. We see that the Construct method was called several times, and different threads use different values, which is obviously not right. 
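(The race just described is not specific to .NET. Purely as a point of comparison, and not part of the recipe's C# code, a rough Java sketch of the same unsafe lazy getter looks like the following; two threads can both observe a null field and both run the expensive computation.)

```java
// Hypothetical Java analogue of the UnsafeState class above; names are illustrative only.
class UnsafeLazyState {
    private ValueToAccess value;            // no volatile, no lock

    ValueToAccess get() {
        if (value == null) {                // two threads can both see null here...
            value = compute();              // ...and both end up constructing the value
        }
        return value;
    }

    private ValueToAccess compute() {
        System.out.println("Constructed on thread " + Thread.currentThread().getName());
        return new ValueToAccess("some text");
    }

    static class ValueToAccess {
        final String text;
        ValueToAccess(String text) { this.text = text; }
    }
}
```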
To fix this, we can use a lock when reading the value and, if it is not initialized, create it first. That will work, but using a lock with every read operation is not efficient. To avoid using locks every time, there is a traditional approach called the double-checked locking pattern. We check the value a first time, and if it is not null, we avoid unnecessary locking and just use the shared object. However, if it has not been constructed yet, we take the lock and then check the value a second time, because it could have been initialized between our first check and the lock operation. If it is still not initialized, only then do we compute the value. We can clearly see that this approach works in the second example—there is only one call to the Construct method, and the first-called thread defines the shared object state.

Please note that if the lazy-evaluated object implementation is thread-safe, it does not automatically mean that all its properties are thread-safe as well. If you add, for example, a public int property to the ValueToAccess object, it will not be thread-safe; you still have to use interlocked constructs or locking to ensure thread safety.

This pattern is very common, and that is why there are several classes in the Base Class Library to help us. First, we can use the LazyInitializer.EnsureInitialized method, which implements the double-checked locking pattern inside. However, the most comfortable option is to use the Lazy<T> class, which allows us to have thread-safe, lazy-evaluated shared state out of the box. The next two examples show us that they are equivalent to the second one, and the program behaves the same. The only difference is that since LazyInitializer is a static class, we do not have to create a new instance of a class as we do in the case of Lazy<T>, and therefore the performance in the first case will be better in some scenarios.

The last option is to avoid locking at all, if we do not care about the Construct method. If it is thread-safe and has no side effects and/or serious performance impacts, we can just run it several times but use only the first constructed value. The last example shows the described behavior, and we can achieve this result by using another LazyInitializer.EnsureInitialized method overload.

Implementing Parallel Pipeline with BlockingCollection

This recipe will describe how to implement a specific scenario of a producer/consumer pattern, called Parallel Pipeline, using the standard BlockingCollection data structure.

Getting ready

To begin this recipe, you will need a running Visual Studio 2012. There are no other prerequisites. The source code for this recipe can be found at the Packt site.

How to do it...

To understand how to implement Parallel Pipeline using BlockingCollection, perform the following steps:

Start Visual Studio 2012.

Create a new C# Console Application project.
In the Program.cs file, add the following using directives: using System; using System.Collections.Concurrent; using System.Linq; using System.Threading; using System.Threading.Tasks; Add the following code snippet below the Main method: private const int CollectionsNumber = 4; private const int Count = 10; class PipelineWorker<TInput, TOutput> { Func<TInput, TOutput> _processor = null; Action<TInput> _outputProcessor = null; BlockingCollection<TInput>[] _input; CancellationToken _token; public PipelineWorker( BlockingCollection<TInput>[] input, Func<TInput, TOutput> processor, CancellationToken token, string name) { _input = input; Output = new BlockingCollection<TOutput>[_input.Length]; for (int i = 0; i < Output.Length; i++) Output[i] = null == input[i] ? null : new BlockingCollection<TOutput>(Count); _processor = processor; _token = token; Name = name; } public PipelineWorker( BlockingCollection<TInput>[] input, Action<TInput> renderer, CancellationToken token, string name) { _input = input; _outputProcessor = renderer; _token = token; Name = name; Output = null; } public BlockingCollection<TOutput>[] Output { get; private set; } public string Name { get; private set; } public void Run() { Console.WriteLine("{0} is running", this.Name); while (!_input.All(bc => bc.IsCompleted) && !_token.IsCancellationRequested) { TInput receivedItem; int i = BlockingCollection<TInput>.TryTakeFromAny( _input, out receivedItem, 50, _token); if (i >= 0) { if (Output != null) { TOutput outputItem = _processor(receivedItem); BlockingCollection<TOutput>.AddToAny(Output,outputItem); Console.WriteLine("{0} sent {1} to next,on thread id {2}", Name, outputItem,Thread.CurrentThread.ManagedThreadId); Thread.Sleep(TimeSpan.FromMilliseconds(100)); } else { _outputProcessor(receivedItem); } } else { Thread.Sleep(TimeSpan.FromMilliseconds(50)); } } if (Output != null) { foreach (var bc in Output) bc.CompleteAdding(); } } } Add the following code snippet inside the Main method: var cts = new CancellationTokenSource(); Task.Run(() => { if (Console.ReadKey().KeyChar == 'c') cts.Cancel(); }); var sourceArrays = new BlockingCollection<int>[CollectionsNumber]; for (int i = 0; i < sourceArrays.Length; i++) { sourceArrays[i] = new BlockingCollection<int>(Count); } var filter1 = new PipelineWorker<int, decimal> (sourceArrays, (n) => Convert.ToDecimal(n * 0.97), cts.Token, "filter1" ); var filter2 = new PipelineWorker<decimal, string> (filter1.Output, (s) => String.Format("--{0}--", s), cts.Token, "filter2" ); var filter3 = new PipelineWorker<string, string> (filter2.Output, (s) => Console.WriteLine("The final result is {0} onthread id {1}", s, Thread.CurrentThread.ManagedThreadId), cts.Token,"filter3"); try { Parallel.Invoke( () => { Parallel.For(0, sourceArrays.Length * Count,(j, state) => { if (cts.Token.IsCancellationRequested) { state.Stop(); } int k = BlockingCollection<int>.TryAddToAny(sourceArrays, j); if (k >= 0) { Console.WriteLine("added {0} to source data onthread id {1}", j, Thread.CurrentThread.ManagedThreadId); Thread.Sleep(TimeSpan.FromMilliseconds(100)); } }); foreach (var arr in sourceArrays) { arr.CompleteAdding(); } }, () => filter1.Run(), () => filter2.Run(), () => filter3.Run() ); } catch (AggregateException ae) { foreach (var ex in ae.InnerExceptions) Console.WriteLine(ex.Message + ex.StackTrace); } if (cts.Token.IsCancellationRequested) { Console.WriteLine("Operation has been canceled!Press ENTER to exit."); } else { Console.WriteLine("Press ENTER to exit."); } Console.ReadLine(); Run the program. 
How it works...

In the preceding example, we implement one of the most common parallel programming scenarios. Imagine that we have some data that has to pass through several computation stages, each of which takes a significant amount of time. The later computation requires the results of the earlier one, so we cannot run them in parallel. If we had only one item to process, there would not be many possibilities to enhance the performance. However, if we run many items through the same set of computation stages, we can use a Parallel Pipeline technique. This means that we do not have to wait until all items pass through the first computation stage before going to the next one. It is enough to have just one item finish a stage: we move it to the next stage, and meanwhile the next item is being processed by the previous stage, and so on. As a result, we almost have parallel processing, shifted by the time required for the first item to pass through the first computation stage.

Here, we use four collections for each processing stage, illustrating that we can process every stage in parallel as well. The first step we take is to provide a possibility to cancel the whole process by pressing the C key. We create a cancellation token and run a separate task to monitor the C key. Then we define our pipeline. It consists of three main stages. The first stage is where we put the initial numbers on the first four collections that serve as the item source for the rest of the pipeline. This code is inside the Parallel.For loop, which in turn is inside the Parallel.Invoke statement, as we run all the stages in parallel; the initial stage runs in parallel as well.

The next stage is defining our pipeline elements. The logic is defined inside the PipelineWorker class. We initialize the worker with the input collection, provide a transformation function, and then run the worker in parallel with the other workers. This way we define two workers, or filters, because they filter the initial sequence. One of them turns an integer into a decimal value, and the second one turns a decimal into a string. Finally, the last worker just prints every incoming string to the console. Everywhere we print the running thread ID to see how everything works. Besides this, we added artificial delays, so the item processing looks more natural, as if we were really using heavy computations.

As a result, we see exactly the expected behavior. First, some items are created on the initial collections. Then we see that the first filter starts to process them, and as they are being processed, the second filter starts to work, and finally the item goes to the last worker that prints it to the console.

Implementing Parallel Pipeline with TPL DataFlow

This recipe shows how to implement a Parallel Pipeline pattern with the help of the TPL DataFlow library.

Getting ready

To start with this recipe, you will need a running Visual Studio 2012. There are no other prerequisites. The source code for this recipe can be found at the Packt site.

How to do it...

To understand how to implement Parallel Pipeline with TPL DataFlow, perform the following steps:

Start Visual Studio 2012.

Create a new C# Console Application project.

Add references to the Microsoft TPL DataFlow NuGet package: right-click on the References folder in the project, select the Manage NuGet Packages... menu option, and add the Microsoft TPL DataFlow NuGet package.
You can use the search option in the Manage NuGet Packages dialog as follows: In the Program.cs file, add the following using directives: using System; using System.Threading; using System.Threading.Tasks; using System.Threading.Tasks.Dataflow; Add the following code snippet below the Main method: async static Task ProcessAsynchronously() { var cts = new CancellationTokenSource(); Task.Run(() => { if (Console.ReadKey().KeyChar == 'c') cts.Cancel(); }); var inputBlock = new BufferBlock<int>( new DataflowBlockOptions { BoundedCapacity = 5, CancellationToken = cts.Token }); var filter1Block = new TransformBlock<int, decimal>( n => { decimal result = Convert.ToDecimal(n * 0.97); Console.WriteLine("Filter 1 sent {0} to the nextstage on thread id {1}", result, Thread.CurrentThread.ManagedThreadId); Thread.Sleep(TimeSpan.FromMilliseconds(100)); return result; }, new ExecutionDataflowBlockOptions {MaxDegreeOfParallelism = 4, CancellationToken =cts.Token }); var filter2Block = new TransformBlock<decimal, string>( n => { string result = string.Format("--{0}--", n); Console.WriteLine("Filter 2 sent {0} to the nextstage on thread id {1}", result, Thread.CurrentThread.ManagedThreadId); Thread.Sleep(TimeSpan.FromMilliseconds(100)); return result; }, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4, CancellationToken =cts.Token }); var outputBlock = new ActionBlock<string>( s => { Console.WriteLine("The final result is {0} on threadid {1}", s, Thread.CurrentThread.ManagedThreadId); }, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4, CancellationToken =cts.Token }); inputBlock.LinkTo(filter1Block, new DataflowLinkOptions {PropagateCompletion = true }); filter1Block.LinkTo(filter2Block, new DataflowLinkOptions { PropagateCompletion = true }); filter2Block.LinkTo(outputBlock, new DataflowLinkOptions { PropagateCompletion = true }); try { Parallel.For(0, 20, new ParallelOptions {MaxDegreeOfParallelism = 4, CancellationToken =cts.Token }, i => { Console.WriteLine("added {0} to source data on threadid {1}", i, Thread.CurrentThread.ManagedThreadId); inputBlock.SendAsync(i).GetAwaiter().GetResult(); }); inputBlock.Complete(); await outputBlock.Completion; Console.WriteLine("Press ENTER to exit."); } catch (OperationCanceledException) { Console.WriteLine("Operation has been canceled!Press ENTER to exit."); } Console.ReadLine(); } Add the following code snippet inside the Main method: var t = ProcessAsynchronously(); t.GetAwaiter().GetResult(); Run the program. How it works... In the previous recipe, we have implemented a Parallel Pipeline pattern to process items through sequential stages. It is quite a common problem, and one of the proposed ways to program such algorithms is using a TPL DataFlow library from Microsoft. It is distributed via NuGet, and is easy to install and use in your application. The TPL DataFlow library contains different type of blocks that can be connected with each other in different ways and form complicated processes that can be partially parallel and sequential where needed. To see some of the available infrastructure, let's implement the previous scenario with the help of the TPL DataFlow library. First, we define the different blocks that will be processing our data. Please note that these blocks have different options that can be specified during their construction; they can be very important. For example, we pass the cancellation token into every block we define, and when we signal the cancellation, all of them will stop working. 
We start our process with a BufferBlock. This block holds items to pass to the next blocks in the flow. We restrict it to a capacity of five items by specifying the BoundedCapacity option value. This means that when there are five items in this block, it will stop accepting new items until one of the existing items passes to the next blocks.

The next block type is TransformBlock. This block is intended for a data transformation step. Here we define two transformation blocks: one of them creates decimals from integers, and the second one creates a string from a decimal value. There is a MaxDegreeOfParallelism option for this block, specifying the maximum number of simultaneous worker threads.

The last block is the ActionBlock type. This block runs a specified action on every incoming item. We use this block to print our items to the console.

Now we link these blocks together with the help of the LinkTo methods. Here we have a simple sequential data flow, but it is possible to create schemes that are more complicated. Here we also provide DataflowLinkOptions with the PropagateCompletion property set to true. This means that when a step completes, it will automatically propagate its results and exceptions to the next stage. Then we start adding items to the buffer block in parallel, calling the block's Complete method when we finish adding new items. Then we wait for the last block to complete. In case of a cancellation, we handle OperationCanceledException and cancel the whole process.

Implementing Map/Reduce with PLINQ

This recipe will describe how to implement the Map/Reduce pattern using PLINQ.

Getting ready

To begin with this recipe, you will need a running Visual Studio 2012. There are no other prerequisites. The source code for this recipe can be found at the Packt site.

How to do it...

To understand how to implement Map/Reduce with PLINQ, perform the following steps:

Start Visual Studio 2012.

Create a new C# Console Application project.

In the Program.cs file, add the following using directives:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
"; Add the following code snippet inside the Main method: var q = textToParse.Split(delimiters) .AsParallel() .MapReduce( s => s.ToLower().ToCharArray() , c => c , g => new[] {new {Char = g.Key, Count = g.Count()}}) .Where(c => char.IsLetterOrDigit(c.Char)) .OrderByDescending( c => c.Count); foreach (var info in q) { Console.WriteLine("Character {0} occured in the text {1}{2}", info.Char, info.Count, info.Count == 1 ? "time" : "times"); } Console.WriteLine(" -------------------------------------------"); const string searchPattern = "en"; var q2 = textToParse.Split(delimiters) .AsParallel() .Where(s => s.Contains(searchPattern)) .MapReduce( s => new [] {s} , s => s , g => new[] {new {Word = g.Key, Count = g.Count()}}) .OrderByDescending(s => s.Count); Console.WriteLine("Words with search pattern '{0}':",searchPattern); foreach (var info in q2) { Console.WriteLine("{0} occured in the text {1} {2}",info.Word, info.Count, info.Count == 1 ? "time" : "times"); } int halfLengthWordIndex = textToParse.IndexOf(' ',textToParse.Length/2); using(var sw = File.CreateText("1.txt")) { sw.Write(textToParse.Substring(0, halfLengthWordIndex)); } using(var sw = File.CreateText("2.txt")) { sw.Write(textToParse.Substring(halfLengthWordIndex)); } string[] paths = new[] { ".\" }; Console.WriteLine(" ------------------------------------------------"); var q3 = paths .SelectMany(p => Directory.EnumerateFiles(p, "*.txt")) .AsParallel() .MapReduce( path => File.ReadLines(path).SelectMany(line =>line.Trim(delimiters).Split (delimiters)),word => string.IsNullOrWhiteSpace(word) ? 't' :word.ToLower()[0], g => new [] { new {FirstLetter = g.Key, Count = g.Count()}}) .Where(s => char.IsLetterOrDigit(s.FirstLetter)) .OrderByDescending(s => s.Count); Console.WriteLine("Words from text files"); foreach (var info in q3) { Console.WriteLine("Words starting with letter '{0}'occured in the text {1} {2}", info.FirstLetter,info.Count, info.Count == 1 ? "time" : "times"); } Add the following code snippet after the Program class definition: static class PLINQExtensions { public static ParallelQuery<TResult> MapReduce<TSource,TMapped, TKey, TResult>( this ParallelQuery<TSource> source, Func<TSource, IEnumerable<TMapped>> map, Func<TMapped, TKey> keySelector, Func<IGrouping<TKey, TMapped>, IEnumerable<TResult>> reduce) { return source.SelectMany(map) .GroupBy(keySelector) .SelectMany(reduce); } } Run the program. How it works... The Map/Reduce functions are another important parallel programming pattern. It is suitable for a small program and large multi-server computations. The meaning of this pattern is that you have two special functions to apply to your data. The first of them is the Map function. It takes a set of initial data in a key/value list form and produces another key/value sequence, transforming the data to the comfortable format for further processing. Then we use another function called Reduce. The Reduce function takes the result of the Map function and transforms it to a smallest possible set of data that we actually need. To understand how this algorithm works, let's look through the recipe. First, we define a relatively large text in the string variable: textToParse. We need this text to run our queries on. Then we define our Map/Reduce implementation as a PLINQ extension method in the PLINQExtensions class. We use SelectMany to transform the initial sequence to the sequence we need by applying the Map function. This function produces several new elements from one sequence element. 
Then we choose how we group the new sequence with the keySelector function, and we use GroupBy with this key to produce an intermediate key/value sequence. The last thing we do is apply Reduce to the resulting grouped sequence to get the result. In our first example, we split the text into separate words, chop each word into a character sequence with the help of the Map function, and group the result by the character value. The Reduce function finally transforms the sequence into key/value pairs, where we have a character and the number of times it was used in the text, ordered by usage. Therefore, we are able to count each character's appearances in the text in parallel (since we use PLINQ to query the initial data). The next example is quite similar, but now we use PLINQ to filter the sequence, leaving only the words containing our search pattern, and we then get all those words sorted by their usage in the text. Finally, the last example uses file I/O. We save the sample text on the disk, splitting it into two files. Then we define the Map function as producing a number of strings from the directory name, which are all the words from all the lines in all the text files in the initial directory. Then we group those words by their first letter (filtering out the empty strings) and use Reduce to see which letter is most often used as the first letter of a word in the text. What is nice is that we can easily turn this program into a distributed one just by using other implementations of the Map and Reduce functions, and we are still able to use PLINQ with them to keep our program easy to read and maintain. Summary In this article we covered implementing lazy-evaluated shared states, implementing Parallel Pipeline using BlockingCollection and TPL DataFlow, and finally we covered the implementation of Map/Reduce with PLINQ. Resources for Article: Further resources on this subject: Simplifying Parallelism Complexity in C# [Article] Watching Multiple Threads in C# [Article] Low-level C# Practices [Article]

Exploring RubyMine

Packt
25 Nov 2013
8 min read
(For more resources related to this topic, see here.) Managing your implants (Should know) Now, we will learn how to utilize RubyMine to manage your Gems and external libraries used for your Ruby on Rails programs. Getting ready Start RubyMine and open the HelloWorld project that we created earlier. We will be adding an implant to enhance your assimilation process into the collective. How to do it... We will now use a Gem in RubyMine by showing the following simple example using the Prawn Gem: Right-click on the HelloWorld project folder in the left project panel and add a new file to your project. Name it pdf.rb. Add the following code to this new file: require 'prawn' Prawn::Document.generate("hello.pdf") do |pdf| pdf.font "Courier" pdf.stroke_color = "ff0000" pdf.line_width = 5 pdf.stroke do pdf.circle([300, 300], 100); pdf.circle([350, 320], 20); pdf.circle([260, 325], 20); pdf.curve [270, 250], [340, 250], :bounds => [[270, 270],[275, 220]] end pdf.move_down(300) pdf.font_size = 36 pdf.fill_color "000000" pdf.draw_text("You will be assimilated!", :at => [40, 40]) pdf.line_width = 1 pdf.stroke_color = "0000ff" pdf.move_to([0, 0]) grid = 11 num_lines = 51 size = (num_lines - 1) * grid pdf.stroke do 51.times do |idx| pdf.line([0, idx*grid], [size, idx*grid]) pdf.line([idx*grid, 0], [idx*grid, size]) end end end Now, right-click on the open source file and select Run pdf. You should get an error similar to in 'require': cannot load such file -- prawn (LoadError). This means that you do not have the required technology to continue the assimilation process. We need to add some Ruby Gems to allow our program to continue. Open the Settings panel and select the Ruby SDK and Gems panel from the left-hand list. This is where we also changed the version of Ruby that we are using. This panel will allow us to install some specific Ruby Gems that we need. Hit the button Install Gems... and you will see a screen like the following: Start typing the name of the Gem you wish to install, which in this case is prawn and the search will begin immediately. Scroll to the right Gem. Select the Gem in the list at the left-hand side and then hit the button Install. RubyMine will then run the Gem system and install the Gem and all of its appropriate dependencies into your system. When it is complete, select the prawn gem in the list on the right and you will see the description panel filled with various aspects of the gem for your browsing pleasure. Once completed, go back and re-run the pdf.rb program. Since this program actually generates a PDF file, we need to find where it saved the file. In the project window on the left, hit the icon that looks like the following: This will synchronize the files inside the folder with the project list. You should now be able to see the file hello.pdf in the project window. Double-click on this and you will see this file in whichever application you have on your computer that displays PDF files. You should see something like the following: How it works The code that we typed in is a simple use of the Prawn Gem. It first requires the Gem code and then starts a block of code that will begin generating the hello.pdf file on the disk. Each command then sets properties of the text, size, colors, circles, and finally draws a grid of lines on the page. Creating your first progeny (Should know) Now it is time to create and run a simple Ruby on Rails application using RubyMine exclusively. Getting ready Open RubyMine and navigate to File | New Project. 
From this window, you can now select what type of project to begin with. RubyMine gives you several options that you can use later. Right now, select Rails application, as shown in the following screenshot: Hit OK and you will see the next settings window which allows you to select which version of Rails you would like to use in your project, along with the JavaScript library and database configurations. Select the checkbox for Preconfigure for selected database and choose the sqlite3 option, as shown in the following screenshot. Leave the rest as default and hit OK. How to do it... Now that you have created a new Rails project, RubyMine takes over and starts using Rails to generate all of the files and configuration necessary for a complete project, including running the command bundle install at the end. This will make sure that you have all the proper Gems installed for your Rails project. Now, we can run what we have and see that everything is installed correctly, using the following steps: At the top of the window, select the green arrow to run the Development: Progeny configuration. Running, in this case, means that it will start the default Rails webserver, Webrick, and begin listening on port 3000 for web browser requests. Once it is running, open your favorite browser and go to the address http://localhost:3000, and you should see the opening Rails Welcome Aboard screen. Click on the link About your application's environment, you can see some details about your installation including the various version numbers of Rails and Ruby, and the database that your application will be using. Now, we can build some features for our app. Let's begin by creating an interface to our database of species that we have already assimilated. For this, we can use the Rails generators to build the scaffolding of our application that gives us something to build on. All of the functionalities of Rails can be accessed from within the RubyMine environment, as shown in the following steps, so it is not necessary to go out to a command window at all: To run the generators, we just need to navigate to Tools | Run Rails Generator. As you type, the list is filtered to match your selection. Select the scaffold generator and hit Return: The following window gives us the opportunity to name our table and the various fields that it will contain. The database table will become the name of our model as well. Type the following into the top text box: Species name:string identification:integer assimilated:Boolean Leave the other settings at their default and hit OK, as shown in the following screenshot: Now if you reload your browser window, you should see an error like ActiveRecord:: PendingMigrationError. Oops! We forgot to do something after creating the code for the database tables. We need to run the migrations that will create the actual tables. Navigate to Tools | Run Rake Task… and start typing db:migrate. Just like the generator window, the various rake tasks will begin to be filtered by what you type. Hit Return and then select the defaults in the next window Latest migration and the migrations will be run for you. Now that the tables are created, point your browser to a new location, http://localhost:3000/species. You will see the index page for your Species database table, similar to the following screenshot: Click on the links and add some species to your database. 
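For reference, the generator and the rake task that RubyMine runs here correspond to plain Rails commands, and the migration it creates will look roughly like the following sketch (the timestamp in the filename and the exact column helpers may differ slightly depending on your Rails version):

# Equivalent command-line calls that RubyMine wraps for you:
#   rails generate scaffold Species name:string identification:integer assimilated:boolean
#   rake db:migrate

# db/migrate/20131125000000_create_species.rb (illustrative timestamp)
class CreateSpecies < ActiveRecord::Migration
  def change
    create_table :species do |t|
      t.string  :name
      t.integer :identification
      t.boolean :assimilated

      t.timestamps
    end
  end
end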
The scaffolding that we generated, produced all of the CRUD (Create Read Update Delete) forms and screens necessary for managing our Species database table, without typing any commands in our terminal or leaving RubyMine at all. Of course it has no style, but who cares about style? We are the Borg and only care about technology! Ok. Maybe we can spruce it up a little bit. Lets add a Gem called Bourbon that helps with the following: Open the Gemfile from the project window and add the following line to it: gem 'bourbon' Now we need to install the Gem by running bundle install. We can do this directly from RubyMine by navigating to Tools | Bundler | Install. Hit the Install button on the window that shows and the Gem will be installed correctly. Now we can edit the CSS file that is in the App/Assets folder called species.css.scss. Open this file and add the following CSS code to the file: @import "bourbon"; p { @include linear-gradient(to top, white, steelblue); } Reload the page that shows one of the species that you created such as http://localhost:3000/species/1, as shown in the following screenshot. Now isn't that much better? There's more... Once complete, look at the configuration menu and you will notice that you now have some additional commands in this shortcut menu. RubyMine remembers the tasks and commands that you have executed for you to make it more efficient, as you will likely be running these commands often. Summary This is a technology that is worthy of assimilating to its fullest extent. In this article we covered Managing your implants and Creating your first progeny, which shows create and run a simple Ruby on Rails application using RubyMine. Resources for Article: Further resources on this subject: Getting Ready for RubyMotion [Article] Introducing RubyMotion and the Hello World app [Article] Xen Virtualization: Work with MySQL Server, Ruby on Rails, and Subversion [Article]

Working with Java SE projects (Should know)

Packt
25 Nov 2013
5 min read
(For more resources related to this topic, see here.) How to do it... The following steps will show you how to experiment with JRebel on a Java SE application: Create a simple Swing application and test it. Enable JRebel on your project. Try live code injection with JRebel. How it works... We will create a simple Swing application, a frame that contains a label and a button. The action associated with the button will update the label. We will use JRebel to change the button's action without recompiling or restarting the application. Start NetBeans and create a new Java application project. Create a package, delete the default main class, and use the NetBeans assistant to create a new JFrame object by navigating to File | New File | Swing GUI Forms | JFrame Form, and then choose a name and validate. The Projects view Use the Palette tab on your right to drag-and-drop a JLabel and a JButton to your JForm (these two components are located in the Swing Controls section, under the Label and Button names). The Palette tab Now, double-click on the button you have generated. You will be redirected to the jButton1ActionPerformed method of your JFrame object. private void jButton1ActionPerformed (java.awt.event.ActionEvent evt) { // TODO add your handling code here: } Insert a code to update JLabel of your form, as shown in the following code: private void jButton1ActionPerformed (java.awt.event.ActionEvent evt) { // TODO add your handling code here: jLabel1.setText("Hello World"); } The application is now ready for testing. You can test it by pressing F6 key. NetBeans will ask you for the name of the main class. Select the JFrame form and validate. Use the jButton1 button to update the label. The first Hello World window Do not close the application immediately, and return to the code editor to update the jButton1 button's action in order to display a new message and save the new code. private void jButton1ActionPerformed (java.awt.event.ActionEvent evt) { // TODO add your handling code here: // jLabel1.setText("Hello World"); // previous message jLabel1.setText("Hello Planet"); // new message } Hit the jButton1 button in your application again; the change is not effective. Actually, your application hasn't been updated and it can't reflect code changes immediately. You will have to restart your application to see the new behavior, and this is normal. Now, let's see how JRebel will accelerate the development. You may have noticed the presence of a JRebel node in the Projects view. Right-click on it and choose Generate rebel.xml. The JRebel XML The rebel.xml file will contain the path of the compiled classes. JRebel will use these compiled classes to update your running application. Also, that means we have to ensure the Compile on Save feature is turned on for your project. Every time you apply changes to a Java class, NetBeans will recompile it and JRebel will update your running application with it. To proceed, go to the Projects Properties panel and enable the Compile on Save feature. Enabling the Compile on Save feature To finish, you have to activate JRebel (locate the JRebel button on the NetBeans toolbar). Enabling the JRebel button Now, restart your application and hit the jButton1 button a first time. 
Return to the code editor, modify the button's action to display a new message, save, and carefully observe the NetBeans console; you will see two interesting messages: Firstly, JRebel indicates if your license is active Secondly, you can see a message that indicates a class which has been reloaded, as follows: 2013-07-28 15:08:03 JRebel: Reloading class 'demo.ant.OurJFrame'. 2013-07-28 15:08:03 JRebel: Reloading class 'demo.ant. OurJFrame$2'. 2013-07-28 15:08:03 JRebel: Reloading class 'demo.ant. OurJFrame$1'. Hit the jButton1 button again; the message changes without restarting the application. It works! You can now continue to update and save code in order to see the changes. Try to update the displayed message again, it works again; you don't need to restart your application again (except for some changes that are not supported by JRebel). There's more... Ant is not the only build system, so you may want to use JRebel with Maven-based projects. Maven- and Ant-based projects are handled the same way: generate a rebel.xml file, activate the Compile on Save feature, enable JRebel, and simply update your code without restarting your application. You don't have to deal with the Maven pom.xml file. Also, you will still see the two same JRebel messages in the NetBeans console. They are useful to check the JRebel license and class reloading. Last but not least, to activate the Compile On Save feature, the Compile On Save menu should be set to For application execution only (or better). Maven's Compile On Save feature Summary In this article, we learnt how to work with JRebel on the Ant Java SE projects. We also learnt the about Maven Java SE projects support. Resources for Article: Further resources on this subject: The Business Layer (Java EE 7 First Look) [Article] Developing Secure Java EE Applications in GlassFish [Article] Service Oriented Java Business Integration - What's & Why's [Article]

Introducing Salesforce Chatter

Packt
21 Nov 2013
5 min read
(For more resources related to this topic, see here.) An overview of cloud computing Cloud computing is a subscription-based service that provides us with computing resources and networked storage space. It allows you to access your information anytime and from anywhere. The only requirement is that one must have an Internet connection. That's all. If you have a cloud-based setup, there is no need to maintain the server in the future. We can think of cloud computing as similar to our e-mail account. Think of your accounts such as Gmail, Hotmail, and so on. We just need a web browser and an Internet connection to access our information. We do not need separate software to access our e-mail account; it is different from the text editor installed on our computer. There is no need of physically moving storage and information; everything is up and running over there and not at our end. It is the same with cloud; we choose what has to be stored and accessed on cloud. You also don't have to pay an employee or contractor to maintain the server since it is based on the cloud. While traditional technologies and computer setup require you to be physically present at the same place to access information, cloud removes this barrier and allows us to access information from anywhere. Cloud computing helps businesses to perform better by allowing employees to work from remote locations (anywhere on the globe). It provides mobile access to information and flexibility to the working of a business organization. Depending on your needs, we can subscribe to the following type of clouds: Public cloud: This cloud can be accessed by subscribers who have an Internet connection and access to cloud storage Private cloud: This is accessed by a limited group of people or members of an organization Community cloud: This is a cloud that is shared between two or more organizations that have similar requirements Hybrid cloud: This is a combination of at least two clouds, where the clouds are a combination of public, private, or community Depending on your need, you have the ability to subscribe to a specific cloud provider. Cloud providers follow the pay-as-you-go method. It means that, if your technological needs change, you can purchase more and continue working on cloud. You do not have to worry about the storage configuration and management of servers because everything is done by your cloud provider. An overview of salesforce.com Salesforce.com is the leader in pay-as-you-go enterprise cloud computing. It specializes in CRM software products for sales and customer services and supplies products for building and running business apps. Salesforce has recently developed a social networking product called Chatter for its business apps. With the concept of no software or hardware required, we are up and running and seeing immediate positive effects on our business. It is a platform for creating and deploying apps for social enterprise. This does not require us to buy or manage servers, software, or hardware. Here you can focus fully on building apps that include mobile functionality, business processes, reporting, and search. All apps run on secure servers and proven services that scale, tune, and back up data automatically. Collaboration in the past Collaboration always plays a key role to improve business outcomes; it is a crucial necessity in any professional business. The central meaning of communication has changed over time. 
With changes in people's individual living situations as well as advancements in technology, how one communicates with the rest of the world has been altered. A century or two ago, people could communicate using smoke signals, carrier pigeons and drum beats, or speak to one another, that is, face-to-face communication. As the world and technology developed, we found that we could send longer messages from long distances with ease. This has caused a decline in face-to-face-interaction and a substantial growth in communication via technology. The old way of face-to-face interaction impacted the business process as there was a gap between the collaboration of the client, company, or employees situated in distant places. So it reduced the profit, ROI, as well as customer satisfaction. In the past, there was no faster way available for communication, so collaboration was a time-consuming task for business; its effect was the loss of client retention. Imagine a situation where a sales representative is near to closing a deal, but the decision maker is out of the office. In the past, there was no fast/direct way to communicate. Sometimes this lack of efficient communication impacted the business negatively, in addition to the loss of potential opportunities. Summary In this article we learned cloud computing and Salesforce.com, and discussed about collaboration in the new era by comparing it to the ancient age. We discussed and introduced Salesforce Chatter and its effect on ROI (Return of Investment). Resources for Article: Further resources on this subject: Salesforce CRM Functions [Article] Configuration in Salesforce CRM [Article] Django 1.2 E-commerce: Data Integration [Article]

Exploring Model View Controller

Packt
21 Nov 2013
5 min read
(For more resources related to this topic, see here.) Many applications start from something small, such as several hundred lines of code prototype of a toy application written in one evening. When you add new features and the application code clutters, it becomes much harder to understand how it works and to modify it, especially for a newcomer. The Model-View-Controller (MVC) pattern serves as the basis for software architecture that will be easily maintained and modified. The main idea of MVC is about separating an application into three parts: model, view, and controller. There is an easy way to understand MVC—the model is the data and its business logic, the view is the window on the screen, and the controller is the glue between the two. While the view and controller depend on the model, the model is independent of the presentation or the controller. This is a key feature of the division. It allows you to work with the model, and hence, the business logic of the application, regardless of the visual presentation. The following diagram shows the flow of interaction between the user, controller, model, and view. Here, a user makes a request to the application and the controller does the initial processing. After that it manipulates the model, creating, updating, or deleting some data there. The model returns some result to the controller, that passes the result to view, which renders data to the user. The MVC pattern gained wide popularity in web development. Many Python web frameworks, such as web2py, Pyramid, Django (uses a flavor of MVC called MVP), Giotto, and Kiss use it. Let's review key components of the MVC pattern in more detail. Model – the knowledge of the application The model is a cornerstone of the application because, while the view and controller depend on the model, the model is independent of the presentation or the controller. The model provides knowledge: data, and how to work with that data. The model has a state and methods for changing its state but does not contain information on how this knowledge can be visualized. This independence makes working independently, covering the model with tests and substituting the controllers/views without changing the business logic of an application. The model is responsible for maintaining the integrity of the program's data, because if that gets corrupted then it's game over for everyone. The following are recommendations for working with models: Strive to perform the following for models: Create data models and interface of work with them Validate data and report all errors to the controller Avoid working directly with the user interface View – the appearance of knowledge View receives data from the model through the controller and is responsible for its visualization. It should not contain complex logic; all such logic should go to the models and controllers. If you need to change the method of visualization, for example, if you need your web application to be rendered differently depending on whether the user is using a mobile phone or desktop browser, you can change the view accordingly. This can include HTML, XML, console views, and so on. 
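To make this separation concrete before looking at the recommendations, here is a minimal, framework-free Python sketch of the three parts; all class and method names are illustrative and not taken from any particular framework:

class TaskModel:
    """Model: owns the data and the business logic; knows nothing about display."""
    def __init__(self):
        self._tasks = []

    def add_task(self, title):
        if not title:
            raise ValueError("Task title must not be empty")
        self._tasks.append(title)

    def all_tasks(self):
        return list(self._tasks)


class ConsoleTaskView:
    """View: only renders what it is given; no business logic."""
    def render(self, tasks):
        for index, title in enumerate(tasks, start=1):
            print("%d. %s" % (index, title))


class TaskController:
    """Controller: passes user input to the model and results to the view."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def handle_add(self, title):
        try:
            self.model.add_task(title)
        except ValueError as error:
            print("Error: %s" % error)  # report errors; do not render data here
        self.view.render(self.model.all_tasks())


if __name__ == "__main__":
    controller = TaskController(TaskModel(), ConsoleTaskView())
    controller.handle_add("Write the report")
    controller.handle_add("Review the code")

Swapping ConsoleTaskView for an HTML or XML view would not require touching TaskModel at all, which is exactly the independence described above.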
The recommendation for working with views are as follows: Strive to perform the following for views: Try to keep them simple; use only simple comparisons and loops Avoid doing the following in views: Accessing the database directly Using any logic other than loops and conditional statements (if-then-else) because the separation of concerns requires all such complex logic to be performed in models Controller – the glue between the model and view The direct responsibility of the controllers is to receive data from the request and send it to other parts of the system. Only in this case, the controller is "thin" and is intended only as a bridge (glue layer) between the individual components of the system. Let's look at the following recommendations for working with controllers: Strive to perform the following in controllers: Pass data from user requests to the model for processing, retrieving and saving the data Pass data to views for rendering Handle all request errors and errors from models Avoid the following in controllers: Render data Work with the database and business logic directly Thus, in one statement: We need smart models, thin controllers, and dumb views. Benefits of using the MVC MVC brings a lot of positive attributes to your software, including the following: Decomposition allows you to logically split the application into three relatively independent parts with loose coupling and will decrease its complexity. Developers typically specialize in one area, for example, a developer might create a user interface or modify the business logic. Thus, it's possible to limit their area of responsibility to only some part of code. MVC makes it possible to change visualization, thus modifying the view without changes in the business logic. MVC makes it possible to change business logic, thus modifying the model without changes in visualization. MVC makes it possible to change the response to a user action (clicking on the button with the mouse, data entry) without changing the implementation of views; it is sufficient to use a different controller. Summary It is important to separate the areas of responsibility to maintain loose coupling and for the maintainability of the software. MVC divides the application into three relatively independent parts: model, view, and controller. The model is all about knowledge, data, and business logic. The view is about presentation to the end users, and it's important to keep it simple. The controller is the glue between the model and the view, and it's important to keep it thin. Resources for Article: Further resources on this subject: Getting Started with Spring Python [Article] Python Testing: Installing the Robot Framework [Article] Getting Up and Running with MySQL for Python [Article]

Using memcached with Python

Packt
20 Nov 2013
5 min read
(For more resources related to this topic, see here.) If you want to make such a connection, there are several clients available for you. The most popular ones are: python-memcached: This is a pure-python implementation of the memcached client (implemented 100 percent in Python). It offers good performance and is extremely simple to install and use. pylibmc: This is a Python wrapper on the libmemcached C/C++ library, it offers excellent performance, thread safety, and light memory usage, yet it's not as simple as python-memcached to install, since you will need to have the libmemcached library compiled and installed on your system. Twisted memcache: This client is part of the Python twisted event-driven networking engine for Python. It offers a reactive code structure and excellent performance as well, but it is not as simple to use as pylibmc or python-memcached but it fits perfectly if your entire application is built on twisted. In this recipe, we will be using python-memcached for the sake of simplicity and since other clients have almost the same API, it does not make much difference from a developer's perspective. Getting ready It's always a good idea to create virtualenv for your experiments to keep your experiments contained and not to pollute the global system with the packages you install. You can create virtualenv easily: virtualenv memcache_experiments source memcache_experiments/bin/activate We will need to install python-memcached first, using the pip package manager on our system: sudo pip install python-memcached How to do it... Let's start with a simple set and get script: import memcache client = memcache.Client([('127.0.0.1', 11211)]) sample_obj = {"name": "Soliman", "lang": "Python"} client.set("sample_user", sample_obj, time=15) print "Stored to memcached, will auto-expire after 15 seconds" print client.get("sample_user") Save the script into a file called memcache_test1.py and run it using python memcache_test1.py. On running the script you should see something like the following: Stored to memcached, will auto-expire after 15 seconds {'lang': 'Python', 'name': 'Soliman'} Let's now try other memcached features: import memcache client = memcache.Client([('127.0.0.1', 11211)]) client.set("counter", "10") client.incr("counter") print "Counter was incremented on the server by 1, now it's %s" % client.get("counter") client.incr("counter", 9) print "Counter was incremented on the server by 9, now it's %s" % client.get("counter") client.decr("counter") print "Counter was decremented on the server by 1, now it's %s" % client.get("counter") The output of the script looks like the following: Counter was incremented on the server by 1, now it's 11 Counter was incremented on the server by 9, now it's 20 Counter was decremented on the server by 1, now it's 19 The incr and decr methods allow you to specify a delta value or to by default increment/decrement by 1. Alright, now let's sync a Python dict to memcached with a certain prefix: import memcache client = memcache.Client([('127.0.0.1', 11211)]) data = {"some_key1": "value1", "some_key2": "value2"} client.set_multi(data, time=15, key_prefix="pfx_") print "saved the dict with prefix pfx_" print "getting one key: %s" % client.get("pfx_some_key1") print "Getting all values: %s" % client.get_multi(["some_key1", "some_ key2"], key_prefix="pfx_") How it works... 
In this script, we are connecting to the memcached server(s) using the Client constructor, and then we are using the set method to store a standard Python dict as the value of the "sample_user" key. After that we use the get method to retrieve the value. The client automatically serializes the Python dict before storing it in memcached and deserializes the object after getting it back from the memcached server. In the second script, we play with some memcached server features that we have not tried before. The incr and decr methods allow you to increment and decrement integer values directly on the server automatically. Then we use get_multi/set_multi, which allow us to set or get multiple key/value pairs in a single request. They also allow us to add a certain prefix to all the keys during the set or get operations. The output of the last script should look like the following: saved the dict with prefix pfx_ getting one key: value1 Getting all values: {'some_key1': 'value1', 'some_key2': 'value2'} There's more... In the Client constructor, we specified the server hostname and port in a tuple (host, port) and passed that in a list of servers. This allows you to connect to a cluster of memcached servers by adding more servers to this list. For example: client = memcache.Client([('host1', 1121), ('host2', 1121), ('host3', 1122)]) You can also specify custom picklers/unpicklers to tell the memcached client how to serialize or deserialize Python types using your custom algorithm. Summary Thus we learned how to connect to memcached servers from your Python application. Resources for Article: Further resources on this subject: Working with Databases [Article] Testing and Debugging in Grok 1.0: Part 2 [Article] Debugging AJAX using Microsoft AJAX Library, Internet Explorer and Mozilla Firefox [Article]

Understanding outside-in

Packt
20 Nov 2013
6 min read
(For more resources related to this topic, see here.) In this category, developers pick a story or use case and drill into low-level unit tests. Basically, the objective is to obtain high-level design. The different system interfaces are identified and abstracted. Once different layers/interfaces are identified, unit-level coding can be started. Here, developers look at the system boundary and create a boundary interface depending upon a use case/story. Then, collaborating classes are created. Mock objects can act as a collaborating class. This approach of development relies on code for abstraction. Acceptance Test-Driven Development (ATDD) falls into this category. FitNesse fixtures provide support for ATDD. This is stateful. It is also known as the top-down approach. An example of ATDD As a health professional (doctor), I should get the health information of all my critical patients who are admitted as soon as I'm around 100 feet from the hospital. Here, how patient information is periodically stored in the server, how GPS tracks the doctor, how the system registers the doctor's communication medium (sends report to the doctor by e-mail or text), and so on can be mocked out. A test will be conducted to deal with how to fetch patient data for a doctor and send the report quickly. Gradually, other components will be coded. The preceding figure represents the high-level components of the user story. Here, Report Dispatcher is notified when a doctor approaches a hospital, then the dispatcher fetches patient information for the doctor and sends him the patient record. Patient Information, Outbound Messaging Interface, and Location Sensor (GPS) parts are mocked, and the dispatcher acts as a collaborator. Now we have a requirement to calculate the taxable income from the total income, calculate the payable tax, and send an e-mail to the client with details. We have to gather the annual income, medical expense, house loan premium, life insurance details, provident fund deposit, and apply the following rule to calculate the taxable income: Up to USD 100,000 is not taxable and can be invested as medical insurance, provident fund, or house loan principal payment Up to USD 150,000 can be used as house rent or house loan interest Now we will build a TaxConsultant application using the outside-in style. Following are the steps: Create a new test class com.packtpub.chapter04.outside. in.TaxConsultantTest under the test source folder. We need to perform three tasks; that are, calculate the taxable income, the payable tax, and send an e-mail to a client. We will create a class TaxConsultant to perform these tasks. We will be using Mockito to mock out external behavior. Add a test to check that when a client has investment, then our consultant deducts an amount from the total income and calculates the taxable income. Add a test when_deductable_present_then_taxable_income_is_less_than_the_total_income() to verify it so that it can calculate the taxable income: @Testpublic void when_deductable_present_then_taxable_income_is_less_than_the_total_income () {TaxConsultant consultant = new TaxConsultant();} Add the class under the src source folder. Now we have to pass different amounts to the consultant. 
Create a method consult() and pass the following values: @Testpublic void when_deductable_present_then_taxable_income_is_less_than_the_total_income () {TaxConsultant consultant = new TaxConsultant();double totalIncome = 1200000;double homeLoanInterest = 150000;double homeLoanPrincipal =20000;double providentFundSavings = 50000;double lifeInsurancePremium = 30000;consultant.consult(totalIncome,homeLoanInterest,homeLoanPrincipal,providentFundSavings,lifeInsurancePremium);} In the outside-in approach, we mock out objects with interesting behavior. We will mock out taxable income behavior and create an interface named TaxbleIncomeCalculator. This interface will have a method to calculate the taxable income. We read that a long parameter list is code smell; we will refactor it later: public interface TaxbleIncomeCalculator {double calculate(double totalIncome, double homeLoanInterest,double homeLoanPrincipal, double providentFundSavings, doublelifeInsurancePremium);} Pass this interface to TaxConsultant as the constructor argument: @Testpublic void when_deductable_present_then_taxable_income_is_less_than_the_total_income () {TaxbleIncomeCalculator taxableIncomeCalculator = null;TaxConsultant consultant = new TaxConsultant(taxableIncomeCalculator); We need a tax calculator to verify that behavior. Create an interface called TaxCalculator: public interface TaxCalculator {double calculate(double taxableIncome);} Pass this interface to TaxConsultant: TaxConsultant consultant = new TaxConsultant(taxableIncomeCalculator,taxCalculator); Now, time to verify the collaboration. We will use Mockito to create mock objects from the interfaces. For now, the @Mock annotation creates a proxy mock object. In the setUp method, we will use MockitoAnnotations.initMocks(this); to create the objects: @Mock TaxbleIncomeCalculator taxableIncomeCalculator;@Mock TaxCalculator taxCalculator;TaxConsultant consultant;@Beforepublic void setUp() {MockitoAnnotations.initMocks(this);consultant= new TaxConsultant(taxableIncomeCalculator,taxCalculator);} Now in test, verify that the consultant class calls TaxableIncomeCalculator and TaxableIncomeCalculator makes a call to TaxCalculator. Mockito has the verify method to test that: verify(taxableIncomeCalculator, only())calculate(eq(totalIncome), eq(homeLoanInterest),eq(homeLoanPrincipal), eq(providentFundSavings),eq(lifeInsurancePremium));verify(taxCalculator,only()).calculate(anyDouble()); Here, we are verifying that the consultant class delegates the call to mock objects. only() checks that the method was called at least once on the mock object. eq() checks that the value passed to the mock object's method is equal to some value. Here, the test will fail since we don't have the calls. We will add the following code to pass the test: public class TaxConsultant {private final TaxbleIncomeCalculator taxbleIncomeCalculator;private final TaxCalculator calculator;public TaxConsultant(TaxbleIncomeCalculatortaxableIncomeCalculator, TaxCalculator calc) {this.taxbleIncomeCalculator =taxableIncomeCalculator;this.calculator = calc;}public void consult(double totalIncome, double homeLoanInterest,double homeLoanPrincipal, double providentFundSavings, doublelifeInsurancePremium) {double taxableIncome = taxbleIncomeCalculator.calculate(totalIncome,homeLoanInterest, homeLoanPrincipal,providentFundSavings,lifeInsurancePremium);double payableTax= calculator.calculate(taxableIncome);}} The test is being passed; we can now add another delegator for e-mail and call it EmailSender. 
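As a sketch of how the e-mail collaboration could be verified in the same mockist style (note that the EmailSender interface, its send method signature, and the three-argument TaxConsultant constructor shown here are assumptions for illustration, not code from the book):

// Illustrative collaborator; in outside-in style its shape is driven by the test.
public interface EmailSender {
    void send(String message);
}

// Inside TaxConsultantTest:
@Mock EmailSender emailSender;

@Test
public void consult_should_email_the_client_with_the_payable_tax() {
    TaxConsultant consultant = new TaxConsultant(taxableIncomeCalculator,
            taxCalculator, emailSender);

    consultant.consult(1200000, 150000, 20000, 50000, 30000);

    // The consultant should delegate the notification to the e-mail collaborator.
    verify(emailSender, only()).send(anyString());
}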
Our facade class is ready. Now we need to use TDD for each interface we extracted. Similarly, we can apply TDD for TaxableIncomeCalculator and EmailSender. Summary This article provided an overview of classical and mockist TDD. In classical TDD, real objects are used and integrated, and mocks are only preferred if a real object is not easy to instantiate. In mockist TDD, mocks have higher priority than real objects. Resources for Article: Further resources on this subject: Important features of Mockito [Article] Testing your App [Article] Mocking static methods (Simple) [Article]

Overview of Process Management in Microsoft Visio 2013

Packt
20 Nov 2013
6 min read
(For more resources related to this topic, see here.) When Visio was first conceived of over 20 years ago, its first stated marketing aim was to outsell ABC Flowcharter, the best-selling process diagramming tool at the time. Therefore, Visio had to have all of the features from the start that are core in the creation of flowcharts, namely the ability to connect one shape to another and to have the lines route themselves around shapes. Visio soon achieved its aim, and looked for other targets to reach. So, process flow diagrams have long been a cornerstone of Visio's popularity and appeal and, although there have been some usability improvements over the years, there have been few enhancements to turn the diagrams into models that can be managed efficiently. Microsoft Visio 2010 saw the introduction of two features, structured diagrams and validation rules, that make process management achievable and customizable, and Microsoft Visio 2013 sees these features enhanced. In this article, you will be introduced to the new features that have been added to Microsoft Visio to support structured diagrams and validation. You will see where Visio fits in the Process Management stack, and explore the relevant out of the box content. Exploring the new process management features in Visio 2013 Firstly, Microsoft Visio 2010 introduced a new Validation API for structured diagrams and provided several examples of this in use, for example with the BPMN (Business Process Modeling Notation) Diagram and Microsoft SharePoint Workflow templates and the improvements to the Basic Flowchart and Cross-Functional Flowchart templates, all of which are found in the Flowchart category. Microsoft Visio 2013 has updated the version of BPMN from 1.1 to 2.0, and has introduced a new SharePoint 2013 Workflow template, in addition to the 2010 one. Templates in Visio consist of a predefined Visio document that has one or more pages, and may have a series of docked stencils (usually positioned on the left-hand side of workspace area). The template document may have an associated list of add-ons that are active while it is in use, and, with Visio 2013 Professional edition, an associated list of structured diagram validation rulesets as well. Most of the templates that contain validation rules in Visio 2013 are in the Flowchart category, as seen in the following screenshot, with the exception being the Six Sigma template in the Business category. Secondly, the concept of a Subprocess was introduced in Visio 2010. This enables processes to hyperlink to other pages describing the subprocesses in the same document, or even across documents. This latter point is necessary if subprocesses are stored in a document library, such as Microsoft SharePoint. The following screenshot illustrates how an existing subprocess can be associated with a shape in a larger process, selecting an existing shape in the diagram, before selecting the existing page that it links to from the drop-down menu on the Link to Existing button. In addition, a subprocess page can be created from an existing shape, or a selection of shapes, in which case they will be moved to the newly-created page. There were also a number of ease-of-use features introduced in Microsoft Visio 2010 to assist in the creation and revision of process flow diagrams. 
These include: Easy auto-connection of shapes Aligning and spacing of shapes Insertion and deletion of connected shapes Improved cross-functional flowcharts Subprocesses An infinite page option, so you need not go over the edge of the paper ever again Microsoft Visio 2013 has added two more notable features: Commenting (a replacement for the old reviewer's comments) Co-authoring However, this book is not about teaching the user how to use these features, since there will be many other authors willing to show you how to perform tasks that only need to be explained once. This book is about understanding the Validation API in particular, so that you can create, or amend, the rules to match the business logic that your business requires. Reviewing Visio Process Management capabilities Microsoft Visio now sits at the top of the Microsoft Process Management Product Stack, providing a Business Process Analysis (BPA) or Business Process Modeling (BPM) tool for business analysts, process owners/participants, and line of business software architects/developers. Understanding the Visio BMP Maturity Model If we look at the Visio BPM Maturity Model that Microsoft has previously presented to its partners, then we can see that Visio 2013 has filled some of the gaps that were still there after Visio 2010. However, we can also see that there are plenty of opportunities for partners to provide solutions on top of the Visio platform. The maturity model shows how Visio initially provided the means to capture paper-drawn business processes into electronic format, and included the ability to encapsulate data into each shape and infer the relationship and order between elements through connectors. Visio 2007 Professional added the ability to easily link shapes, which represent processes, tasks, decisions, gateways, and so on with a data source. Along with that, data graphics were provided to enable shape data to be displayed simply as icons, data bars, text, or to be colored by value. This enriched the user experience and provided quicker visual representation of data, thus increasing the comprehension of the data in the diagrams. Generic templates for specific types of business modeling were provided. Visio had a built-in report writer for many versions, which provided the ability to export to Excel or XML, but Visio 2010 Premium introduced the concept of validation and structured diagrams, which meant that the information could be verified before exporting. Some templates for specific types of business modeling were provided. Visio 2010 Premium also saw the introduction of Visio Services on SharePoint that provided the automatic (without involving the Visio client) refreshing of data graphics that were linked to specific types of data sources. Throughout this book we will be going into detail about Level 5 (Validation) in Visio 2013, because it is important to understand the core capabilities provided in Visio 2013. We will then be able to take the opportunity to provide custom Business Rule Modeling and Visualization. Reviewing the foundations of structured diagramming A structured diagram is a set of logical relationships between items, where these relationships provide visual organization or describe special interaction behaviors between them. 
The Microsoft Visio team analyzed the requirements for adding structure to diagrams and came up with a number of features that needed to be added to the Visio product to achieve this: Container Management: The ability to add labeled boxes around shapes to visually organize them Callout Management: The ability to associate callouts with shapes to display notes List Management: To provide order to shapes within a container Validation API: The ability to test the business logic of a diagram Connectivity API: The ability to create, remove, or traverse connections easily The following diagram demonstrates the use of Containers and Callouts in the construction of a basic flowchart that has been validated using the Validation API, which in turn uses the Connectivity API.
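Although this article stays at the overview level, the Validation API mentioned above is exposed through the Visio object model. The following rough C# interop sketch of adding one custom rule is only an illustration of the general shape of that API; the rule set name, rule name, and ShapeSheet expressions are made up for this example and are not taken from the book:

using Visio = Microsoft.Office.Interop.Visio;

// Assumes 'document' is an open Visio.Document obtained from an add-in or automation client.
void AddSampleRule(Visio.Document document)
{
    // Each document carries a Validation object that holds rule sets and reported issues.
    Visio.ValidationRuleSet ruleSet = document.Validation.RuleSets.Add("My Business Rules");

    Visio.ValidationRule rule = ruleSet.Rules.Add("UnconnectedFlowchartShape");
    rule.Category = "Connectivity";
    rule.Description = "Flowchart shape is not connected to any other shape.";
    rule.TargetType = Visio.VisRuleTargets.visRuleTargetShape;

    // FilterExpression selects the shapes a rule applies to;
    // TestExpression must evaluate to TRUE for a shape to pass.
    rule.FilterExpression = "HASCATEGORY(\"Flowchart\")";
    rule.TestExpression = "AGGCOUNT(GLUEDSHAPES(0)) > 0";

    // Run the rules and report any issues.
    document.Validation.Validate(ruleSet, Visio.VisValidationFlags.visValidationDefault);
}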

Automated testing using Robotium

Packt
19 Nov 2013
10 min read
(For more resources related to this topic, see here.) Robotium framework Robotium is an open source automation testing framework that is used to write a robust and powerful black box for Android applications (the emphasis is mostly on black box test cases). It fully supports testing for native and hybrid applications. Native apps are live on the device, that is, designed for a specific platform and can be installed from the Google Play Store, whereas Hybrid apps are partly native and partly web apps. These can also be installed from the app store, but require the HTML to be rendered in the browser. Robotium is mostly used to automate UI test cases and internally uses run-time binding to Graphical User Interface (GUI) components. Robotium is released under the Apache License 2.0. It is free to download and can be easily used by individuals and enterprises and is built on Java and JUnit 3. It will be more appropriate to call Robotium an extension of the Android Test Unit Framework, available at http://developer.android.com/tools/testing/testing_android.html. Robotium can also work without the application, under the test's source code. The test cases written using Robotium can either be executed on the Android Emulator (Android Virtual Device (AVD))—we will see how to create an AVD during installation in the following section—or on a real Android device. Developers can write function, system, and acceptance test scenarios across multiple activities. It is currently the world's leading Automation Testing Framework, and many open source developers are contributing to introduce more and more exciting features in subsequent releases. The following screenshot is of the git repository website for the Robotium project: As Robotium is an open source project, anyone can contribute for the purpose of development and help in enhancing the framework with many more features. The Robotium source code is maintained at GitHub and can be accessed using the following link: https://github.com/jayway/robotium You just need to fork the project. Make all your changes in a clone project and click on Pull Request on your repository to tell core team members which changes to bring in. If you are new to the git environment, you can refer to the GitHub tutorial at the following link: https://help.github.com/ Robotium is like Selenium but for Android. This project was started in January 2010 by Renas Reda. He is the founder and main developer for Robotium. The project initiated with v1.0 and continues to be followed up with new releases due to new requirements. It has support for Android features such as activities, toasts, menus, context menus, web views, and remote controls. Let's see most of the Robotium features and benefits for Android test case developers. Features and benefits Automated testing using Robotium has many features and benefits. The triangularization workflow diagram between the user, Robotium, and the Android device clearly explains use cases between them: The features and benefits of Robotium are as follows: Robotium helps us to quickly write powerful test cases with minimal knowledge of the application under test. Robotium offers APIs to directly interact with UI controls within the Android application such as EditText, TextView, and Button. Robotium officially supports Android 1.6 and above versions. The Android platform is not modified by Robotium. The Robotium test can also be executed using command prompt. Robotium can be integrated smoothly with Maven or Ant. 
This helps to add Robotium to your project's build automation process. Screenshots can be captured in Robotium (an example screenshot is shown as follows): The test application project and the application project run on the same JVM, that is, Dalvik Virtual Machine (DVM). It's possible to run Robotium without a source code. Robotium can work with other code coverage measurement tools, such as Cobertura and Emma. Robotium can detect the messages that are shown on the screen (Toasts). Robotium supports Android features such as activities, menu, and context menu. Robotium automated tests can be implemented quickly. Robotium is built on JUnit, because of which it inherits all JUnit's features. The Robotium framework automatically handles multiple activities in an Android application. Robotium test cases are prominently readable, in comparison to standard instrumentation tests. Scrolling activity is automatically handled by the Robotium framework. Recent versions of Robotium support hybrid applications. Hybrid applications use WebViews to present the HTML and JavaScript files in full screen, using the native browser rendering engine. API set Web support has been added to the Robotium framework since Robotium 4.0 released. Robotium has full support for hybrid applications. There are some key differences between native and hybrid applications. Let's go through them one by one, as follows: Native Application Hybrid Application Platform dependent Cross platform Run on the device's internal software and hardware Built using HTML5 and JavaScript and wrapped inside a thin native container that provides access to native platform features Need more developers to build apps on different platforms and learning time is more Save development cost and time Excellent performance Less performance The native and hybrid applications are shown as follows: Let's see some of the existing methods in Robotium that support access to web content. They are as follows: searchText (String text) scrollUp/Down () clickOnText (String text) takeScreenshot () waitForText (String text) In the methods specifically added for web support, the class By is used as a parameter. It is an abstract class used as a conjunction with the web methods. These methods are used to select different WebElements by their properties, such as ID and name. The element used in a web view is referred to as a WebElement. It is similar to the WebDriver implemented in Selenium. The following table lists all the methods inside the class By: Method Description className (String className) Select a WebElement by its class name cssSelector (String selectors) Select a WebElement by its CSS selector getValue () Return the value id (String id) Select a WebElement by its id name (String name) Select a WebElement by its name tagName (String tagName) Select a WebElement by its tag name textContent (String textContent) Select a WebElement by its text content xpath (String xpath) Select a WebElement by its xpath Some of the important methods in the Robotium framework, that aim at direct communication with web content in Android applications, are listed as follows: clickOnWebElement(By by): It clicks on the WebElement matching the specified By class object. waitForWebElement(By by): It waits for the WebElement matching the specified By class object. getWebElement(By by, int index): It returns a WebElement matching the specified By class object and index. 
Before going further with hybrid tests, let's gain some more information about WebViews. You can get an instance of the current WebView using the Solo class as follows:

    WebView wb = solo.getCurrentViews(WebView.class).get(0);

Now that you have control of the WebView, you can inject your JavaScript code as follows:

    wb.loadUrl("<JavaScript>");

This is very powerful, as we can call every function on the current page; thus, it helps automation.
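As a concrete, hedged illustration of such injection, the following test method hides a page element before continuing. The element id banner is a made-up example; note that WebView.loadUrl must be invoked on the UI thread, and the javascript: scheme executes the snippet in the context of the loaded page.

    public void testHideBannerViaJavaScript() {
        // Grab the first WebView currently displayed, as shown above
        final WebView wb = solo.getCurrentViews(WebView.class).get(0);

        // loadUrl must run on the UI thread; the javascript: scheme runs the snippet in the page
        getActivity().runOnUiThread(new Runnable() {
            public void run() {
                wb.loadUrl("javascript:document.getElementById('banner').style.display='none';");
            }
        });

        // Give the page a moment to apply the change before asserting anything
        solo.sleep(500);
    }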
Robotium Remote Control using SAFS

SAFS tests are not wrapped up as JUnit tests, and the SAFS Remote Control for Robotium uses an implementation that is not JUnit based; there is no technical requirement for JUnit on the Remote-Control side of the test. The setup and deployment of the automation onto the target app can be achieved using SDK tools that are used as part of the test runtime, such as adb and aapt. The existing packaging tools can be used to repackage a compiled Robotium test with an alternate AndroidManifest.xml file, which changes the target application at runtime.

SAFS is a general-purpose, data-driven framework. The only thing the user has to provide is the target package name or the APK path as an argument. The test will extract and redeploy the modified packages automatically and then launch the actual test. Traditional JUnit/Robotium users might not have, or see the need for, this general-purpose nature, but that is likely because previous Android tests had to be JUnit tests, and such a test must target one specific application, making the test application itself application specific. With Remote Control, the test app installed on the device no longer needs to be tied to a single application.

Remote Control in Robotium means there are two test applications to build for any given test:

A traditional on-device Robotium/JUnit test app
A Remote Control app

These two build projects have entirely different dependencies and build scripts. The on-device test app has the traditional Robotium/Android/JUnit dependencies and build scripts, while the Remote Control app only depends on TCP sockets for communication and the Robotium Remote Control API.

The implementation of remote-controlled Robotium comes in the following two pieces:

On Device: ActivityInstrumentationTestCase2.setUp() is initialized when Robotium's Solo class object is to be used by the RobotiumTestRunner (RTR). The RTR has a Remote Control Listener, routes remote control calls and data to the appropriate Solo class methods, and returns any results, as needed, to the Remote Controller. The on-device implementation may exploit test-result asserts if that is desirable.

Remote Controller: The RemoteSolo API duplicates the traditional Solo API, but its implementation largely pushes the data through the Remote Control to the RTR and then receives the results back from it. The Remote Control implementation may exploit any number of options for asserting, handling, or otherwise reporting or tracking the test results for each call.

As you can see, the Remote-Control side only requires the RemoteSolo API, without any specific JUnit context. It can be wrapped in a JUnit context if the tester desires, but it does not have to be.

The sample code and installation instructions for Robotium Remote Control can be accessed at the following link:

http://code.google.com/p/robotium/wiki/RemoteControl

Summary

This article introduced the Robotium framework, its features and benefits in the world of automated testing, the Robotium API set, and how to implement Robotium Remote Control using SAFS.

Resources for Article:

Further resources on this subject:
So, what is Spring for Android? [Article]
Android Native Application API [Article]
Introducing an Android platform [Article]

Spatial Data Services

Packt
19 Nov 2013
6 min read
(For more resources related to this topic, see here.)

So far we have worked with relatively small sets of data; for larger collections, Bing Maps provides the Spatial Data Services. They offer the Data Source Management API to load large datasets onto Bing Maps servers, and the Query API to query the data. In this article, we will use the Geocode Dataflow API, which provides geocoding and reverse geocoding of large datasets.

Geocoding is the process of finding geographic coordinates from other geographic data, such as street addresses or postal codes. Reverse geocoding is the opposite process, where the coordinates are used to find their associated textual locations, such as addresses and postal codes. Bing Maps implements these processes by creating jobs on the Bing Maps servers and querying them later. The whole process can be automated, which is ideal for huge amounts of data.

Please note that strict rules of data usage apply to the Spatial Data Services (please refer to http://msdn.microsoft.com/en-us/library/gg585136.aspx for full details). At the moment of writing, a user with a basic account can set up to 5 jobs in a 24-hour period.

Our task in this article is to geocode the addresses of ten of the biggest technology companies in the world, such as Microsoft, Google, Apple, Facebook, and so on, and then display them on the map. The first step is to prepare the file with the companies' addresses.

Geocoding dataflow input data

The input and output data can be supplied in the following formats:

XML (content type application/xml)
Comma-separated values (text/plain)
Tab-delimited values (text/plain)
Pipe-delimited values (text/plain)
Binary (application/octet-stream), used with the Blob Service REST API

We will use the XML format for its clearer declarative structure. Now, let's open Visual Studio and create a new C# Console project named LBM.Geocoder. We then add a Data folder, which will contain all the data files and samples we'll work with in this article, starting with the data.xml file we need to upload to the Spatial Data servers to be geocoded.

    <?xml version="1.0" encoding="utf-8"?>
    <GeocodeFeed Version="2.0">
      <GeocodeEntity Id="001">
        <GeocodeRequest Culture="en-US" IncludeNeighborhood="1">
          <Address AddressLine="1 Infinite Loop" AdminDistrict="CA" Locality="Cupertino" PostalCode="95014" />
        </GeocodeRequest>
      </GeocodeEntity>
      <GeocodeEntity Id="002">
        <GeocodeRequest Culture="en-US" IncludeNeighborhood="1">
          <Address AddressLine="185 Berry St" AdminDistrict="NY" Locality="New York" PostalCode="10038" />
        </GeocodeRequest>
      </GeocodeEntity>

The listing above is a fragment of that file, with the addresses of the companies' headquarters. Please note that the more addressing information we provide to the API, the better the quality of the geocoding we receive. In production, this file would be created programmatically, probably from an Addresses database. The Id of each GeocodeEntity could also be stored, so that the data can be matched more easily once it is fetched back from the servers. (You can find the Geocode Dataflow Data Schema, Version 2, at http://msdn.microsoft.com/en-us/library/jj735477.aspx.)

The job

Let's add a Job class to our project:

    public class Job
    {
        private readonly string dataFilePath;

        public Job(string dataFilePath)
        {
            this.dataFilePath = dataFilePath;
        }
    }

The dataFilePath argument is the path to the data.xml file we created earlier.
Creating the job is as easy as calling a REST URL:

    public void Create()
    {
        var uri = String.Format("{0}?input=xml&output=xml&key={1}",
            Settings.DataflowUri, Settings.Key);
        var data = File.ReadAllBytes(dataFilePath);
        try
        {
            var wc = new WebClient();
            wc.Headers.Add("Content-Type", "application/xml");
            var receivedBytes = wc.UploadData(uri, "POST", data);
            ParseJobResponse(receivedBytes);
        }
        catch (WebException e)
        {
            var response = (HttpWebResponse)e.Response;
            var status = response.StatusCode;
        }
    }

We place all the API URLs and other settings in the Settings class:

    public class Settings
    {
        public static string Key = "[YOUR BING MAPS KEY]";
        public static string DataflowUri =
            "https://spatial.virtualearth.net/REST/v1/dataflows/geocode";
        public static XNamespace XNamespace =
            "http://schemas.microsoft.com/search/local/ws/rest/v1";
        public static XNamespace GeocodeFeedXNamespace =
            "http://schemas.microsoft.com/search/local/2010/5/geocode";
    }

To create the job, we need to build the Dataflow URL template with a Bing Maps Key and parameters such as the input and output formats; we specify the latter to be XML. Next, we use a WebClient instance to upload the data with a POST request. Then, we parse the server response:

    private void ParseJobResponse(byte[] response)
    {
        using (var stream = new MemoryStream(response))
        {
            var xDoc = XDocument.Load(stream);
            var job = xDoc.Descendants(Settings.XNamespace + "DataflowJob").FirstOrDefault();
            var linkEl = job.Element(Settings.XNamespace + "Link");
            if (linkEl != null) Link = linkEl.Value;
        }
    }

Here, we pass a stream created from the bytes received from the server to the XDocument.Load method. This produces an XDocument instance, which we use to extract the data we need. We will apply a similar process throughout the article to parse XML content. Note that the appropriate XNamespace needs to be supplied in order to navigate through the document nodes. You can find a sample of the response inside the Data folder (jobSetupResponse.xml), which shows that the link to the newly created job is found under a Link element within the DataflowJob node.

Getting job status

Once we have set up a job, we can store the link in a data store, such as a database, and check for its status later. The data will be available on the Microsoft servers for up to 14 days after creation. Let's see how we can query the job status:

    public static JobStatus CheckStatus(string jobUrl)
    {
        var result = new JobStatus();
        var uri = String.Format("{0}?output=xml&key={1}", jobUrl, Settings.Key);
        var xDoc = XDocument.Load(uri);
        var job = xDoc.Descendants(Settings.XNamespace + "DataflowJob").FirstOrDefault();
        if (job != null)
        {
            var linkEls = job.Elements(Settings.XNamespace + "Link").ToList();
            foreach (var linkEl in linkEls)
            {
                var nameAttr = linkEl.Attribute("name");
                if (nameAttr != null)
                {
                    if (nameAttr.Value == "succeeded") result.SucceededLink = linkEl.Value;
                    if (nameAttr.Value == "failed") result.FailedLink = linkEl.Value;
                }
            }
            var statusEl = job.Elements(Settings.XNamespace + "Status").FirstOrDefault();
            if (statusEl != null) result.Status = statusEl.Value;
        }
        return result;
    }

Now we know that to query a Data API we first need to build the URL template. We do this by appending the Bing Maps Key and an output parameter to the job link. The response we get from the server stores the job status within a Link element of the DataflowJob node (the jobResponse.xml file inside the Data folder contains an example). The link we need has a name attribute with the value succeeded.
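Because the job runs asynchronously, a client typically polls this status URL until either the succeeded or the failed link appears. The C# above performs a single check; the following is a rough, language-neutral sketch of such a polling loop, written in Java and using only the URL pattern and response layout described above. The namespace constant is the REST v1 namespace from the Settings class; everything else, including the five-second interval, is an arbitrary choice for illustration.

    import java.net.URL;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class JobPoller {
        private static final String NS = "http://schemas.microsoft.com/search/local/ws/rest/v1";

        // Returns the "succeeded" link once the job has finished, or throws if it failed
        public static String waitForResultLink(String jobUrl, String key) throws Exception {
            String statusUri = jobUrl + "?output=xml&key=" + key;
            while (true) {
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true);
                Document doc = dbf.newDocumentBuilder().parse(new URL(statusUri).openStream());

                // Inspect every Link element of the DataflowJob response
                NodeList links = doc.getElementsByTagNameNS(NS, "Link");
                for (int i = 0; i < links.getLength(); i++) {
                    Element link = (Element) links.item(i);
                    String name = link.getAttribute("name");
                    if ("succeeded".equals(name)) return link.getTextContent(); // results are ready
                    if ("failed".equals(name)) throw new IllegalStateException("Geocode job failed");
                }
                Thread.sleep(5000); // job still running; poll again in five seconds
            }
        }
    }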
Summary

When it comes to large amounts of data, the Spatial Data Services offer a number of interfaces to store and query user data, geocode addresses, or reverse geocode geographical coordinates. The services perform these tasks by means of background jobs, which can be set up and queried through REST URLs.

Resources for Article:

Further resources on this subject:
NHibernate 2: Mapping relationships and Fluent Mapping [Article]
Using Maps in your Windows Phone App [Article]
What is OpenLayers? [Article]

Code interlude – signals and slots

Packt
19 Nov 2013
6 min read
(For more resources related to this topic, see here.)

Qt offers a better way: signals and slots. Like an event, the sending component generates a signal (in Qt parlance, the object emits a signal), which recipient objects may receive in a slot provided for the purpose. Qt objects may emit more than one signal, and signals may carry arguments; in addition, multiple Qt objects can have slots connected to the same signal, making it easy to arrange one-to-many notifications. Equally important, if no object is interested in a signal, it can be safely ignored, with no slots connected to it. Any object that inherits from QObject, Qt's base class for objects, can emit signals or provide slots for connection to signals. Under the hood, Qt provides extensions to C++ syntax for declaring signals and slots.

A simple example will help make this clear. The classic example you find in the Qt documentation is an excellent one, and we'll use it again here, with some extensions. Imagine you need a counter, that is, a container that holds an integer. In C++, you might write:

    class Counter
    {
    public:
        Counter() { m_value = 0; }

        int value() const { return m_value; }
        void setValue(int value);

    private:
        int m_value;
    };

The Counter class has a single private member, m_value, bearing its value. Clients can invoke value to obtain the counter's value, or set its value by invoking setValue with a new value. In Qt, using signals and slots, we write the class this way:

    #include <QObject>

    class Counter : public QObject
    {
        Q_OBJECT

    public:
        Counter() { m_value = 0; }

        int value() const { return m_value; }

    public slots:
        void setValue(int value);
        void increment();
        void decrement();

    signals:
        void valueChanged(int newValue);

    private:
        int m_value;
    };

This Counter class inherits from QObject, the base class for all Qt objects. All QObject subclasses must include the declaration Q_OBJECT as the first element of their definition; this macro expands to the Qt code implementing the subclass-specific glue necessary for the Qt object and signal-slot mechanism. The constructor remains the same, initializing our private member to zero. Similarly, the accessor method value remains the same, returning the current value of the counter.

An object's slots must be public, and are declared using the Qt extension to C++ public slots. This code defines three slots: a setValue slot, which accepts a new value for the counter, and the increment and decrement slots, which increment and decrement the value of the counter. Slots may take arguments, but do not return them; the communication between a signal and its slots is one way, initiating with the signal and terminating with the slot(s) connected to the signal.

The counter offers a single signal. Like slots, signals are also declared using a Qt extension to C++, signals. In the example above, a Counter object emits the signal valueChanged with a single argument, which is the new value of the counter. A signal is a function signature, not a method; Qt's extensions to C++ use the type signatures of signals and slots to ensure type safety between signal-slot connections, a key advantage signals and slots have over other decoupled messaging schemes. As developers, it's our responsibility to implement each slot in our class with whatever application logic makes sense.
The Counter class's slots look like this:

    void Counter::setValue(int newValue)
    {
        if (newValue != m_value) {
            m_value = newValue;
            emit valueChanged(newValue);
        }
    }

    void Counter::increment()
    {
        setValue(value() + 1);
    }

    void Counter::decrement()
    {
        setValue(value() - 1);
    }

The setValue slot is implemented as an ordinary method, which is what all slots are at heart. It takes a new value and assigns it to the Counter class's private member variable if the two aren't the same. Then it emits the valueChanged signal, using the Qt extension emit, which triggers an invocation of any slots connected to the signal. This is a common pattern for signals that handle object properties: test the property to be set for equality with the new value, and only assign and emit a signal if the values are unequal.

If we had a button, say QPushButton, we could connect its clicked signal to the increment or decrement slot, so that a click on the button incremented or decremented the counter. I'd do that using the QObject::connect method, like this:

    QPushButton* button = new QPushButton(tr("Increment"), this);
    Counter* counter = new Counter(this);
    QObject::connect(button, SIGNAL(clicked(void)),
                     counter, SLOT(increment(void)));

We first create the QPushButton and Counter objects. The QPushButton constructor takes a string, the label for the button, which we set to the string Increment or its localized counterpart. Why do we pass this to each constructor? Qt provides parent-child memory management between QObjects and their descendants, easing clean-up when you're done using an object. When you free an object, Qt also frees any children of that object, so you don't have to. The parent-child relationship is set at construction time; I'm signaling to the constructors that when the object invoking this code is freed, the push button and counter may be freed as well. (Of course, the invoking object must also be a QObject subclass for this to work.)

Next, I call QObject::connect, passing first the source object and the signal to be connected, and then the receiver object and the slot to which the signal should be sent. The types of the signal and the slot must match, and the signals and slots must be wrapped in the SIGNAL and SLOT macros, respectively.

Signals can also be connected to signals, and when that happens, the signals are chained and trigger any slots connected to the downstream signals. For example, I could write:

    Counter a, b;
    QObject::connect(&a, SIGNAL(valueChanged(int)),
                     &b, SLOT(setValue(int)));

This connects counter b with counter a, so that any change in the value of counter a also changes the value of counter b.

Signals and slots are used throughout Qt, both for user interface elements and to handle asynchronous operations, such as the presence of data on network sockets and HTTP transaction results. Under the hood, signals and slots are very efficient, boiling down to function dispatch operations, so you shouldn't hesitate to use the abstraction in your own designs. Qt provides a special build tool, the meta-object compiler, which compiles the extensions to C++ that signals and slots require and generates the additional code necessary to implement the mechanism.

Summary

In this article, we learned about the use of events for coupling different objects: components offering data encapsulate that data in an event, and an event loop (or, more recently, an event listener) catches the event and performs some action. We then saw how Qt's signals and slots provide a more flexible alternative to this pattern.
Resources for Article:

Further resources on this subject:
One-page Application Development [Article]
Android 3.0 Application Development: Multimedia Management [Article]
Creating and configuring a basic mobile application [Article]

DOM and QTP

Packt
15 Nov 2013
7 min read
(For more resources related to this topic, see here.)

The QuickTest Object property for a Web object allows us to get a reference to the underlying DOM object and perform any operation on it. For example, the following code retrieves the elements named userName (the username textbox) from the page, and the next statements print how many elements matched and assign the value ashish to the first one:

    Set Obj = Browser("Tours").Page("Tours").Object.getElementsByName("userName")
    'Get the length of the collection
    Print Obj.length
    Obj(0).value = "ashish"

The following code snippet shows various operations that can be performed using the Object property of a web object:

    'Retrieve all the link elements in a web page
    Set links = Browser("Mercury Tours").Page("Mercury Tours").Object.links
    'The length property provides the total number of links
    For i = 0 to links.length - 1
        Print links(i).toString() 'Print the value of the links
    Next

    'Get the WebEdit object and set the focus on it
    Set MyWebEdit = Browser("Tours").Page("Tours").WebEdit("username").Object
    MyWebEdit.focus

    'Retrieve the HTML elements by name
    Set obj = Browser("Tours").Page("Tours").Object.getElementsByName("userName")
    'Set the value of the retrieved object
    obj(0).value = "ashish"

    'Retrieve the total number of images in the web page
    Set images = Browser("Mercury Tours").Page("Mercury Tours").Object.images
    Print images.length

    'Clicking on the image
    Set obj = Browser("Tours").Page("Mercury Tours").Object.getElementsByName("login")
    obj(0).click

    'Selecting a value from the drop-down list using the selectedIndex property
    Browser("Flight").Page("Flight").WebList("pass.0.meal").Object.selectedIndex = 1

    'Click on the checkbox
    Browser("Flight").Page("Flight").WebCheckBox("ticketLess").Object.click

Firing an event

QTP allows firing events on web objects:

    Browser("The Fishing Website fishing").Page("The Fishing Website fishing").Link("Link").FireEvent "onmouseover"

The following example uses the FireEvent method to trigger the onpropertychange event on a form:

    Browser("New Page").Page("New Page").WebElement("html tag:=Form").FireEvent "onpropertychange"

QTP also allows executing JavaScript code. There are two JavaScript functions that allow us to interact with web pages: we can retrieve objects and perform actions on them, or we can retrieve the properties of elements on the page.

RunScript executes the JavaScript code passed as an argument to this function. The following example shows how the RunScript method calls the ImgCount function, which returns the number of images in the page:

    length = Browser("Mercury Tours").Page("Mercury Tours").RunScript("ImgCount(); function ImgCount() {var list = document.getElementsByTagName('img'); return list.length;}")
    Print "The total number of images on the page is " & length

RunScriptFromFile uses the full path of a JavaScript file to execute it. The location can be an absolute or relative file system path or a Quality Center path. The following is a sample JavaScript file (logo.js):

    var myNode = document.getElementById("lga");
    myNode.innerHTML = '';

Use the logo.js file as shown in the following code:

    Browser("Browser").Page("page").RunScriptFromFile "c:\logo.js"
    'Check that the web page behaves correctly
    If Browser("Browser").Page("page").Image("Image").Exist Then
        Reporter.ReportEvent micFail, "Failed to remove logo"
    End If

The above example uses the RunScriptFromFile method to remove a DOM element from a web page and then checks whether the page still behaves correctly once the DOM element has been removed.
Using XPath

XPath allows navigating and finding elements and attributes in an HTML document using path expressions. QTP allows XPath in object descriptions; for example:

    xpath:=//input[@type='image' and contains(@name,'findFlights')]

In the following sections, we will learn the various XPath terms and the ways of finding objects using XPath.

XPath terminology

XPath uses various terms to define elements and the relationships among HTML elements, as shown in the following table:

Term | Meaning
Atomic values | Nodes with no children or parent
Ancestors | A node's parent, the parent's parent, and so on
Descendants | A node's children, the children's children, and so on
Parent | Each element and attribute has one parent
Children | Element nodes may have zero, one, or more children
Siblings | Nodes that have the same parent

Selecting nodes

A path expression allows selecting nodes in a document. The commonly used path expressions are shown in the following table:

Symbol | Meaning
/ (slash) | Selects elements relative to the root node
// (double slash) | Selects nodes in the document from the current node that match the selection, irrespective of their position
. (dot) | Represents the current node
.. | Represents the parent of the current node
@ | Represents an attribute
nodename | Selects all nodes with the name "nodename"

A slash (/) at the beginning defines an absolute path; for example, /html/head/title returns the title tag. Used in the middle, it defines a parent-child relationship; for example, //div/table returns the tables that are direct children of a div.

A double slash (//) finds a node at any location; for example, //table returns all the tables. Used in the middle, it defines a descendant relationship; for example, /html//title returns the title tag, which is a descendant of the html tag.

Refer to the following table for a few more examples and their meanings:

Expression | Meaning
//a | Finds all anchor tags
//a//img | Lists the images that are inside a link
//img/@alt | Shows the alt attribute of every image
//a/@href | Shows the href attribute of every link
//a[@*] | Anchor tags with any attribute
//title/text() or /html/head/title/text() | Gets the title of a page
//img[@alt] | Lists the images that have alt attributes
//img[not(@alt)] | Lists the images that don't have alt attributes
//*[@id='mainContent'] | Gets the element with a particular CSS ID
//div[not(@id="div1")] | Selects the div elements that do not have the id "div1"
//p/.. | Selects the parent element of p (paragraph)
XXX[@att] | Selects the child elements named XXX that have an attribute named att
./@* , for example //script/./@* | Finds all attribute values of the current element

Predicates

A predicate is embedded in square brackets and is used to find specific node(s), or nodes that contain a specific value:

//p[@align]: Finds all the paragraph tags that have an align attribute
//img[@alt]: Finds all the img (image) tags that contain an alt attribute
//table[@border]: Finds all the table tags that contain a border attribute
//table[@border >1]: Finds the tables with a border value greater than 1

Retrieve a table row using the complete path:

    //body/div/table/tbody/tr[1]
Get the parent of the table tag (and its name):

    //body/div/table/..[name()]

Path expression | Result
//div/p[1] | Selects the first paragraph element that is a child of the div element
//div/p[last()] | Selects the last paragraph element that is a child of the div element
//div/p[last()-1] | Selects the second-to-last paragraph element that is a child of the div element
//div/p[position()<3] | Selects the first two paragraph elements that are children of the div element
//script[@language] | Selects all script element(s) with an attribute named language
//script[@language='javascript'] | Selects all the script elements that have an attribute named language with a value of javascript
//div/p[text()>45.00] | Selects all the paragraph elements of the div element whose text value is greater than 45.00

Selecting unknown nodes

Apart from selecting specific nodes, XPath allows us to select groups of HTML elements using *, @, and node():

* represents an element node
@ represents an attribute node
node() represents any node

These allow selecting unknown nodes; for example:

/div/* selects all the child nodes of a div element
//* selects all the elements in a document
//script[@*] selects all the script elements that have at least one attribute
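Outside QTP, the same XPath 1.0 expressions can be exercised in any standard XPath engine, which is a convenient way to verify them before pasting them into an object description. The following is a small, hedged sketch using Java's built-in javax.xml.xpath package against a made-up XHTML snippet; the markup and the printed values are illustrative only.

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    public class XPathDemo {
        public static void main(String[] args) throws Exception {
            // A tiny, well-formed XHTML fragment to evaluate the expressions against
            String html =
                "<html><body><div id='mainContent'>"
                + "<p align='center'>First</p><p>Second</p>"
                + "<a href='/home'><img src='logo.png' alt='Logo'/></a>"
                + "</div></body></html>";
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(html.getBytes("UTF-8")));
            XPath xpath = XPathFactory.newInstance().newXPath();

            // //img[@alt] : images that have an alt attribute
            NodeList imgs = (NodeList) xpath.evaluate("//img[@alt]", doc, XPathConstants.NODESET);
            System.out.println("Images with alt: " + imgs.getLength());

            // //a/@href : the href attribute of the first link
            System.out.println("First link href: " + xpath.evaluate("//a/@href", doc));

            // //div/p[1] : the first paragraph child of the div
            System.out.println("First paragraph: " + xpath.evaluate("//div/p[1]", doc));

            // //*[@id='mainContent'] : the element with a particular id
            Node main = (Node) xpath.evaluate("//*[@id='mainContent']", doc, XPathConstants.NODE);
            System.out.println("Matched element: " + main.getNodeName());
        }
    }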