How-To Tutorials

HPC Cluster Computing

Packt
10 Jul 2017
8 min read
In this article by Raja Malleswara Rao Pattamsetti, the author of the book Distributed Computing in Java 9, we look at why the processing demands of organizational applications outgrow a single sequential computer. Increasing processor capacity and other resource allocations can address the problem to an extent and may relieve performance pressure for a while, but future computational requirements are constrained by how much more powerful a single processor can be made and by the cost of producing such systems. Efficient algorithms and practices are also needed to get the best results from the available hardware. A practical and economical alternative to a single high-powered computer is to combine multiple lower-powered processors that work collectively and coordinate their processing capabilities. The result is a powerful system, a parallel computer, which distributes processing among many low-capacity machines that together deliver the expected result.

In this article, we will cover the following topics:

Era of computing
Commanding parallel system architectures
MPP: Massively Parallel Processors
SMP: Symmetric Multiprocessors
CC-NUMA: Cache Coherent Nonuniform Memory Access
Distributed systems
Clusters
Java support for High-Performance Computing

Era of Computing

Technological developments in the computing industry are rapid, driven by advances in both system software and hardware. Hardware advances center on processors and their fabrication techniques, and high-performance processors are now produced at remarkably low cost. These advances are further boosted by high-bandwidth, low-latency network systems. Very Large Scale Integration (VLSI) brought several phenomenal advances in building commanding sequential and parallel computers. At the same time, system software advances have improved operating system capabilities and software development techniques. Two commanding eras of computing can be observed: the sequential era and the parallel era. The following diagram shows the advances in both eras over the last few decades, along with the forecast for the next two decades:

In each era, hardware architecture growth is followed by system software advances: as more powerful hardware evolves, software programs and operating system capabilities grow correspondingly. Applications and problem-solving environments build on this with the advent of parallel computing, the move from mini to microprocessors, and clustered computers. Let's now review some of the commanding parallel system architectures from recent years.

Commanding parallel system architectures

Over the last few years, several varieties of computing models offering high processing performance have evolved. They are classified by how memory is allocated to their processors and how the processors are arranged.
Some of the important parallel systems are as follows:

MPP: Massively Parallel Processors
SMP: Symmetric Multiprocessors
CC-NUMA: Cache Coherent Nonuniform Memory Access
Distributed systems
Clusters

Let's look at each of these system architectures in a little more detail and review some of their important performance characteristics.

Massively Parallel Processors (MPP)

Massively Parallel Processors (MPP), as the name suggests, are huge parallel processing systems built on a shared-nothing architecture. Such systems usually contain a large number of processing nodes integrated through a high-speed interconnect or network switch. A node is simply a computing element with its own combination of hardware components, usually a memory unit and one or more processors. Some special-purpose nodes are designed with backup disks or additional storage capacity. The following diagram represents the massively parallel processor architecture:

Symmetric Multiprocessors (SMP)

Symmetric Multiprocessors (SMP), as the name suggests, contain a limited number of processors, typically ranging from 2 to 64, which share most of the system's resources. A single instance of the operating system runs across all of the connected processors, and they commonly share the I/O, memory, and network bus. This symmetry, a set of similar processors connected and operating under one operating system, is the essential characteristic of symmetric multiprocessors. The following diagram depicts the symmetric multiprocessor representation:

Cache Coherent Nonuniform Memory Access (CC-NUMA)

Cache Coherent Nonuniform Memory Access (CC-NUMA) is a special type of multiprocessor system with scalable additional processor capacity. In CC-NUMA, the SMP system and the other remote nodes communicate through a remote interconnection link. Each remote node contains its own local memory and processors. The following diagram represents the cache coherent nonuniform memory access architecture:

The nature of memory access is nonuniform. Like symmetric multiprocessors, a CC-NUMA system presents a comprehensive view of the entire system memory, but, as the name suggests, it takes nonuniform amounts of time to access nearby and distant memory locations.

Distributed systems

Distributed systems, as discussed in previous articles, are traditionally individual computers interconnected through an internet or intranet, each running its own operating system. Any diverse set of computer systems can take part in a distributed system, including combinations of massively parallel processors, symmetric multiprocessors, individual computers, and clusters.

Clusters

A cluster is an assembly of terminals or computers connected over a network to address a large processing requirement through parallel processing. Clusters are therefore usually configured from machines with higher processing capabilities connected by a high-speed interconnect. A cluster is usually presented as a single system image that integrates a number of nodes and pools their resources.
The following diagram is a sample clustered Tomcat server for a typical J2EE web application deployment:

The following table shows some of the important performance characteristics of the different parallel architecture systems discussed so far:

Network of workstations

A network of workstations is a group of connected resources, such as processors, network interfaces, storage, and data disks, that opens up space for new combinations such as the following:

Parallel computing: As discussed in the previous sections, a group of system processors can be connected as MPP or DSM to obtain parallel processing ability.
RAM network: Because a number of systems are connected, each system's memory can collectively act as a DRAM cache, which greatly expands the virtual memory available to the entire system and improves its processing ability.
Software Redundant Array of Inexpensive Disks (RAID): A group of systems connected in an array over the local area network improves stability, availability, and storage capacity using low-cost machines, and also provides simultaneous I/O support.
Multipath communication: Multipath communication is the technique of using more than one network connection between the workstations to allow simultaneous information exchange among the system nodes.

Java support for High-Performance Computing

Java provides numerous advantages as a programming language for HPC (High-Performance Computing), particularly with grid computing. Some of the key benefits of using Java in HPC include the following:

Portability: The ability to write a program on one platform and run it on any other platform has always been one of Java's biggest strengths, and it remains an advantage when porting Java applications to HPC systems. In grid computing this is an essential feature, because the execution environment is decided only when the program runs. It is possible because Java byte code executes in the Java Virtual Machine (JVM), which itself acts as an abstract operating environment.
Network centricity: As discussed in previous articles, Java provides great support for distributed systems through remote procedure calls (RPC) via RMI and CORBA services. Java's socket programming support is another benefit for grid computing.
Software engineering: Another great feature of Java is the way it lends itself to well-engineered solutions. We can produce object-oriented, loosely coupled software that can be packaged as JAR files and reused as an API by other systems. Encapsulation and interface-based programming are great advantages in this kind of application development.
Security: Java is a highly secure programming language, with features such as byte code verification to limit resource use by untrusted or intrusive programs. This is a great advantage when running applications in a distributed environment where remote communication happens over a shared network.
GUI development: Java's support for platform-independent GUI development makes it easy to build and deploy enterprise web applications that interact with an HPC environment.
Availability: Java is well supported on multiple operating systems, including Windows, Linux, SGI, and Compaq platforms, and a large set of open source frameworks built on top of it is readily available.
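To make Java's parallel-processing support a little more concrete, the following minimal sketch (not taken from the book) uses the standard java.util.concurrent utilities to split a simple summation across the processors available on a single node. The class name and the workload are purely illustrative; the same decomposition idea is what cluster-level frameworks apply across nodes instead of threads.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        long rangeEnd = 100_000_000L;
        long chunk = rangeEnd / workers;
        List<Future<Long>> partials = new ArrayList<>();
        for (int i = 0; i < workers; i++) {
            final long start = i * chunk + 1;
            final long end = (i == workers - 1) ? rangeEnd : (i + 1) * chunk;
            // Each worker sums its own slice of the range independently.
            Callable<Long> task = () -> {
                long sum = 0;
                for (long n = start; n <= end; n++) {
                    sum += n;
                }
                return sum;
            };
            partials.add(pool.submit(task));
        }
        long total = 0;
        for (Future<Long> partial : partials) {
            total += partial.get(); // blocks until that worker's slice is done
        }
        pool.shutdown();
        System.out.println("Sum of 1.." + rangeEnd + " = " + total);
    }
}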
While the points above argue in favor of using Java for HPC environments, some concerns need to be reviewed when designing Java applications for HPC, including its numerics (complex numbers, fast floating point, multidimensional arrays), performance, and parallel computing designs.

Summary

In this article, you learned about the eras of computing, the commanding parallel system architectures (MPP: Massively Parallel Processors, SMP: Symmetric Multiprocessors, CC-NUMA: Cache Coherent Nonuniform Memory Access, distributed systems, and clusters), and Java support for High-Performance Computing.

Further resources on this subject: The Amazon S3 SDK for Java, Gradle with the Java Plugin, Getting Started with Sorting Algorithms in Java.

Android UIs with Custom Views

Packt
10 Jul 2017
11 min read
In this article by Raimon Rafols Montane, the author of the book Building Android UIs with Custom Views, we will see the very basic steps we need to get started building Android custom views, where we should use them, and where we should simply rely on the standard Android widgets.

Why do we need custom views?

There are lovely Android applications on Google Play and in other markets, such as Amazon, built only using the standard Android UI widgets and layouts. There are also many other applications that have that small additional feature that makes interacting with them easier or simply more pleasing. There is no magic formula, but adding something different, something that makes the user feel "this is not just another app", might increase user retention. It may not be a deal breaker, but it can definitely make a difference.

A few years ago, the author was working in the pre-sales department of a mobile app development agency, and when the app Path launched, many customers asked for menus like the Path app menu. It wasn't a critical feature for the applications they were building, but at that point it was a cool thing, and many other applications and products wanted something similar. Even if it was a simple detail, it made the Path application special at the time. It wasn't the case back then, but nowadays there are many open source implementations of that menu published on GitHub, although they are no longer really maintained:

https://github.com/daCapricorn/ArcMenu
https://github.com/oguzbilgener/CircularFloatingActionMenu

One of the main reasons to create custom views for a mobile application is precisely to have something special. It might be a menu, a component, or a whole screen; something that is really needed, the main functionality of the application, or just an additional feature. In addition, by creating a custom view we can actually optimize the performance of our application: we can lay out widgets in a specific way that would otherwise require many layers of hierarchy using standard Android layouts, or create a custom view that simplifies rendering or user interaction.

On the other hand, we can easily fall into the trap of trying to build everything custom. Android provides an awesome set of widgets and layout components that manage a lot of things for us. If we ignore the basic Android framework and try to build everything ourselves, it would be a lot of work; we would potentially struggle with many issues and errors that the Android OS developers have already faced, or at least very similar ones, and, to put it in one sentence, we would be reinventing the wheel.

Examples in the market

We all probably use great apps that are built only using the standard Android UI widgets and layouts, but there are many others that contain custom views we don't know about or haven't really noticed. Custom views or layouts can sometimes be very subtle and hard to spot. We wouldn't be the first to ship a custom view or layout in an application; in fact, many popular apps have some custom elements in them. For example, the Etsy application had a custom layout called StaggeredGridView. It was even published as open source on GitHub, and it has been deprecated since 2015 in favor of Google's own StaggeredGridLayoutManager used together with RecyclerView.
For more information, refer to:

https://github.com/etsy/AndroidStaggeredGrid
https://developer.android.com/reference/android/support/v7/widget/StaggeredGridLayoutManager.html

Parameterizing our custom view

We now have a custom view that adapts to multiple sizes. That's good, but what happens if we need another custom view that paints the background blue instead of red? Or yellow? We shouldn't have to copy the custom view class for each customization. Luckily, we can set parameters in the XML layout and read them from our custom view. First, we need to define the type of parameters we will use in our custom view by creating a file called attrs.xml:

<?xml version="1.0" encoding="utf-8"?> <resources> <declare-styleable name="OwnCustomView"> <attr name="fillColor" format="color"/> </declare-styleable> </resources>

Then, we have to add the new namespace to the layout file where we want to use the new parameter (the app namespace declaration shown here is the standard res-auto namespace):

<?xml version="1.0" encoding="utf-8"?> <ScrollView xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent"> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="vertical" android:padding="@dimen/activity_vertical_margin"> <com.packt.rrafols.customview.OwnCustomView android:layout_width="match_parent" android:layout_height="wrap_content" app:fillColor="@android:color/holo_blue_dark"/> </LinearLayout> </ScrollView>

Now that we have this defined, let's see how we can read it from our custom view class:

int fillColor; TypedArray ta = context.getTheme().obtainStyledAttributes(attributeSet, R.styleable.OwnCustomView, 0, 0); try { fillColor = ta.getColor(R.styleable.OwnCustomView_fillColor, DEFAULT_FILL_COLOR); } finally { ta.recycle(); }

By getting a TypedArray using the styleable attribute ID that the Android tools created for us after saving the attrs.xml file, we are able to query the values of those parameters set in the XML layout file. More information about how to obtain a TypedArray can be found in the Android documentation: https://developer.android.com/reference/android/content/res/Resources.Theme.html#obtainStyledAttributes(android.util.AttributeSet, int[], int, int). For more information about TypedArray, refer to: https://developer.android.com/reference/android/content/res/TypedArray.html.

In this example, we created an attribute named fillColor, which is formatted as a color. This format, or basically the type of the attribute, is very important: it limits the kinds of values we can set and determines how those values can be retrieved afterwards from our custom view. Also, for each parameter we define, we get an R.styleable.<name>_<parameter_name> index in the TypedArray. In the code above, we query fillColor using the R.styleable.OwnCustomView_fillColor index. We shouldn't forget to recycle the TypedArray after using it so that it can be reused, but once recycled, we can't use it again.

Many custom views only need to draw something in a special way; that's the reason we created them as custom views. But many others need to react to user events. For example, how will our custom view behave when the user clicks or drags on top of it?

Basic event handling

Let's start by adding some basic event handling to our custom views.

Reacting to touches

One of the first things we need to implement is reacting to touch events. Android provides us with the method onTouchEvent() that we can override in our custom view.
By overriding this method, we'll get every touch event happening on top of the view. To see how it works, let's add it to the custom view:

@Override public boolean onTouchEvent(MotionEvent event) { Log.d(TAG, "touch: " + event); return super.onTouchEvent(event); }

Let's also add a log call to see which events we receive. If we run this code and touch our view, we'll get:

D/com.packt.rrafols.customview.CircularActivityIndicator: touch: MotionEvent { action=ACTION_DOWN, actionButton=0, id[0]=0, x[0]=644.3645, y[0]=596.55804, toolType[0]=TOOL_TYPE_FINGER, buttonState=0, metaState=0, flags=0x0, edgeFlags=0x0, pointerCount=1, historySize=0, eventTime=30656461, downTime=30656461, deviceId=9, source=0x1002 }

As we can see, there is a lot of information in the event (the coordinates, the action type, the time), but even if we perform more actions on the view, we only get ACTION_DOWN events. That's because the default implementation of View is not clickable: if we don't set the clickable flag on the view, onTouchEvent() returns false and ignores further events.

The onTouchEvent() method has to return true if the event has been processed, or false if it hasn't. If we receive an event in our custom view and we don't know what to do with it, or we're not interested in that kind of event, we should return false so that it can be processed by our view's parent or by any other component or the system. To receive more types of events, we can do one of two things:

Set the view as clickable by using setClickable(true)
Implement our own logic and process the events ourselves in our custom class

As we'll be implementing more complex events, we'll go for the second option. Let's do a quick test and change the method to simply return true instead of calling the parent method:

@Override public boolean onTouchEvent(MotionEvent event) { Log.d(TAG, "touch: " + event); return true; }

Now we should receive many other types of events, as follows:

...CircularActivityIndicator: touch: MotionEvent { action=ACTION_DOWN, ...CircularActivityIndicator: touch: MotionEvent { action=ACTION_UP, ...CircularActivityIndicator: touch: MotionEvent { action=ACTION_DOWN, ...CircularActivityIndicator: touch: MotionEvent { action=ACTION_MOVE, ...CircularActivityIndicator: touch: MotionEvent { action=ACTION_MOVE, ...CircularActivityIndicator: touch: MotionEvent { action=ACTION_MOVE, ...CircularActivityIndicator: touch: MotionEvent { action=ACTION_UP, ...CircularActivityIndicator: touch: MotionEvent { action=ACTION_DOWN,

Using the Paint class

We've been drawing some primitives until now, but Canvas provides us with many more primitive rendering methods. We'll briefly cover some of them, but first let's talk about the Paint class, as we haven't introduced it properly. According to the official definition, the Paint class holds the style and color information about how to draw primitives, text, and bitmaps. In the examples we've been building, we created a Paint object in our class constructor, or in the onCreate method, and used it to draw primitives later in our onDraw method. For instance, because we set our background Paint instance's style to Paint.Style.FILL, it fills the primitive; we can change it to Paint.Style.STROKE if we only want to draw the border or the strokes of the silhouette, or draw both using Paint.Style.FILL_AND_STROKE. To see Paint.Style.STROKE in action, we'll draw a black border on top of the selected colored bar in our custom view.
Let's start by defining a new Paint object called indicatorBorderPaint and initializing it in our class constructor:

indicatorBorderPaint = new Paint(); indicatorBorderPaint.setAntiAlias(false); indicatorBorderPaint.setColor(BLACK_COLOR); indicatorBorderPaint.setStyle(Paint.Style.STROKE); indicatorBorderPaint.setStrokeWidth(BORDER_SIZE); indicatorBorderPaint.setStrokeCap(Paint.Cap.BUTT);

We also defined a constant with the size of the border line and set the stroke width to this size. If we set the width to 0, Android guarantees it will use a single pixel to draw the line; as we want to draw a thick black border, that is not what we need here. In addition, we set the stroke cap to Paint.Cap.BUTT to avoid the stroke overflowing its path. There are two other caps we can use, Paint.Cap.SQUARE and Paint.Cap.ROUND, which end the stroke with a square or with a circle (rounding the stroke), respectively. Let's see the differences between the three caps and also introduce the drawLine primitive.

First of all, we create an array with all three caps, so we can easily iterate over them and write more compact code:

private Paint.Cap[] caps = new Paint.Cap[] { Paint.Cap.BUTT, Paint.Cap.ROUND, Paint.Cap.SQUARE };

Now, in our onDraw method, let's draw a line with each of the caps using the drawLine(float startX, float startY, float stopX, float stopY, Paint paint) method:

int xPos = (getWidth() - 100) / 2; int yPos = getHeight() / 2 - BORDER_SIZE * caps.length / 2; for (int i = 0; i < caps.length; i++) { indicatorBorderPaint.setStrokeCap(caps[i]); canvas.drawLine(xPos, yPos, xPos + 100, yPos, indicatorBorderPaint); yPos += BORDER_SIZE * 2; } indicatorBorderPaint.setStrokeCap(Paint.Cap.BUTT);

Summary

In this article, we have seen the reasoning behind why we might want to build custom views and layouts. Android provides a great basic framework for creating UIs, and not using it would be a mistake. Not every component, button, or widget has to be developed completely from scratch, but by doing so in the right spot we can add an extra feature that might make our application memorable. We have also seen the basics of event handling.

Further resources on this subject: Offloading work from the UI Thread on Android, Getting started with Android Development, Practical How-To Recipes for Android.

Command-Line Tools

Packt
06 Jul 2017
9 min read
In this article by Aaron Torres, author of the book Go Cookbook, we will cover the following recipes:

Using command-line arguments
Working with Unix pipes
An ANSI coloring application

Using command-line arguments

This recipe expands on other uses for command-line arguments by constructing a command that supports nested subcommands. This demonstrates FlagSets and also positional arguments passed into your application. This recipe requires a main function to run. There are a number of third-party packages for dealing with complex nested arguments and flags, but we'll again investigate doing so using only the standard library.

Getting ready

You need to perform the following steps for the installation:

Download and install Go for your operating system from https://golang.org/doc/install and configure your GOPATH.
Open a terminal/console application.
Navigate to your GOPATH/src and create a project directory, for example, $GOPATH/src/github.com/yourusername/customrepo. All code will be run and modified from this directory.
Optionally, install the latest tested version of the code using the go get github.com/agtorre/go-cookbook/ command.

How to do it...

From your terminal/console application, create and navigate to the chapter2/cmdargs directory. Copy the tests from https://github.com/agtorre/go-cookbook/tree/master/chapter2/cmdargs or use this as an exercise to write some of your own. Create a file called cmdargs.go with the following content:

package main

import (
    "flag"
    "fmt"
    "os"
)

const version = "1.0.0"

const usage = `Usage: %s [command]

Commands:
    Greet
    Version
`

const greetUsage = `Usage: %s greet name [flag]

Positional Arguments:
    name
        the name to greet

Flags:
`

// MenuConf holds all the levels
// for a nested cmd line argument
type MenuConf struct {
    Goodbye bool
}

// SetupMenu initializes the base flags
func (m *MenuConf) SetupMenu() *flag.FlagSet {
    menu := flag.NewFlagSet("menu", flag.ExitOnError)
    menu.Usage = func() {
        fmt.Printf(usage, os.Args[0])
        menu.PrintDefaults()
    }
    return menu
}

// GetSubMenu returns a flag set for a submenu
func (m *MenuConf) GetSubMenu() *flag.FlagSet {
    submenu := flag.NewFlagSet("submenu", flag.ExitOnError)
    submenu.BoolVar(&m.Goodbye, "goodbye", false, "Say goodbye instead of hello")
    submenu.Usage = func() {
        fmt.Printf(greetUsage, os.Args[0])
        submenu.PrintDefaults()
    }
    return submenu
}

// Greet will be invoked by the greet command
func (m *MenuConf) Greet(name string) {
    if m.Goodbye {
        fmt.Println("Goodbye " + name + "!")
    } else {
        fmt.Println("Hello " + name + "!")
    }
}

// Version prints the current version that is
// stored as a const
func (m *MenuConf) Version() {
    fmt.Println("Version: " + version)
}

Create a file called main.go with the following content:

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    c := MenuConf{}
    menu := c.SetupMenu()
    menu.Parse(os.Args[1:])

    // we use arguments to switch between commands
    // flags are also an argument
    if len(os.Args) > 1 {
        // we don't care about case
        switch strings.ToLower(os.Args[1]) {
        case "version":
            c.Version()
        case "greet":
            f := c.GetSubMenu()
            if len(os.Args) < 3 {
                f.Usage()
                return
            }
            if len(os.Args) > 3 {
                f.Parse(os.Args[3:])
            }
            c.Greet(os.Args[2])
        default:
            fmt.Println("Invalid command")
            menu.Usage()
            return
        }
    } else {
        menu.Usage()
        return
    }
}

Run the go build command.
Run the following commands and try a few other combinations of arguments:

$ ./cmdargs -h
Usage: ./cmdargs [command]
Commands:
  Greet
  Version
$ ./cmdargs version
Version: 1.0.0
$ ./cmdargs greet
Usage: ./cmdargs greet name [flag]
Positional Arguments:
  name
    the name to greet
Flags:
  -goodbye
    Say goodbye instead of hello
$ ./cmdargs greet reader
Hello reader!
$ ./cmdargs greet reader -goodbye
Goodbye reader!

If you copied or wrote your own tests, go up one directory, run go test, and ensure all tests pass.

How it works...

FlagSets can be used to set up independent lists of expected arguments, usage strings, and more. The developer is required to validate a number of arguments, parse the right subset of arguments for each command, and define usage strings. This can be error-prone and requires a lot of iteration to get completely right. The flag package makes parsing arguments much easier and includes convenience methods to get the number of flags, arguments, and more. This recipe demonstrates basic ways to construct a complex command-line application using arguments, including a package-level config, required positional arguments, multi-level command usage, and how to split these things into multiple files or packages if needed.

Working with Unix pipes

Unix pipes are useful when passing the output of one program to the input of another. Consider the following example:

$ echo "test case" | wc -l
1

In a Go application, the left-hand side of the pipe can be read using os.Stdin, which acts like a file descriptor. To demonstrate this, this recipe takes input on the left-hand side of a pipe and returns a list of words and their number of occurrences. The words are tokenized on whitespace.

Getting ready

Refer to the Getting ready section of the Using command-line arguments recipe.

How to do it...

From your terminal/console application, create a new directory, chapter2/pipes. Navigate to that directory and copy the tests from https://github.com/agtorre/go-cookbook/tree/master/chapter2/pipes or use this as an exercise to write some of your own. Create a file called pipes.go with the following content:

package main

import (
    "bufio"
    "fmt"
    "os"
)

// WordCount takes a file and returns a map
// with each word as a key and its number of
// appearances as a value
func WordCount(f *os.File) map[string]int {
    result := make(map[string]int)

    // make a scanner to work on the file
    // io.Reader interface
    scanner := bufio.NewScanner(f)
    scanner.Split(bufio.ScanWords)

    for scanner.Scan() {
        result[scanner.Text()]++
    }

    if err := scanner.Err(); err != nil {
        fmt.Fprintln(os.Stderr, "reading input:", err)
    }

    return result
}

func main() {
    fmt.Printf("string: number_of_occurrences\n\n")
    for key, value := range WordCount(os.Stdin) {
        fmt.Printf("%s: %d\n", key, value)
    }
}

Run echo "some string" | go run pipes.go. You may also run:

go build
echo "some string" | ./pipes

You should see the following output:

$ echo "test case" | go run pipes.go
string: number_of_occurrences

test: 1
case: 1

$ echo "test case test" | go run pipes.go
string: number_of_occurrences

test: 2
case: 1

If you copied or wrote your own tests, go up one directory, run go test, and ensure that all tests pass.

How it works...

Working with pipes in Go is pretty simple, especially if you're familiar with working with files. This recipe uses a scanner to tokenize the io.Reader interface of the os.Stdin file object. You can see how you must check for errors after completing all of the reads.
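As a small aside that is not part of the cookbook recipe, the following hedged sketch shows one common way for a command-line tool like the one above to check whether its standard input is actually a pipe (or a redirected file) rather than an interactive terminal, so it can print a usage hint instead of waiting for input that never arrives:

package main

import (
    "fmt"
    "os"
)

func main() {
    info, err := os.Stdin.Stat()
    if err != nil {
        fmt.Fprintln(os.Stderr, "stat stdin:", err)
        os.Exit(1)
    }
    // When input is piped or redirected, stdin is not a character device (the terminal).
    if info.Mode()&os.ModeCharDevice == 0 {
        fmt.Println("reading from a pipe or redirected file")
    } else {
        fmt.Println("no piped input; try: echo \"some string\" | ./pipes")
    }
}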
An ANSI coloring application

Coloring text in an ANSI terminal application is handled by emitting escape codes before and after the section of text that you want colored. This recipe will explore a basic coloring mechanism to color text red or keep it plain. For a more complete application, take a look at https://github.com/agtorre/gocolorize, which supports many more colors and text types and implements the fmt.Formatter interface for ease of printing.

Getting ready

Refer to the Getting ready section of the Using command-line arguments recipe.

How to do it...

From your terminal/console application, create and navigate to the chapter2/ansicolor directory. Copy the tests from https://github.com/agtorre/go-cookbook/tree/master/chapter2/ansicolor or use this as an exercise to write some of your own. Create a file called color.go with the following content:

package ansicolor

import "fmt"

// Color of text
type Color int

const (
    // ColorNone is default
    ColorNone = iota
    // Red colored text
    Red
    // Green colored text
    Green
    // Yellow colored text
    Yellow
    // Blue colored text
    Blue
    // Magenta colored text
    Magenta
    // Cyan colored text
    Cyan
    // White colored text
    White
    // Black colored text
    Black Color = -1
)

// ColorText holds a string and its color
type ColorText struct {
    TextColor Color
    Text      string
}

func (r *ColorText) String() string {
    if r.TextColor == ColorNone {
        return r.Text
    }
    value := 30
    if r.TextColor != Black {
        value += int(r.TextColor)
    }
    return fmt.Sprintf("\033[0;%dm%s\033[0m", value, r.Text)
}

Create a new directory named example. Navigate to example and then create a file named main.go with the following content. Ensure that you modify the ansicolor import to use the path you set up in step 1:

package main

import (
    "fmt"

    "github.com/agtorre/go-cookbook/chapter2/ansicolor"
)

func main() {
    r := ansicolor.ColorText{ansicolor.Red, "I'm red!"}
    fmt.Println(r.String())

    r.TextColor = ansicolor.Green
    r.Text = "Now I'm green!"
    fmt.Println(r.String())

    r.TextColor = ansicolor.ColorNone
    r.Text = "Back to normal..."
    fmt.Println(r.String())
}

Run go run main.go. Alternatively, you may also run the following:

go build
./example

You should see the following, with the text colored if your terminal supports the ANSI coloring format:

$ go run main.go
I'm red!
Now I'm green!
Back to normal...

If you copied or wrote your own tests, go up one directory, run go test, and ensure that all the tests pass.

How it works...

This application uses a struct to maintain the state of the colored text; in this case, it stores the color of the text and the value of the text. The final string is rendered when you call the String() method, which returns either colored text or plain text, depending on the values stored in the struct. By default, the text is plain.

Summary

In this article, we demonstrated basic ways to construct a complex command-line application using arguments, including a package-level config, required positional arguments, multi-level command usage, and how to split these things into multiple files or packages if needed. We saw how to work with Unix pipes and explored a basic coloring mechanism to color text red or keep it plain.

Further resources on this subject: Building a Command-line Tool, A Command-line Companion Called Artisan, Scaffolding with the command-line tool.

Continuous Delivery

Packt
06 Jul 2017
11 min read
In this article, Jonathan Baier, the author of Getting Started with Kubernetes - Second Edition, will show the reader how to integrate their build pipeline and deployments with a Kubernetes cluster. It will cover the concept of using Gulp.js and Jenkins in conjunction with your Kubernetes cluster. This article will discuss the following topics: Integrating with continuous deployment pipeline Using Gulp.js with Kubernetes Integrating Jenkins with Kubernetes Integrating with continuous delivery pipeline Continuous integration and delivery are key components to modern development shops. Speed to market or mean-time-to-revenue are crucial for any company that is creating their own software. We'll see how Kubernetes can help you. CI/CD (short for Continuous Integration / Continuous Delivery) often requires ephemeral build and test servers to be available whenever changes are pushed to the code repository. Docker and Kubernetes are well suited for this task, as it's easy to create containers in a few seconds and just as easy to remove them after builds are run. In addition, if you already have a large portion of infrastructure available on your cluster, it can make sense to utilize the idle capacity for builds and testing. In this article, we will explore two popular tools used in building and deploying software: Gulp.js: This is a simple task runner used to automate the build process using JavaScript and Node.js Jenkins: This is a fully-fledged continuous integration server Gulp.js Gulp.js gives us the framework to do Build as code. Similar to Infrastructure as code, this allows us to programmatically define our build process. We will walk through a short example to demonstrate how you can create a complete workflow from a Docker image build to the final Kubernetes service. Prerequisites For this section of the article, you will need a NodeJS environment installed and ready including the node package manager (npm). If you do not already have these packages installed, you can find instructions for installing them at https://docs.npmjs.com/getting-started/installing-node. You can check whether NodeJS is installed correctly with a node -v command. You'll also need Docker CE and a DockerHub account to push a new image. You can find instructions to install Docker CE at https://docs.docker.com/installation/. You can easily create a DockerHub account at https://hub.docker.com/. After you have your credentials, you can log in with the CLI using $ docker login command. Gulp build example Let's start by creating a project directory named node-gulp: $ mkdir node-gulp $ cd node-gulp Next, we will install the gulp package and check whether it's ready by running the npm command with the version flag, as follows: $ npm install -g gulp You may need to open a new terminal window to make sure that gulp is on your path. Also, make sure to navigate back to your node-gulp directory: $ gulp -v Next, we will install gulp locally in our project folder as well as the gulp-git and gulp-shell plugins, as follows: $ npm install --save-dev gulp $ npm install gulp-git -save $ npm install --save-dev gulp-shell Finally, we need to create a Kubernetes controller and service definition file, as well as a gulpfile.js file, to run all our tasks. Again, these files are available in the book file bundle, if you wish to copy them instead. 
Refer to the following code:

apiVersion: v1
kind: ReplicationController
metadata:
  name: node-gulp
  labels:
    name: node-gulp
spec:
  replicas: 1
  selector:
    name: node-gulp
  template:
    metadata:
      labels:
        name: node-gulp
    spec:
      containers:
      - name: node-gulp
        image: <your username>/node-gulp:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80

Listing 7-1: node-gulp-controller.yaml

As you can see, we have a basic controller. You will need to replace <your username>/node-gulp:latest with your Docker Hub username:

apiVersion: v1
kind: Service
metadata:
  name: node-gulp
  labels:
    name: node-gulp
spec:
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
  selector:
    name: node-gulp

Listing 7-2: node-gulp-service.yaml

Next, we have a simple service that selects the pods from our controller and creates an external load balancer for access, as earlier:

var gulp = require('gulp');
var git = require('gulp-git');
var shell = require('gulp-shell');

// Clone a remote repo
gulp.task('clone', function(){
  return git.clone('https://github.com/jonbaierCTP/getting-started-with-kubernetes-se.git', function (err) {
    if (err) throw err;
  });
});

// Update codebase
gulp.task('pull', function(){
  return git.pull('origin', 'master', {cwd: './getting-started-with-kubernetes-se'}, function (err) {
    if (err) throw err;
  });
});

// Build Docker Image
gulp.task('docker-build', shell.task([
  'docker build -t <your username>/node-gulp ./getting-started-with-kubernetes-se/docker-image-source/container-info/',
  'docker push <your username>/node-gulp'
]));

// Run New Pod
gulp.task('create-kube-pod', shell.task([
  'kubectl create -f node-gulp-controller.yaml',
  'kubectl create -f node-gulp-service.yaml'
]));

// Update Pod
gulp.task('update-kube-pod', shell.task([
  'kubectl delete -f node-gulp-controller.yaml',
  'kubectl create -f node-gulp-controller.yaml'
]));

Listing 7-3: gulpfile.js

Finally, we have the gulpfile.js file. This is where all our build tasks are defined. Again, fill in your Docker Hub username in both the <your username>/node-gulp sections. Looking through the file: first, the clone task downloads our image source code from GitHub. The pull task executes a git pull on the cloned repository. Next, the docker-build command builds an image from the container-info subfolder and pushes it to DockerHub. Finally, we have the create-kube-pod and update-kube-pod commands. As you can guess, the create-kube-pod command creates our controller and service for the first time, whereas the update-kube-pod command simply replaces the controller. Let's go ahead and run these commands and see our end-to-end workflow:

$ gulp clone
$ gulp docker-build

The first time through, you can run the create-kube-pod command, as follows:

$ gulp create-kube-pod

This is all there is to it. If we run a quick kubectl describe command for the node-gulp service (a short sketch follows this example), we can get the external IP for our new service. Browse to that IP and you'll see the familiar container-info application running. Note that the host starts with node-gulp, just as we named it in the previously mentioned pod definition:

Service launched by Gulp build

On subsequent updates, run the pull and update-kube-pod commands, as shown here:

$ gulp pull
$ gulp docker-build
$ gulp update-kube-pod

This is a very simple example, but you can begin to see how easy it is to coordinate your build and deployment end to end with a few simple lines of code.
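As a hedged aside (these exact commands are not part of the chapter's listings, but they use standard kubectl subcommands), this is the kind of lookup you might run to find the external IP of the service created above; the service name matches the definition in Listing 7-2:

$ kubectl get service node-gulp
$ kubectl describe service node-gulp | grep -i ingress

With a LoadBalancer-type service, the external address typically appears in the LoadBalancer Ingress (or EXTERNAL-IP) field once the cloud provider has provisioned it.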
Next, we will look at how to use Kubernetes to actually run builds using Jenkins.

Kubernetes plugin for Jenkins

One way we can use Kubernetes for our CI/CD pipeline is to run our Jenkins build slaves in a containerized environment. Luckily, there is already a plugin, written by Carlos Sanchez, which allows you to run Jenkins slaves in Kubernetes pods.

Prerequisites

You'll need a Jenkins server handy for this next example. If you don't have one you can use, there is a Docker image available at https://hub.docker.com/_/jenkins/. Running it from the Docker CLI is as simple as this:

docker run --name myjenkins -p 8080:8080 -v /var/jenkins_home jenkins

Installing plugins

Log in to your Jenkins server, and from your home dashboard, click on Manage Jenkins. Then, select Manage Plugins from the list. A note for those installing a new Jenkins server: when you first log in to the Jenkins server, it asks you to install plugins. Choose the default ones, or no plugins will be installed:

Jenkins main dashboard

The credentials plugin is required, but it should be installed by default. We can check the Installed tab if in doubt, as shown in the following screenshot:

Jenkins installed plugins

Next, we can click on the Available tab. The Kubernetes plugin should be located under Cluster Management and Distributed Build or Misc (cloud). There are many plugins, so you can alternatively search for Kubernetes on the page. Check the box for Kubernetes Plugin and click on Install without restart. This will install the Kubernetes Plugin and the Durable Task Plugin:

Plugin installation

If you wish to install a nonstandard version or just like to tinker, you can optionally download the plugins. The latest Kubernetes and Durable Task plugins can be found here:

Kubernetes plugin: https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+Plugin
Durable Task plugin: https://wiki.jenkins-ci.org/display/JENKINS/Durable+Task+Plugin

Next, we can click on the Advanced tab and scroll down to Upload Plugin. Navigate to the durable-task.hpi file and click on Upload. You should see a screen that shows an installation progress bar. After a minute or two, it will update to Success. Finally, install the main Kubernetes plugin: on the left-hand side, click on Manage Plugins and then the Advanced tab once again. This time, upload the kubernetes.hpi file and click on Upload. After a few minutes, the installation should be complete.

Configuring the Kubernetes plugin

Click on Back to Dashboard or the Jenkins link in the top-left corner. From the main dashboard page, click on the Credentials link. Choose a domain from the list; in my case, I just used the default Global credentials domain. Click on Add Credentials:

Add credentials screen

Leave Kind as Username with password and Scope as Global. Add your Kubernetes admin credentials. Remember that you can find these by running the config command:

$ kubectl config view

You can leave ID blank, give it a sensible description, and click on the OK button. Now that we have our credentials saved, we can add our Kubernetes server. Click on the Jenkins link in the top-left corner and then Manage Jenkins. From there, select Configure System and scroll all the way down to the Cloud section. Select Kubernetes from the Add a new cloud dropdown and a Kubernetes section will appear, as follows:

New Kubernetes cloud settings

You'll need to specify the URL for your master in the form of https://<Master IP>/. Next, choose the credentials we added from the drop-down list.
Since Kubernetes uses a self-signed certificate by default, you'll also need to check the Disable https certificate check checkbox. Click on Test Connection and, if all goes well, you should see Connection successful appear next to the button. If you are using an older version of the plugin, you may not see the Disable https certificate check checkbox; if this is the case, you will need to install the self-signed certificate directly on the Jenkins Master.

Finally, we will add a pod template by choosing Kubernetes Pod Template from the Add Pod Template dropdown next to Images. This will create another new section. Use jenkins-slave for the Name and Labels sections. Click on Add next to Containers and again use jenkins-slave for the Name. Use csanchez/jenkins-slave for the Docker Image and leave /home/jenkins as the Working Directory. Labels can be used later on in the build settings to force the build to use the Kubernetes cluster:

Kubernetes cluster addition

Here is the Pod Template that expands below the cluster addition:

Kubernetes pod template

Click on Save and you are all set. Now, new builds created in Jenkins can use the slaves in the Kubernetes pod we just created.

Here is another note about firewalls: the Jenkins Master will need to be reachable by all the machines in your Kubernetes cluster, as the pod could land anywhere. You can find your port settings in Jenkins under Manage Jenkins and Configure Global Security.

Bonus fun

Fabric8 bills itself as an integration platform. It includes a variety of logging, monitoring, and continuous delivery tools. It also has a nice console, an API registry, and a 3D game that lets you shoot at your pods. It's a very cool project, and it actually runs on Kubernetes. Refer to http://fabric8.io/. It's an easy, single-command setup on your Kubernetes cluster; refer to http://fabric8.io/guide/getStarted/gke.html.

Summary

We looked at two continuous integration tools that can be used with Kubernetes. We did a brief walkthrough of deploying the Gulp.js task on our cluster. We also looked at a new plugin used to integrate Jenkins build slaves into your Kubernetes cluster. You should now have a better sense of how Kubernetes can integrate with your own CI/CD pipeline.

IIS 10 Fundamentals

Packt
06 Jul 2017
12 min read
In this article by Ashraf Khan, the author of the book Microsoft IIS 10 Cookbook, we will cover the following topics:

Understanding IIS 10
Basic requirements of IIS 10
Understanding application pools on IIS 10
Installation of a lower framework version
Configuration of an application pool on IIS 10

Understanding IIS 10

In this recipe, we will look at IIS 10's new features. We will have an overview of the following additions:

HTTP/2: HTTP/2 requests are now faster than ever. This feature is active by default with IIS 10 on Windows Server 2016 and Windows 10.
IIS 10 on Nano Server: IIS 10 is quick and easy to install on Nano Server, and you can manage it remotely with PowerShell or the IIS Manager console. Nano Server is much faster and consumes less memory and disk space than the full-fledged Windows Server; rebooting is also faster, so you can manage your time effectively.
Wildcard host headers: IIS 10 supports subdomains for your parent domain name, which really helps when managing many subdomains under the same primary domain name.
PowerShell 5 cmdlets: IIS 10 adds a new, simplified PowerShell module for quick and easy management. You can use PowerShell to access server-management features remotely, and the existing WebAdministration cmdlets are still supported.
FTP: FTP is a simple protocol for transferring files inside your company LAN and WAN, using the default port, 21. IIS 10 includes an FTP server that is easy to configure and manage.
FTPS: FTPS is the same as FTP, with the only difference that it is secure. FTPS transfers data over SSL, using HTTPS port 443. For this, we need to create and install an SSL certificate that encrypts and decrypts data securely. SSL ensures that all data passed between the web server and the browser remains private and consistent during uploads and downloads over private or public networks.
Multi-web hosting: IIS 10 allows you to create multiple websites and multiple applications on the same server. You can easily create and manage new virtual directories in the default location or a custom location.
Virtual directories: IIS 10 makes it easy to create and manage the virtual directories you require.

Understanding application pools on IIS 10

In this recipe, we are going to understand application pools. We can simply say that the application pool is the heart of IIS 10. Application pools are logical groupings of web applications that execute in a common process, allowing greater granularity over which programs are clustered together in a single process. For example, if you require every web application to execute in a separate process, you simply create an application pool for each application or framework version. Let's say we have more than one version of a website, one supporting framework 2.0 and another supporting framework 4.0, or different applications such as PHP or WordPress; all of these website processes are managed through application pools.

Getting ready

To step through this recipe, you will need a running IIS 10 installation and administrative privileges. No other prerequisites are required.

How to do it...

Open the Server Manager on Windows Server 2016. Click on the Tools menu and open IIS Manager. Expand the IIS server (WIN2016IIS, the local server name in this example). We get the listed application pools and sites.
In Application Pools, you will see the IIS 10 DefaultAppPool, as shown in the figure above. You also get an Actions panel on the right-hand side of the screen, where you can add application pools. Click on DefaultAppPool and you will get the Actions panel for DefaultAppPool. Here you will find the Application Pool Tasks options highlighted on the right, where you can Start, Stop, and Recycle the IIS 10 services. In the Edit Application Pool section, you can change the application pool settings through Basic Settings... and Advanced Settings..., rename the application pool, and also configure Recycling...

How it works...

Let's take a look at what we explored in IIS Manager and Application Pools. We covered the basics of application pools and the properties we can change as our requirements demand. The IIS 10 default application pool framework is v4.0, which supports applications up to v4.6, but we also have options for installing different application pool versions. We can easily customize an application pool to fulfill a typical web application requirement. Several application pool options are available in the Actions pane: we can add a new application pool; start, stop, and recycle application pool tasks; and perform editing and automated recycling. In the next recipe, we will learn more about application pools by installing a lower framework version.

Installation of a lower framework version

In this recipe, we are going to install .NET Framework 3.5 on Windows Server 2016. By default, IIS 10 has framework 4.0; we will install the lower framework version, which supports web applications built against .NET Framework 2.0 to 3.5. Suppose you have a web application that you created a few years back and developed against the v2.0 .NET Framework, and you want to run it on IIS 10; that is the scenario this recipe covers.

Getting ready

To step through this recipe, you need to install framework version 3.5 (the v3.5 framework is based on the v2.0 framework). You will need Windows Server 2016, either the Windows Server 2016 operating system media (for the framework 3.5 source files) or an Internet connection on the server, and administrative privileges. No other prerequisites are required.

How to do it...

Open the Server Manager on Windows Server 2016 and click on the highlighted Add roles and features option. Click on Next until you reach the Select features wizard. In the Features panel, check the .NET Framework 3.5 Features checkbox; this also installs the 2.0-compatible framework. Move to the next wizard page, as shown in the figure. A warning appears before the installation: Do you need to specify an alternate source path? One or more installation selections are missing source files on the destination. We have to provide the sources\SxS folder path from the installation media, so click on Specify an alternate source path. See the next figure for more details. In our case, the Windows Server 2016 media is on the D: drive; the path may be different on your system, depending on where you downloaded or mounted the media. Inside the sources folder there is a subfolder called SxS, which contains the .NET Framework 3.5 installation files. Now that you know where the source folder is, go to the Confirmation screen and click on Install. The next figure shows the installation progress on WIN2016IIS. (If you prefer the command line, a hedged PowerShell alternative is sketched below.)
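As a hedged alternative to the Add Roles and Features wizard (assuming the same D:\sources\SxS media path used above; adjust it for your system), the .NET Framework 3.5 feature can usually be installed from an elevated PowerShell prompt with the Install-WindowsFeature cmdlet:

# Install .NET Framework 3.5 using the local media as the source
Install-WindowsFeature NET-Framework-Core -Source D:\sources\sxs

# Omit -Source to pull the payload from Windows Update instead (requires Internet access)
Install-WindowsFeature NET-Framework-Core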
Click on Close when the installation is completed; framework 3.5 is now available on your server. Next, verify that framework 3.5 has actually been installed; it should show up in the features wizard. Open the Server Manager, click on Add roles and features, and click Next until you reach the Select features wizard. You will see the .NET Framework 3.5 checkbox checked and grayed out, so it can no longer be checked or unchecked, as shown in the next figure. This confirms that .NET Framework 3.5 has been installed. The same feature can also be installed through PowerShell (see the earlier sketch), or directly from Windows Update, which requires Internet connectivity and a running Windows Update service on the server.

How it works...

In this recipe, the IIS administrator installed the v3.5 framework. The version 3.5 framework on Windows Server 2016 lets us run applications built against .NET Framework v2.0 or v3.5: the v3.5 framework processes applications built on either version. We also found out where the sources\SxS folder is, and after the installation we verified that .NET Framework v3.5 is available. Next, we are going to create an application pool that supports .NET Framework v3.5.

Configuration of an application pool on IIS 10

In this recipe, we will have an overview of the application pool properties. We will check the default configuration of Basic Settings, Recycling, and Advanced Settings. This is very helpful for developers and system administrators, as we can configure different properties of different application pools based on each application's requirements.

Getting ready

For this recipe, we need IIS 10 with any version of the .NET Framework installed, and administrative privileges. No other prerequisites are required.

How to do it...

Open the Server Manager on Windows Server 2016. Click on the Tools menu and open IIS Manager. Expand the IIS server (WIN2016IIS). We get the listed application pools, as shown in the next figure. The application pools we have already created are displayed here: 2and3.5AppPool, Asp.net, and DefaultAppPool (the default one). In the Actions panel, we can add more application pools and set any one of the created application pools as the default; the default application pool is the one preselected as the application pool when we create a website. Select 2and3.5AppPool from the application pools. The Actions pane shows the list of available properties you can change if needed; the version of 2and3.5AppPool is v2.0, as you can see in the next figure, along with the Application Pool Tasks and Edit Application Pool sections for the selected pool. From Application Pool Tasks we can Start, Stop, and Recycle... the application pool.

Now let's look at the basic properties of an application pool. Click on Basic Settings... under Edit Application Pool, and the dialog shown in the next figure appears. Basic Settings... is nothing but a quick way to change a limited number of things: we can change the .NET Framework version to v4.0 or v3.5 (version 3.5 is the updated version of 2.0), change the Managed pipeline mode to Integrated or Classic, and check or uncheck the start option. Next is Advanced Settings..., which has more options for customizing the selected application pool. Click on Advanced Settings..., and the next figure will open.
The Advanced Settings... window offers many more options. You can change the .NET framework version and enable or disable 32-bit application support (true or false). Queue Length is 1000 by default and can be reduced or increased as needed. Start Mode can be set to OnDemand or AlwaysRunning. We can also limit CPU utilization, which helps manage the load and performance of each application. The Process Model section defines how the application pool's worker process is kept available and accessible. More application pool settings are visible in the next figure. Rapid-Fail Protection governs how the pool reacts to repeated worker process failures and is generally used as part of a failover setup. Recycling refreshes the application pool with overlapped worker processes; we can keep the default recycling value or configure more specific conditions by clicking on Recycling.... The recycling conditions window is shown in the next figure. Recycling can be triggered by conditions such as virtual memory usage, private memory usage, a specific time, regular time intervals, or a fixed number of requests, and a log file records which condition fired at what time. Here you set fixed intervals based on time or on the number of requests, a specific time of day, or memory utilization thresholds for virtual and private memory. Click Next. In the Recycling Events to Log window, we choose which recycling events are written to the log. How it works... In this recipe we have looked at three groups of application pool properties - Basic settings, Advanced settings, and Recycling. We use these properties for the web applications we host on the IIS server, which are processed through the application pool. When hosting a web application there is usually some requirement that has to be configured in the application pool settings. For example, if management decides that we need to limit the request queue of the 2and3.5AppPool application, we can simply go to Advanced Settings and change it (a command-line equivalent is sketched at the end of this excerpt). In the next section, we are going to host a v4.0 .NET framework website and make use of a v4.0 application pool. Summary In this article, we understood application pools in IIS 10 and how to install and configure them. We also understood how to install a lower framework version. Resources for Article: Further resources on this subject: Exploring Microsoft Dynamics NAV – An Introduction [article] The Microsoft Azure Stack Architecture [article] Setting up Microsoft Bot Framework Dev Environment [article]
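As a command-line complement to the recipe above, here is a hedged PowerShell sketch using the WebAdministration module; the pool name and values mirror the example from the How it works... section and are illustrative only:

Import-Module WebAdministration

# Limit the request queue of the 2and3.5AppPool (the default queue length is 1000)
Set-ItemProperty IIS:\AppPools\2and3.5AppPool -Name queueLength -Value 500

# Keep the pool's worker process always running instead of starting it on demand
Set-ItemProperty IIS:\AppPools\2and3.5AppPool -Name startMode -Value AlwaysRunning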
Ruby Strings
Packt
06 Jul 2017
9 min read
In this article by Jordan Hudgens, the author of the book Comprehensive Ruby Programming, you'll learn about the Ruby String data type and walk through how to integrate string data into a Ruby program. Working with words, sentences, and paragraphs are common requirements in many applications. Additionally you learn how to: Employ string manipulation techniques using core Ruby methods Demonstrate how to work with the string data type in Ruby (For more resources related to this topic, see here.) Using strings in Ruby A string is a data type in Ruby and contains set of characters, typically normal English text (or whatever natural language you're building your program for), that you would write. A key point for the syntax of strings is that they have to be enclosed in single or double quotes if you want to use them in a program. The program will throw an error if they are not wrapped inside quotation marks. Let's walk through three scenarios. Missing quotation marks In this code I tried to simply declare a string without wrapping it in quotation marks. As you can see, this results in an error. This error is because Ruby thinks that the values are classes and methods. Printing strings In this code snippet we're printing out a string that we have properly wrapped in quotation marks. Please note that both single and double quotation marks work properly. It's also important that you do not mix the quotation mark types. For example, if you attempted to run the code: puts "Name an animal' You would get an error, because you need to ensure that every quotation mark is matched with a closing (and matching) quotation mark. If you start a string with double quotation marks, the Ruby parser requires that you end the string with the matching double quotation marks. Storing strings in variables Lastly in this code snippet we're storing a string inside of a variable and then printing the value out to the console. We'll talk more about strings and string interpolation in subsequent sections. String interpolation guide for Ruby In this section, we are going to talk about string interpolation in Ruby. What is string interpolation? So what exactly is string interpolation? Good question. String interpolation is the process of being able to seamlessly integrate dynamic values into a string. Let's assume we want to slip dynamic words into a string. We can get input from the console and store that input into variables. From there we can call the variables inside of a pre-existing string. For example, let's give a sentence the ability to change based on a user's input. puts "Name an animal" animal = gets.chomp puts "Name a noun" noun= gets.chomp p "The quick brown #{animal} jumped over the lazy #{noun} " Note the way I insert variables inside the string? They are enclosed in curly brackets and are preceded by a # sign. If I run this code, this is what my output will look: So, this is how you insert values dynamically in your sentences. If you see sites like Twitter, it sometimes displays personalized messages such as: Good morning Jordan or Good evening Tiffany. This type of behavior is made possible by inserting a dynamic value in a fixed part of a string and leverages string interpolation. Now, let's use single quotes instead of double quotes, to see what happens. As you'll see, the string was printed as it is without inserting the values for animal and noun. This is exactly what happens when you try using single quotes—it prints the entire string as it is without any interpolation. 
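To make the difference concrete, here is a minimal sketch; the variable value is only an illustration:

name = "Jordan"
puts "Good morning #{name}"    # => Good morning Jordan
puts 'Good morning #{name}'    # => Good morning #{name}  (no interpolation)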
Therefore it's important to remember the difference. Another interesting aspect is that anything inside the curly brackets can be a Ruby script. So, technically, you can type your entire algorithm inside these curly brackets, and Ruby will run it perfectly for you. However, it is not recommended for practical programming purposes. For example, I can insert a math equation, and as you'll see, it prints the value out. String manipulation guide In this section we are going to learn about string manipulation along with a number of examples of how to integrate string manipulation methods in a Ruby program. What is string manipulation? So what exactly is string manipulation? It's the process of altering the format or value of a string, usually by leveraging string methods. String manipulation code examples Let's start with an example. Let's say I want my application to always display the word Astros in capital letters. To do that, I simply write: "Astros".upcase Now if I always want a string to be in lowercase letters, I can use the downcase method, like so: "Astros".downcase Those are both methods I use quite often. However, there are other string methods available that we also have at our disposal. For the rare times when you want to literally swap the case of the letters, you can leverage the swapcase method: "Astros".swapcase And lastly, if you want to reverse the order of the letters in the string, we can call the reverse method: "Astros".reverse These methods are built into the String data class and we can call them on any string values in Ruby. Method chaining Another neat thing we can do is join different methods together to get custom output. For example, I can run: "Astros".reverse.upcase The preceding code displays the value SORTSA. This practice of combining different methods with a dot is called method chaining. Split, strip, and join guides for strings In this section, we are going to walk through how to use the split and strip methods in Ruby. These methods will help us clean up strings and convert a string to an array so we can access each word as its own value. Using the strip method Let's start off by analyzing the strip method. Imagine that the input you get from the user or from the database is poorly formatted and contains white space before and after the value. To clean the data up we can use the strip method. For example: str = " The quick brown fox jumped over the quick dog " p str.strip When you run this code, the output is just the sentence without the white space before and after the words. Using the split method Now let's walk through the split method. The split method is a powerful tool that allows you to split a sentence into an array of words or characters. For example, when you type the following code: str = "The quick brown fox jumped over the quick dog" p str.split You'll see that it converts the sentence into an array of words. This method can be particularly useful for long paragraphs, especially when you want to know the number of words in the paragraph. Since the split method converts the string into an array, you can use all the array methods, like size, to see how many words were in the string. We can leverage method chaining to find out how many words are in the string, like so: str = "The quick brown fox jumped over the quick dog" p str.split.size This should return a value of 9, which is the number of words in the sentence.
To know the number of letters, we can pass an optional argument to the split method and use the format: str = "The quick brown fox jumped over the quick dog" p str.split(//).size And if you want to see all of the individual letters, we can remove the size method call, like this: p str.split(//) And your output should look like this: Notice, that it also included spaces as individual characters which may or may not be what you want a program to return. This method can be quite handy while developing real-world applications. A good practical example of this method is Twitter. Since this social media site restricts users to 140 characters, this method is sure to be a part of the validation code that counts the number of characters in a Tweet. Using the join method We've walked through the split method, which allows you to convert a string into a collection of characters. Thankfully, Ruby also has a method that does the opposite, which is to allow you to convert an array of characters into a single string, and that method is called join. Let's imagine a situation where we're asked to reverse the words in a string. This is a common Ruby coding interview question, so it's an important concept to understand since it tests your knowledge of how string work in Ruby. Let's imagine that we have a string, such as: str = "backwards am I" And we're asked to reverse the words in the string. The pseudocode for the algorithm would be: Split the string into words Reverse the order of the words Merge all of the split words back into a single string We can actually accomplish each of these requirements in a single line of Ruby code. The following code snippet will perform the task: str.split.reverse.join(' ') This code will convert the single string into an array of strings, for the example it will equal ["backwards", "am", "I"]. From there it will reverse the order of the array elements, so the array will equal: ["I", "am", "backwards"]. With the words reversed, now we simply need to merge the words into a single string, which is where the join method comes in. Running the join method will convert all of the words in the array into one string. Summary In this article, we were introduced to the string data type and how it can be utilized in Ruby. We analyzed how to pass strings into Ruby processes by leveraging string interpolation. We also learned the methods of basic string manipulation and how to find and replace string data. We analyzed how to break strings into smaller components, along with how to clean up string based data. We even introduced the Array class in this article. Resources for Article: Further resources on this subject: Ruby and Metasploit Modules [article] Find closest mashup plugin with Ruby on Rails [article] Building tiny Web-applications in Ruby using Sinatra [article]
Spark Streaming
Packt
06 Jul 2017
11 min read
In this article by Romeo Kienzler, the author of the book Mastering Apache Spark 2.x - Second Edition, we will see that the Spark Streaming module is a stream processing module within Apache Spark. It uses the Spark cluster to offer the ability to scale to a high degree. Being based on Spark, it is also highly fault tolerant, having the ability to rerun failed tasks by check-pointing the data stream that is being processed. The following areas will be covered in this article after an initial section, which will provide a practical overview of how Apache Spark processes stream-based data: Error recovery and check-pointing TCP-based stream processing File streams Kafka stream source For each topic, we will provide a worked example in Scala, and will show how the stream-based architecture can be set up and tested. (For more resources related to this topic, see here.) Overview The following diagram shows potential data sources for Apache Spark Streaming, such as Kafka, Flume, and HDFS: These feed into the Spark Streaming module, and are processed as Discrete Streams. The diagram also shows that other Spark module functionality, such as machine learning, can be used to process the stream-based data. The fully processed data can then be an output for HDFS, databases, or dashboards. This diagram is based on the one at the Spark Streaming website, but we wanted to extend it to express the Spark module functionality: When discussing Spark Discrete Streams, the previous figure, again taken from the Spark website at http://spark.apache.org/, is the diagram we like to use. The green boxes in the previous figure show the continuous data stream sent to Spark being broken down into Discrete Streams (DStreams). The size of each element in the stream is then based on a batch time, which might be two seconds. It is also possible to create a window, expressed as the previous red box, over the DStream. For instance, when carrying out trend analysis in real time, it might be necessary to determine the top ten Twitter-based hashtags over a ten-minute window. So, given that Spark can be used for stream processing, how is a stream created? The following Scala-based code shows how a Twitter stream can be created. This example is simplified because Twitter authorization has not been included, but you get the idea. The Spark Streaming Context (SSC) is created using the Spark Context sc. A batch time is specified when it is created; in this case, 5 seconds. A Twitter-based DStream, called stream, is then created from the StreamingContext using a window of 60 seconds: val ssc = new StreamingContext(sc, Seconds(5) ) val stream = TwitterUtils.createStream(ssc,None).window( Seconds(60) ) The stream processing can be started with the stream context start method (shown next), and the awaitTermination method indicates that it should process until stopped. So, if this code is embedded in a library-based application, it will run until the session is terminated, perhaps with a Ctrl + C: ssc.start() ssc.awaitTermination() This explains what Spark Streaming is and what it does, but it does not explain error handling, or what to do if your stream-based application fails. The next section will examine Spark Streaming error management and recovery. Errors and recovery Generally, the question that needs to be asked for your application is: is it critical that you receive and process all the data? If not, then on failure you might just be able to restart the application and discard the missing or lost data.
If this is not the case, then you will need to use check pointing, which will be described in the next section. It is also worth noting that your application's error management should be robust and self-sufficient. What we mean by this is that; if an exception is non-critical, then manage the exception, perhaps log it, and continue processing. For instance, when a task reaches the maximum number of failures (specified by spark.task.maxFailures), it will terminate processing. Checkpointing It is possible to set up an HDFS-based checkpoint directory to store Apache Spark-based streaming information. In this Scala example, data will be stored in HDFS, under /data/spark/checkpoint. The following HDFS file system ls command shows that before starting, the directory does not exist: [hadoop@hc2nn stream]$ hdfs dfs -ls /data/spark/checkpoint ls: `/data/spark/checkpoint': No such file or directory The Twitter-based Scala code sample given next, starts by defining a package name for the application, and by importing Spark Streaming Context, and Twitter-based functionality. It then defines an application object named stream1: package nz.co.semtechsolutions import org.apache.spark._ import org.apache.spark.SparkContext._ import org.apache.spark.streaming._ import org.apache.spark.streaming.twitter._ import org.apache.spark.streaming.StreamingContext._ object stream1 { Next, a method is defined called createContext, which will be used to create both the spark, and streaming contexts. It will also checkpoint the stream to the HDFS-based directory using the streaming context checkpoint method, which takes a directory path as a parameter. The directory path being the value (cpDir) that was passed into the createContext method:   def createContext( cpDir : String ) : StreamingContext = { val appName = "Stream example 1" val conf = new SparkConf() conf.setAppName(appName) val sc = new SparkContext(conf) val ssc = new StreamingContext(sc, Seconds(5) ) ssc.checkpoint( cpDir ) ssc } Now, the main method is defined, as is the HDFS directory, as well as Twitter access authority and parameters. The Spark Streaming context ssc is either retrieved or created using the HDFS checkpoint directory via the StreamingContext method—getOrCreate. If the directory doesn't exist, then the previous method called createContext is called, which will create the context and checkpoint. Obviously, we have truncated our own Twitter auth.keys in this example for security reasons: def main(args: Array[String]) { val hdfsDir = "/data/spark/checkpoint" val consumerKey = "QQpxx" val consumerSecret = "0HFzxx" val accessToken = "323xx" val accessTokenSecret = "IlQxx" System.setProperty("twitter4j.oauth.consumerKey", consumerKey) System.setProperty("twitter4j.oauth.consumerSecret", consumerSecret) System.setProperty("twitter4j.oauth.accessToken", accessToken) System.setProperty("twitter4j.oauth.accessTokenSecret", accessTokenSecret) val ssc = StreamingContext.getOrCreate(hdfsDir, () => { createContext( hdfsDir ) }) val stream = TwitterUtils.createStream(ssc,None).window( Seconds(60) ) // do some processing ssc.start() ssc.awaitTermination() } // end main Having run this code, which has no actual processing, the HDFS checkpoint directory can be checked again. 
This time it is apparent that the checkpoint directory has been created, and the data has been stored: [hadoop@hc2nn stream]$ hdfs dfs -ls /data/spark/checkpoint Found 1 items drwxr-xr-x - hadoop supergroup 0 2015-07-02 13:41 /data/spark/checkpoint/0fc3d94e-6f53-40fb-910d-1eef044b12e9 This example, taken from the Apache Spark website, shows how checkpoint storage can be set up and used. But how often is checkpointing carried out? The metadata is stored during each stream batch. The actual data is stored with a period, which is the maximum of the batch interval, or ten seconds. This might not be ideal for you, so you can reset the value using the method: DStream.checkpoint( newRequiredInterval ) Where newRequiredInterval is the new checkpoint interval value that you require, generally you should aim for a value which is five to ten times your batch interval. Checkpointing saves both the stream batch and metadata (data about the data). If the application fails, then when it restarts, the checkpointed data is used when processing is started. The batch data that was being processed at the time of failure is reprocessed, along with the batched data since the failure. Remember to monitor the HDFS disk space being used for check pointing. In the next section, we will begin to examine the streaming sources, and will provide some examples of each type. Streaming sources We will not be able to cover all the stream types with practical examples in this section, but where this article is too small to include code, we will at least provide a description. In this article, we will cover the TCP and file streams, and the Flume, Kafka, and Twitter streams. We will start with a practical TCP-based example. This article examines stream processing architecture. For instance, what happens in cases where the stream data delivery rate exceeds the potential data processing rate? Systems like Kafka provide the possibility of solving this issue by providing the ability to use multiple data topics and consumers. TCP stream There is a possibility of using the Spark Streaming Context method called socketTextStream to stream data via TCP/IP, by specifying a hostname and a port number. The Scala-based code example in this section will receive data on port 10777 that was supplied using the Netcat Linux command. The code sample starts by defining the package name, and importing Spark, the context, and the streaming classes. The object class named stream2 is defined, as it is the main method with arguments: package nz.co.semtechsolutions import org.apache.spark._ import org.apache.spark.SparkContext._ import org.apache.spark.streaming._ import org.apache.spark.streaming.StreamingContext._ object stream2 { def main(args: Array[String]) { The number of arguments passed to the class is checked to ensure that it is the hostname and the port number. A Spark configuration object is created with an application name defined. The Spark and streaming contexts are then created. Then, a streaming batch time of 10 seconds is set: if ( args.length < 2 ) { System.err.println("Usage: stream2 <host> <port>") System.exit(1) } val hostname = args(0).trim val portnum = args(1).toInt val appName = "Stream example 2" val conf = new SparkConf() conf.setAppName(appName) val sc = new SparkContext(conf) val ssc = new StreamingContext(sc, Seconds(10) ) A DStream called rawDstream is created by calling the socketTextStream method of the streaming context using the host and port name parameters. 
val rawDstream = ssc.socketTextStream( hostname, portnum ) A top-ten word count is created from the raw stream data by splitting words by spacing. Then a (key,value) pair is created as (word,1), which is reduced by the key value, this being the word. So now, there is a list of words and their associated counts. Now, the key and value are swapped, so the list becomes (count and word). Then, a sort is done on the key, which is now the count. Finally, the top 10 items in the RDD, within the DStream, are taken and printed out: val wordCount = rawDstream .flatMap(line => line.split(" ")) .map(word => (word,1)) .reduceByKey(_+_) .map(item => item.swap) .transform(rdd => rdd.sortByKey(false)) .foreachRDD( rdd => { rdd.take(10).foreach(x=>println("List : " + x)) }) The code closes with the Spark Streaming start, and awaitTermination methods being called to start the stream processing and await process termination: ssc.start() ssc.awaitTermination() } // end main } // end stream2 The data for this application is provided, as we stated previously, by the Linux Netcat (nc) command. The Linux Cat command dumps the contents of a log file, which is piped to nc. The lk options force Netcat to listen for connections, and keep on listening if the connection is lost. This example shows that the port being used is 10777: [root@hc2nn log]# pwd /var/log [root@hc2nn log]# cat ./anaconda.storage.log | nc -lk 10777 The output from this TCP-based stream processing is shown here. The actual output is not as important as the method demonstrated. However, the data shows, as expected, a list of 10 log file words in descending count order. Note that the top word is empty because the stream was not filtered for empty words: List : (17104,) List : (2333,=) List : (1656,:) List : (1603,;) List : (1557,DEBUG) List : (564,True) List : (495,False) List : (411,None) List : (356,at) List : (335,object) This is interesting if you want to stream data using Apache Spark Streaming, based upon TCP/IP from a host and port. But what about more exotic methods? What if you wish to stream data from a messaging system, or via memory-based channels? What if you want to use some of the big data tools available today like Flume and Kafka? The next sections will examine these options, but first I will demonstrate how streams can be based upon files. Summary We could have provided streaming examples for systems like Kinesis, as well as queuing systems, but there was not room in this article. This article has provided practical examples of data recovery via checkpointing in Spark Streaming. It has also touched on the performance limitations of checkpointing and shown that that the checkpointing interval should be set at five to ten times the Spark stream batch interval. Resources for Article: Further resources on this subject: Understanding Spark RDD [article] Spark for Beginners [article] Setting up Spark [article]
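The file-stream demonstration promised above does not appear in this excerpt. As a rough, hedged sketch of the idea, reusing the StreamingContext ssc from the earlier examples (the directory path is illustrative), a file-based source could look like this:

// Each new file that lands in the monitored directory is treated as a batch of text lines
val fileDstream = ssc.textFileStream("hdfs:///data/spark/incoming")

fileDstream.flatMap(line => line.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
  .print()

ssc.start()
ssc.awaitTermination()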
Chart Model and Draggable and Droppable Directives
Packt
06 Jul 2017
9 min read
In this article by Sudheer Jonna and Oleg Varaksin, the authors of the book Learning Angular UI Development with PrimeNG, we will see how to work with the chart model and learn about the draggable and droppable directives. (For more resources related to this topic, see here.) Working with the chart model The chart component provides a visual representation of data using charts on a web page. PrimeNG chart components are based on the Chart.js 2.x library (as a dependency), which is an HTML5 open source library. The chart model is based on the UIChart class and is represented by the element name p-chart. The chart components work by attaching the chart library file (Chart.js) to the project's entry point - in this case, the index.html file. It can be configured as either a CDN resource, a local resource, or through the CLI configuration. CDN resource configuration: <script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.5.0/Chart.bundle.min.js"></script> Angular CLI configuration: "scripts": [ "../node_modules/chart.js/dist/Chart.js", //..others ] More about the chart configuration and options is available in the official documentation of the Chart.js library (http://www.chartjs.org/). Chart types The chart type is defined through the type property. It supports six chart types: pie, bar, line, doughnut, polarArea, and radar. Each type has its own data format, which is supplied through the data property. For example, in a doughnut chart, the type should refer to doughnut and the data property should bind to the data options as shown here: <p-chart type="doughnut" [data]="doughnutdata"></p-chart> The component class has to define the data option with labels and datasets as follows: this.doughnutdata = { labels: ['PrimeNG', 'PrimeUI', 'PrimeReact'], datasets: [ { data: [3000, 1000, 2000], backgroundColor: [ "#6544a9", "#51cc00", "#5d4361" ], hoverBackgroundColor: [ "#6544a9", "#51cc00", "#5d4361" ] } ] }; Along with the labels and data options, other properties related to skinning can be applied too. The legends are closable by default (that is, if you want to visualize only particular data variants, you can collapse the legends that are not required). A collapsed legend is represented with a strike-through line, and the corresponding data segment disappears after clicking on its legend entry. Customization Each series is customized on a dataset basis, but you can customize the general or common options via the options attribute. For example, a line chart that customizes the default options would be as follows: <p-chart type="line" [data]="linedata" [options]="options"></p-chart> The component class needs to define the chart options with customized title and legend properties as follows: this.options = { title: { display: true, text: 'PrimeNG vs PrimeUI', fontSize: 16 }, legend: { position: 'bottom' } }; As per the preceding example, the title option is customized with a dynamic title, font size, and conditional display of the title, whereas the legend attribute is used to place the legend in the top, left, bottom, or right position. The default legend position is top. In this example, the legend position is bottom. (The linedata model bound above is not part of this excerpt; a possible shape is sketched below.)
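The linedata object bound in the preceding examples is not shown in this excerpt. As a rough sketch only - the labels, values, and colors below are invented for illustration - a Chart.js-style line data model could look like this:

this.linedata = {
  labels: ['2014', '2015', '2016', '2017'],
  datasets: [
    {
      label: 'PrimeNG',
      data: [20, 35, 40, 60],   // invented sample values
      fill: false,
      borderColor: '#6544a9'
    },
    {
      label: 'PrimeUI',
      data: [15, 25, 30, 45],   // invented sample values
      fill: false,
      borderColor: '#51cc00'
    }
  ]
};

Each object in datasets becomes one line on the chart, and the options object shown above is applied on top of whatever is defined here.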
The line chart with the preceding customized options results in the snapshot shown here: The Chart API also supports a couple of utility methods: refresh (redraws the graph with new data), reinit (destroys the existing graph and then creates it again), and generateLegend (returns an HTML string of a legend for that chart). Events The chart component provides a click event on data sets so that the selected data can be processed using the onDataSelect event callback. Let's take a line chart example with the onDataSelect event callback, passing an event object as follows: <p-chart type="line" [data]="linedata" (onDataSelect)="selectData($event)"></p-chart> In the component class, an event callback is used to display the selected data information in a message format as shown: selectData(event: any) { this.msgs = []; this.msgs.push({ severity: 'info', summary: 'Data Selected', 'detail': this.linedata.datasets[event.element._datasetIndex] .data[event.element._index] }); } In the preceding event callback (onDataSelect), we used the index of the dataset to display information. There are also many other options available from the event object: event.element (the selected element), event.dataset (the selected dataset), event.element._datasetIndex (the index of the dataset in data), and event.element._index (the index of the data in the dataset). Learning Draggable and Droppable directives Drag and drop is an action that means grabbing an object and dragging it to a different location. Components capable of being dragged and dropped enrich the web and form a solid base for modern UI patterns. The drag and drop utilities in PrimeNG allow us to create draggable and droppable user interfaces efficiently, abstracting away the browser-level implementation details for developers. In this section, we will learn about the pDraggable and pDroppable directives. We will introduce a DataGrid component containing some imaginary documents and make these documents draggable in order to drop them onto a recycle bin. The recycle bin is implemented as a DataTable component which shows the properties of the dropped documents. For the purpose of better understanding the developed code, a picture comes first: This picture shows what happens after dragging and dropping three documents. The complete demo application with instructions is available on GitHub at https://github.com/ova2/angular-development-with-primeng/tree/master/chapter9/dragdrop. Draggable pDraggable is attached to an element to add a drag behavior. The value of the pDraggable attribute is required--it defines the scope used to match draggables with droppables. By default, the whole element is draggable. We can restrict the draggable area by applying the dragHandle attribute, whose value can be any CSS selector. In the DataGrid with available documents, we only made the panel's header draggable: <p-dataGrid [value]="availableDocs"> <p-header> Available Documents </p-header> <ng-template let-doc pTemplate="item"> <div class="ui-g-12 ui-md-4" pDraggable="docs" dragHandle=".ui-panel-titlebar" dragEffect="move" (onDragStart)="dragStart($event, doc)" (onDragEnd)="dragEnd($event)"> <p-panel [header]="doc.title" [style]="{'text-align':'center'}"> <img src="/assets/data/images/docs/{{doc.extension}}.png"> </p-panel> </div> </ng-template> </p-dataGrid> The draggable element can fire three events as the dragging process begins, proceeds, and ends: onDragStart, onDrag, and onDragEnd respectively.
In the component class, we buffer the dragged document at the beginning and reset it at the end of the dragging process. This task is done in two callbacks: dragStart and dragEnd. class DragDropComponent { availableDocs: Document[]; deletedDocs: Document[]; draggedDoc: Document; constructor(private docService: DocumentService) { } ngOnInit() { this.deletedDocs = []; this.docService.getDocuments().subscribe((docs: Document[]) => this.availableDocs = docs); } dragStart(event: any, doc: Document) { this.draggedDoc = doc; } dragEnd(event: any) { this.draggedDoc = null; } ... } In the shown code, we used the Document interface with the following properties: interface Document { id: string; title: string; size: number; creator: string; creationDate: Date; extension: string; } In the demo application, we set the cursor to move when the mouse is moved over any panel's header. This trick provides a better visual feedback for draggable area: body .ui-panel .ui-panel-titlebar { cursor: move; } We can also set the dragEffect attribute to specifies the effect that is allowed for a drag operation. Possible values are none, copy, move, link, copyMove, copyLink, linkMove, and all. Refer the official documentation to read more details at https://developer.mozilla.org/en-US/docs/Web/API/DataTransfer/effectAllowed. Droppable pDroppable is attached to an element to add a drop behavior. The value of the pDroppable attribute should have the same scope as pDraggable. Droppable scope can also be an array to accept multiple droppables. The droppable element can fire four events. Event name Description onDragEnter Invoked when a draggable element enters the drop area onDragOver Invoked when a draggable element is being dragged over the drop area onDrop Invoked when a draggable is dropped onto the drop area onDragLeave Invoked when a draggable element leaves the drop area In the demo application, the whole code of the droppable area looks as follows: <div pDroppable="docs" (onDrop)="drop($event)" [ngClass]="{'dragged-doc': draggedDoc}"> <p-dataTable [value]="deletedDocs"> <p-header>Recycle Bin</p-header> <p-column field="title" header="Title"></p-column> <p-column field="size" header="Size (bytes)"></p-column> <p-column field="creator" header="Creator"></p-column> <p-column field="creationDate" header="Creation Date"> <ng-template let-col let-doc="rowData" pTemplate="body"> {{doc[col.field].toLocaleDateString()}} </ng-template> </p-column> </p-dataTable> </div> Whenever a document is being dragged and dropped into the recycle bin, the dropped document is removed from the list of all available documents and added to the list of deleted documents. This happens in the onDrop callback: drop(event: any) { if (this.draggedDoc) { // add draggable element to the deleted documents list this.deletedDocs = [...this.deletedDocs, this.draggedDoc]; // remove draggable element from the available documents list this.availableDocs = this.availableDocs.filter((e: Document) => e.id !== this.draggedDoc.id); this.draggedDoc = null; } } Both available and deleted documents are updated by creating new arrays instead of manipulating existing arrays. This is necessary in data iteration components to force Angular run change detection. Manipulating existing arrays would not run change detection and the UI would not be updated. The Recycle Bin area gets a red border while dragging any panel with document. We have achieved this highlighting by setting ngClass as follows: [ngClass]="{'dragged-doc': draggedDoc}". 
The style class dragged-doc is enabled when the draggedDoc object is set. The style class is defined as follows: .dragged-doc { border: solid 2px red; } Summary We started with the chart components, beginning with the chart model API and then learning how to create charts programmatically using various chart types such as pie, bar, line, doughnut, polar area, and radar charts. We also learned the features of the Draggable and Droppable directives. Resources for Article: Further resources on this subject: Building Components Using Angular [article] Get Familiar with Angular [article] Writing a Blog Application with Node.js and AngularJS [article]
Exposure to RxJava
Packt
06 Jul 2017
10 min read
In this article by Thomas Nield, the author of the book Learning RxJava, we will cover a quick exposure to RxJava, which is a Java VM implementation of ReactiveX (Reactive Extensions): a library for composing asynchronous and event-based programs by using observable sequences. (For more resources related to this topic, see here.) It is assumed you are fairly comfortable with Java and know how to use classes, interfaces, methods, properties, variables, static/nonstatic scopes, and collections. If you have not done concurrency or multithreading, that is okay. RxJava makes these advanced topics much more accessible. Have your favorite Java development environment ready, whether it is Intellij IDEA, Eclipse, NetBeans, or any other environment of your choosing. Recommended that you have a build automation system as well such as Gradle or Maven, which we will walk through shortly. History of ReactiveX and RxJava As developers, we tend to train ourselves to think in counter-intuitive ways. Modeling our world with code has never been short of challenges. It was not long ago that object-oriented programming was seen as the silver bullet to solve this problem. Making blueprints of what we interact with in real life was a revolutionary idea, and this core concept of classes and objects still impacts how we code today. However, business and user demands continued to grow in complexity. As 2010 approached, it became clear that object-oriented programming only solved part of the problem. Classes and objects do a great job representing an entity with properties and methods, but they become messy when they need to interact with each other in increasingly complex (and often unplanned) ways. Decoupling patterns and paradigms emerged, but this yielded an unwanted side effect of growing amounts of boilerplate code. In response to these problems, functional programming began to make a comeback not to replace object-oriented programming but rather complement it and fill this void. Reactive programming, a functional event-driven programming approach, began to receive special attention.A couple of reactive frameworks emerged ultimately, including Akka and Sodium. But at Microsoft, a computer scientist named Erik Meijer created a reactive programming framework for .NET called Reactive Extensions. In a matter of years, Reactive Extensions (also called ReactiveX or Rx) was ported to several languages and platforms, including JavaScript, Python, C++, Swift, and Java, of course. ReactiveX quickly emerged as a cross-language standard to bring reactive programming into the industry. RxJava, the ReactiveX port for Java, was created in large part by Ben Christensen from Netflix and David Karnok. RxJava 1.0 was released in November 2014, followed by RxJava 2.0 in November 2016. RxJava is the backbone to other ReactiveX JVM ports, such as RxScala, RxKotlin, and RxGroovy. It has become a core technology for Android development and has also found its way into Java backend development. Many RxJava adapter libraries, such as RxAndroid , RxJava-JDBC , RxNetty , and RxJavaFX adapted several Java frameworks to become reactive and work with RxJava out-of-the-box.This all shows that RxJava is more than a library. It is part of a greater ReactiveX ecosystem that represents an entire approach to programming. The fundamental idea of ReactiveX is that events are data and data are events. This is a powerful concept that we will explore, but first, let's step back and look at the world through the reactive lens. 
Thinking reactively Suspend everything you know about Java (and programming in general) for a moment, and let's make some observations about our world. These may sound like obvious statements, but as developers, we can easily overlook them. Bring your attention to the fact that everything is in motion. Traffic, weather, people, conversations, financial transactions, and so on are all moving. Technically, even something stationary as a rock is in motion due to the earth's rotation and orbit. When you consider the possibility that everything can be modeled as in motion, you may find it a bit overwhelming as a developer. Another observation to note is that these different events are happening concurrently. Multiple activities are happening at the same time. Sometimes, they act independently, but other times, they can converge at some point to interact. For instance, a car can drive with no impact on a person jogging. They are two separate streams of events. However, they may converge at some point and the car will stop when it encounters the jogger. If this is how our world works, why do we not model our code this way?. Why do we not model code as multiple concurrent streams of events or data happening at the same time? It is not uncommon for developers to spend more time managing the states of objects and doing it in an imperative and sequential manner. You may structure your code to execute Process 1, Process 2, and then Process 3, which depends on Process 1 and Process 2. Why not kick-off Process 1 and Process 2 simultaneously, and then the completion of these two events immediately kicks-off Process 3? Of course, you can use callbacks and Java concurrency tools, but RxJava makes this much easier and safer to express. Let's make one last observation. A book or music CD is static. A book is an unchanging sequence of words and a CD is a collection of tracks. There is nothing dynamic about them. However, when we read a book, we are reading each word one at a time. Those words are effectively put in motion as a stream being consumed by our eyes. It is no different with a music CD track, where each track is put in motion as sound waves and your ears are consuming each track. Static items can, in fact, be put in motion too. This is an abstract but powerful idea because we made each of these static items a series of events. When we level the playing field between data and events by treating them both the same, we unleash the power of functional programming and unlock abilities you previously might have thought impractical. The fundamental idea behind reactive programming is that events are data and data are events. This may seem abstract, but it really does not take long to grasp when you consider our real-world examples. The runner and car both have properties and states, but they are also in motion. The book and CD are put in motion when they are consumed. Merging the event and data to become one allows the code to feel organic and representative of the world we are modeling. Why should I learn RxJava?  ReactiveX and RxJava paints a broad stroke against many problems programmers face daily, allowing you to express business logic and spend less time engineering code. Have you ever struggled with concurrency, event handling, obsolete data states, and exception recovery? What about making your code more maintainable, reusable, and evolvable so it can keep up with your business? 
It might be presumptuous to call reactive programming a silver bullet to these problems, but it certainly is a progressive leap in addressing them. There is also growing user demand to make applications real time and responsive. Reactive programming allows you to quickly analyze and work with live data sources such as Twitter feeds or stock prices. It can also cancel and redirect work, scale with concurrency, and cope with rapidly emitting data. Composing events and data as streams that can be mixed, merged, filtered, split, and transformed opens up radically effective ways to compose and evolve code. In summary, reactive programming makes many hard tasks easy, enabling you to add value in ways you might have thought impractical earlier. If you have a process written reactively and you discover that you need to run part of it on a different thread, you can implement this change in a matter of seconds. If you find network connectivity issues crashing your application intermittently, you can gracefully use reactive recovery strategies that wait and try again. If you need to inject an operation in the middle of your process, it is as simple as inserting a new operator. Reactive programming is broken up into modular chain links that can be added or removed, which can help overcome all the aforementioned problems quickly. In essence, RxJava allows applications to be tactical and evolvable while maintaining stability in production. A quick exposure to RxJava  Before we dive deep into the reactive world of RxJava, here is a quick exposure to get your feet wet first. In ReactiveX, the core type you will work with is the Observable. We will be learning more about the Observable. But essentially, an Observable pushes things. A given Observable<T>pushes things of type T through a series of operators until it arrives at an Observer that consumes the items. For instance, create a new Launcher.java file in your project and put in the following code: import io.reactivex.Observable; public class Launcher { public static void main(String[] args) { Observable<String> myStrings = Observable.just("Alpha", "Beta", "Gamma", "Delta", "Epsilon"); } } In our main() method,  we have an Observable<String>that will push five string objects. An Observable can push data or events from virtually any source, whether it is a database query or live Twitter feeds. In this case, we are quickly creating an Observable using Observable.just(), which will emit a fixed set of items. However, running this main() method is not going to do anything other than declare Observable<String>. To make this Observable actually push these five strings (which are called emissions), we need an Observer to subscribe to it and receive the items. We can quickly create and connect an Observer by passing a lambda expression that specifies what to do with each string it receives: import io.reactivex.Observable; public class Launcher { public static void main(String[] args) { Observable<String> myStrings = Observable.just("Alpha", "Beta", "Gamma", "Delta", "Epsilon"); myStrings.subscribe(s -> System.out.println(s)); } }  When we run this code, we should get the following output: Alpha Beta Gamma Delta Epsilon What happened here is that our Observable<String> pushed each string object one at a time to our Observer, which we shorthanded using the lambda expression s -> System.out.println(s). We pass each string through the parameter s (which I arbitrarily named) and instructed it to print each one. 
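To hint at the 'series of operators' mentioned earlier, here are two small optional variations on the same subscription; they are sketches rather than part of the original example:

// A method reference instead of the explicit lambda
myStrings.subscribe(System.out::println);

// A tiny operator chain: map each string to its length before it reaches the Observer
myStrings.map(s -> s.length())
         .subscribe(length -> System.out.println(length));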
Lambdas are essentially mini functions that allow us to quickly pass instructions on what action to take with each incoming item. Everything to the left of the arrow -> are arguments (which in this case is a string we named s), and everything to the right is the action (which is System.out.println(s)). Summary So in this article, we learned how to look at the world in a reactive way. As a developer, you may have to retrain yourself from a traditional imperative mindset and develop a reactive one. Especially if you have done imperative, object-oriented programming for a long time, this can be challenging. But the return on investment will be significant as your applications will become more maintainable, scalable, and evolvable. You will also have faster turn around and more legible code. We also got a brief introduction to reactive code and how Observable work through push-based iteration. You will hopefully find reactive programming intuitive and easy to reason with. I hope you find that RxJava not only makes you more productive, but also helps you take on tasks you hesitated to do earlier. So let's get started! Resources for Article: Further resources on this subject: Understanding the Basics of RxJava [article] Filtering a sequence [article] An Introduction to Reactive Programming [article]
KVM Networking with libvirt
Packt
05 Jul 2017
10 min read
In this article by Konstantin Ivanov, author of the book KVM Virtualization Cookbook, we are going to deploy three different network types, explore the network XML format and see examples on how to define and manipulate virtual interfaces for the KVM instances. (For more resources related to this topic, see here.) To be able to connect the virtual machines to the host OS, or to each other, we are going to use the Linux bridge and the Open vSwitch (OVS) daemons, userspace tools and kernel modules. Both software bridging technologies are great at creating software defined networks (SDN) of various complexity, in a consistent and easy to manipulate manner. The Linux bridge and OVS both act as a bridge/switch that the virtual interfaces of the KVM guests can connect to. With all this in mind, lets start by learning more about the software bridges in Linux. The Linux bridge The Linux bridge is a software Layer 2 device that provides some of the functionality of a physical bridge device. It can forward frames between KVM guests, the host OS and virtual machines running on other servers, or networks. The Linux bridge consists of two components - a userspace administration tool that we are going to use in this recipe and a kernel module, that performs all the work of connecting multiple Ethernet segments together. Each software bridge we create can have a number of ports attached to it, where network traffic is forwarded to and from. When creating KVM instances we can attach the virtual interfaces that are associated with them to the bridge, which is similar to plugging a network cable from a physical server's Network Interface Card (NIC) to a bridge/switch device. Being a Layer 2 device, the Linux bridge works with MAC addresses and maintains a kernel structure to keep track of ports and associated MAC addresses in the form of a Content Addressable Memory (CAM) table. In this recipe we are going to create a new Linux bridge and use the brctl utility to manipulate it. Getting Ready For this recipe we are going to need the following: Recent Linux kernel with enabled 802.1d Ethernet Bridging options. To check if your kernel is compiled with those features, or exposed as kernel modules, run: root@kvm:~# cat /boot/config-`uname -r` | grep -i bridg # PC-card bridges CONFIG_BRIDGE_NETFILTER=y CONFIG_NF_TABLES_BRIDGE=m CONFIG_BRIDGE_EBT_BROUTE=m CONFIG_BRIDGE_EBT_T_FILTER=m CONFIG_BRIDGE_EBT_T_NAT=m CONFIG_BRIDGE_EBT_802_3=m CONFIG_BRIDGE_EBT_AMONG=m CONFIG_BRIDGE_EBT_ARP=m CONFIG_BRIDGE_EBT_IP=m CONFIG_BRIDGE_EBT_IP6=m CONFIG_BRIDGE_EBT_LIMIT=m CONFIG_BRIDGE_EBT_MARK=m CONFIG_BRIDGE_EBT_PKTTYPE=m CONFIG_BRIDGE_EBT_STP=m CONFIG_BRIDGE_EBT_VLAN=m CONFIG_BRIDGE_EBT_ARPREPLY=m CONFIG_BRIDGE_EBT_DNAT=m CONFIG_BRIDGE_EBT_MARK_T=m CONFIG_BRIDGE_EBT_REDIRECT=m CONFIG_BRIDGE_EBT_SNAT=m CONFIG_BRIDGE_EBT_LOG=m # CONFIG_BRIDGE_EBT_ULOG is not set CONFIG_BRIDGE_EBT_NFLOG=m CONFIG_BRIDGE=m CONFIG_BRIDGE_IGMP_SNOOPING=y CONFIG_BRIDGE_VLAN_FILTERING=y CONFIG_SSB_B43_PCI_BRIDGE=y CONFIG_DVB_DDBRIDGE=m CONFIG_EDAC_SBRIDGE=m # VME Bridge Drivers root@kvm:~# The bridge kernel module. 
To verify that the module is loaded and to obtain more information about its version and features, execute: root@kvm:~# lsmod | grep bridge bridge 110925 0 stp 12976 2 garp,bridge llc 14552 3 stp,garp,bridge root@kvm:~# modinfo bridge filename: /lib/modules/3.13.0-107-generic/kernel/net/bridge/bridge.ko alias: rtnl-link-bridge version: 2.3 license: GPL srcversion: 49D4B615F0B11CA696D8623 depends: stp,llc intree: Y vermagic: 3.13.0-107-generic SMP mod_unload modversions signer: Magrathea: Glacier signing key sig_key: E1:07:B2:8D:F0:77:39:2F:D6:2D:FD:D7:92:BF:3B:1D:BD:57:0C:D8 sig_hashalgo: sha512 root@kvm:~# The bridge-utils package, that provides the tool to create and manipulate the Linux bridge. The ability to create new KVM guests using libvirt, or the QEMU utilities How to do it... Install the Linux bridge package, if it is not already present: root@kvm:~# apt install bridge-utils Build a new KVM instance using the raw image: root@kvm:~# virt-install --name kvm1 --ram 1024 --disk path=/tmp/debian.img,format=raw --graphics vnc,listen=146.20.141.158 --noautoconsole --hvm --import Starting install... Creating domain... | 0 B 00:00 Domain creation completed. You can restart your domain by running: virsh --connect qemu:///system start kvm1 root@kvm:~# List all available bridge devices: root@kvm:~# brctl show bridge name bridge id STP enabled interfaces virbr0 8000.fe5400559bd6 yes vnet0 root@kvm:~# Bring the virtual bridge down, delete it and ensure it's been deleted: root@kvm:~# ifconfig virbr0 down root@kvm:~# brctl delbr virbr0 root@kvm:~# brctl show bridge name bridge id STP enabled interfaces root@kvm:~# Create a new bridge and bring it up: root@kvm:~# brctl addbr virbr0 root@kvm:~# brctl show bridge name bridge id STP enabled interfaces virbr0 8000.000000000000 no root@kvm:~# ifconfig virbr0 up root@kvm:~# Assign an IP address to the bridge: root@kvm:~# ip addr add 192.168.122.1 dev virbr0 root@kvm:~# ip addr show virbr0 39: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether 32:7d:3f:80:d7:c6 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/32 scope global virbr0 valid_lft forever preferred_lft forever inet6 fe80::307d:3fff:fe80:d7c6/64 scope link valid_lft forever preferred_lft forever root@kvm:~# List the virtual interfaces on the host OS: root@kvm:~# ip a s | grep vnet 38: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500 root@kvm:~# Add the virtual interface vnet0 to the bridge: root@kvm:~# brctl addif virbr0 vnet0 root@kvm:~# brctl show virbr0 bridge name bridge id STP enabled interfaces virbr0 8000.fe5400559bd6 no vnet0 root@kvm:~# Enable the Spanning Tree Protocol (STP) on the bridge and obtain more information: root@kvm:~# brctl stp virbr0 on root@kvm:~# brctl showstp virbr0 virbr0 bridge id 8000.fe5400559bd6 designated root 8000.fe5400559bd6 root port 0 path cost 0 max age 20.00 bridge max age 20.00 hello time 2.00 bridge hello time 2.00 forward delay 15.00 bridge forward delay 15.00 ageing time 300.00 hello timer 0.26 tcn timer 0.00 topology change timer 0.00 gc timer 90.89 flags vnet0 (1) port id 8001 state forwarding designated root 8000.fe5400559bd6 path cost 100 designated bridge 8000.fe5400559bd6 message age timer 0.00 designated port 8001 forward delay timer 0.00 designated cost 0 hold timer 0.00 flags root@kvm:~# Test connectivity from the KVM instance to the host OS: root@kvm:~# virsh console kvm1 Connected to domain kvm1 Escape character is ^] root@debian:~# ip a s 
eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 52:54:00:55:9b:d6 brd ff:ff:ff:ff:ff:ff inet 192.168.122.92/24 brd 192.168.122.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::5054:ff:fe55:9bd6/64 scope link valid_lft forever preferred_lft forever root@debian:~# root@debian:~# ping 192.168.122.1 -c 3 PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data. 64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.276 ms 64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.226 ms 64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.259 ms --- 192.168.122.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.226/0.253/0.276/0.027 ms root@debian:~# How it works... When we first installed and started the libvirt daemon, few things happened automatically: A new Linux bridge was created with the name and IP address defined in the /etc/libvirt/qemu/networks/default.xml configuration file. The dnsmasq service was started with a configuration specified in the /var/lib/libvirt/dnsmasq/default.conf file. Lets examine the default libvirt bridge configuration: root@kvm:~# cat /etc/libvirt/qemu/networks/default.xml <network> <name>default</name> <bridge name="virbr0"/> <forward/> <ip address="192.168.122.1" netmask="255.255.255.0"> <dhcp> <range start="192.168.122.2" end="192.168.122.254"/> </dhcp> </ip> </network> root@kvm:~# This is the default network that libvirt created for us, specifying the bridge name, IP address and the IP range used by the DHCP server that was started. We are going to talk about libvirt networking in much more details later in this article, however we are showing it here to help you understand where all the IP addresses and the bridge name came from.We can see that a DHCP server is running on the host OS and its configuration file, by running: root@kvm:~# pgrep -lfa dnsmasq 38983 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf root@kvm:~# cat /var/lib/libvirt/dnsmasq/default.conf ##WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE ##OVERWRITTEN AND LOST. Changes to this configuration should be made using: ## virsh net-edit default ## or other application using the libvirt API. ## ## dnsmasq conf file created by libvirt strict-order user=libvirt-dnsmasq pid-file=/var/run/libvirt/network/default.pid except-interface=lo bind-dynamic interface=virbr0 dhcp-range=192.168.122.2,192.168.122.254 dhcp-no-override dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases dhcp-lease-max=253 dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts root@kvm:~# From the preceding configuration file, notice how the IP address range for the DHCP service and the name of the virtual bridge match what is configured in the default libvirt network file that we just saw. With all this in mind let's step through all the actions we performed earlier: In step 1, we installed the userspace tool brctl that we use to create, configure and inspect the Linux bridge configuration in the Linux kernel. In step 2, we provisioned a new KVM instance using a custom raw image containing the guest OS. In step 3, we invoked the bridge utility to list all available bridge devices. From the output we can observe that currently there's one bridge, named virbr0, which libvirt created automatically. Notice that under the interfaces column we can see the vnet0 interface. 
This is the virtual NIC that was exposed to the host OS, when we started the KVM instance. This means that the virtual machine is connected to the host bridge. In step 4, we first bring the bridge down in order to delete it, then we use the brctl command again to remove the bridge and ensure it's not present on the host OS. In step 5, we recreated the bridge and brought it back up. We do this to demonstrate the steps required to create a new bridge. In step 6, we re-assigned the same IP address to the bridge and listed it. In steps 7 and 8 we list all virtual interfaces on the host OS. Since we only have one KVM guest currently running on the server, we only see one virtual interface - vnet0. We then proceed to add/connect the virtual NIC to the bridge. In step 9, we enabled the Spanning Tree Protocol (STP) on the bridge. STP is a layer 2 protocol that helps prevent network loops if we have redundant network paths. This is especially useful in larger, more complex network topologies, where multiple bridges are connected together. Finally, in step 10, we connect to the KVM guest using the console, list its interface configuration and ensure we can ping the bridge on the host OS. Notice that the IP address of the guest OS was automatically assigned by the dnsmasq server running on the host, as it is a part of the IP range defined in the configuration file we saw earlier. Summary We have covered the Linux bridge in detail here. We have deployed three different network types, explored the network XML format and have seen examples on how to define and manipulate virtual interfaces for the KVM instances. Resources for Article: Further resources on this subject: A Virtual Machine for a Virtual World [article] Getting Started with Ansible [article] Supporting hypervisors by OpenNebula [article]

Azure Feature Pack

Packt
05 Jul 2017
9 min read
In this article by Christian Cote, Matija Lah, and Dejan Sarka, the author of the book SQL Server 2016 Integration Services Cookbook, we will see how to install Azure Feature Pack that in turn, will install Azure control flow tasks and data flow components And we will also see how to use the Fuzzy Lookup transformation for identity mapping. (For more resources related to this topic, see here.) In the early years of SQL Server, Microsoft introduced a tool to help developers and database administrator (DBA) to interact with the data: data transformation services (DTS). The tool was very primitive compared to SSIS and it was mostly relying on ActiveX and TSQL to transform the data. SSIS 1.0 appears in 2005. The tool was a game changer in the ETL world at the time. It was a professional and (pretty much) reliable tool for 2005. 2008/2008R2 versions were much the same as 2005 in a sense that they didn't add much functionality but they made the tool more scalable. In 2012, Microsoft enhanced SSIS in many ways. They rewrote the package XML to ease source control integration and make package code easier to read. They also greatly enhanced the way packages are deployed by using a SSIS catalog in SQL Server. Having the catalog in SQL Server gives us execution reports and many views that give us access to metadata or metaprocess information's in our projects. Version 2014 didn't have anything for SSIS. Version 2016 brings other set of features as you will see. We now also have the possibility to integrate with big data. Business intelligence projects many times reveal previously unseen issues with the quality of the source data. Dealing with data quality includes data quality assessment, or data profiling, data cleansing, and maintaining high quality over time. In SSIS, the data profiling task helps you with finding unclean data. The Data Profiling task is not like the other tasks in SSIS because it is not intended to be run over and over again through a scheduled operation. Think about SSIS being the wrapper for this tool. You use the SSIS framework to configure and run the Data Profiling task, and then you observe the results through the separate Data Profile Viewer. The output of the Data Profiling task will be used to help you in your development and design of the ETL and dimensional structures in your solution. Periodically, you may want to rerun the Data Profile task to see how the data has changed, but the package you develop will not include the task in the overall recurring ETL process Azure tasks and transforms Azure ecosystem is becoming predominant in Microsoft ecosystem and SSIS has not been left over in the past few years. The Azure Feature Pack is not a SSIS 2016 specific feature. It's also available for SSIS version 2012 and 2014. It's worth mentioning that it appeared in July 2015, a few months before SSIS 2016 release. Getting ready This section assumes that you have installed SQL Server Data Tools 2015. How to do it... We'll start SQL Server Data Tools, and open the CustomLogging project if not already done: In the SSIS toolbox, scroll to the Azure group. Since the Azure tools are not installed with SSDT, the Azure group is disabled in the toolbox. The tools must be downloaded using a separate installer. Click on Azure group to expand it and click on Download Azure Feature Pack as shown in the following screenshot: Your default browser opens and the Microsoft SQL Server 2016 Integration Services Feature Pack for Azure opens. 
Click on Download as shown in the following screenshot: From the popup that appears, select both 32-bit and 64-bit version. The 32-bit version is necessary for SSIS package development since SSDT is a 32-bit program. Click Next as shown in the following screenshot: As shown in the following screenshot, the files are downloaded: Once the download completes, run on one the installers downloaded. The following screen appears. In that case, the 32-bit (x86) version is being installed. Click Next to start the installation process: As shown in the following screenshot, check the box near I accept the terms in the License Agreement and click Next. Then the installation starts. The following screen appears once the installation is completed. Click Finish to close the screen: Install the other feature pack you downloaded. If SSDT is opened, close it. Start SSDT again and open the CustomLogging project. In the Azure group in the SSIS toolbox, you should now see the Azure tasks as in the following screenshot: Using SSIS fuzzy components SSIS includes two really sophisticated matching transformations in the data flow. The Fuzzy Lookup transformation is used for mapping the identities. The Fuzzy Grouping Transformation is used for de-duplicating. Both of them use the same algorithm for comparing the strings and other data. Identity mapping and de-duplication are actually the same problem. For example, instead for mapping the identities of entities in two tables, you can union all of the data in a single table and then do the de-duplication. Or vice versa, you can join a table to itself and then do identity mapping instead of de-duplication. Getting ready This recipe assumes that you have successfully finished the previous recipe. How to do it… In SSMS, create a new table in the DQS_STAGING_DATA database in the dbo schema and name it dbo.FuzzyMatchingResults. Use the following code: CREATE TABLE dbo.FuzzyMatchingResults ( CustomerKey INT NOT NULL PRIMARY KEY, FullName NVARCHAR(200) NULL, StreetAddress NVARCHAR(200) NULL, Updated INT NULL, CleanCustomerKey INT NULL ); Switch to SSDT. Continue editing the DataMatching package. Add a Fuzzy Lookup transformation below the NoMatch Multicast transformation. Rename it FuzzyMatches and connect it to the NoMatch Multicast transformation with the regular data flow path. Double-click the transformation to open its editor. On the Reference Table tab, select the connection manager you want to use to connect to your DQS_STAGING_DATA database and select the dbo.CustomersClean table. Do not store a new index or use an existing index. When the package executes the transformation for the first time, it copies the reference table, adds a key with an integer datatype to the new table, and builds an index on the key column. Next, the transformation builds an index, called a match index, on the copy of the reference table. The match index stores the results of tokenizing the values in the transformation input columns. The transformation then uses these tokens in the lookup operation. The match index is a table in a SQL Server database. When the package runs again, the transformation can either use an existing match index or create a new index. If the reference table is static, the package can avoid the potentially expensive process of rebuilding the index for repeat sessions of data cleansing. Click the Columns tab. Delete the mapping between the two CustomerKey columns. Clear the check box next to the CleanCustomerKey input column. 
Select the check box next to the CustomerKey lookup column. Rename the output alias for this column to CleanCustomerKey. You are replacing the original column with the one retrieved during the lookup. Your mappings should resemble those shown in the following screenshot: Click the Advanced tab. Raise the Similarity threshold to 0.50 to reduce the matching search space. With similarity threshold of 0.00, you would get a full cross join. Click OK. Drag the Union All transformation below the Fuzzy Lookup transformation. Connect it to an output of the Match Multicast transformation and an output of the FuzzyMatches Fuzzy Lookup transformation. You will combine the exact and approximate matches in a single row set. Drag an OLE DB Destination below the Union All transformation. Rename it FuzzyMatchingResults and connect it with the Union All transformation. Double-click it to open the editor. Connect to your DQS_STAGING_DATA database and select the dbo.FuzzyMatchingResults table. Click the Mappings tab. Click OK. The completed data flow is shown in the following screenshot: You need to add restartability to your package. You will truncate all destination tables. Click the Control Flow tab. Drag the Execute T-SQL Statement task above the data flow task. Connect the tasks with the green precedence constraint from the Execute T-SQL Statement task to the data flow task. The Execute T-SQL Statement task must finish successfully before the data flow task starts. Double-click the Execute T-SQL Statement task. Use the connection manager to your DQS_STAGING_DATA database. Enter the following code in the T-SQL statement textbox, and then click OK: TRUNCATE TABLE dbo.CustomersDirtyMatch; TRUNCATE TABLE dbo.CustomersDirtyNoMatch; TRUNCATE TABLE dbo.FuzzyMatchingResults; Save the solution. Execute your package in debug mode to test it. Review the results of the Fuzzy Lookup transformation in SSMS. Look for rows for which the transformation did not find a match, and for any incorrect matches. Use the following code: -- Not matched SELECT * FROM FuzzyMatchingResults WHERE CleanCustomerKey IS NULL; -- Incorrect matches SELECT * FROM FuzzyMatchingResults WHERE CleanCustomerKey <> CustomerKey * (-1); You can use the following code to clean up the AdventureWorksDW2014 and DQS_STAGING_DATA databases: USE AdventureWorksDW2014; DROP TABLE IF EXISTS dbo.Chapter05Profiling; DROP TABLE IF EXISTS dbo.AWCitiesStatesCountries; USE DQS_STAGING_DATA; DROP TABLE IF EXISTS dbo.CustomersCh05; DROP TABLE IF EXISTS dbo.CustomersCh05DQS; DROP TABLE IF EXISTS dbo.CustomersClean; DROP TABLE IF EXISTS dbo.CustomersDirty; DROP TABLE IF EXISTS dbo.CustomersDirtyMatch; DROP TABLE IF EXISTS dbo.CustomersDirtyNoMatch; DROP TABLE IF EXISTS dbo.CustomersDQSMatch; DROP TABLE IF EXISTS dbo.DQSMatchingResults; DROP TABLE IF EXISTS dbo.DQSSurvivorshipResults; DROP TABLE IF EXISTS dbo.FuzzyMatchingResults; When you are done, close SSMS and SSDT. SQL Server data quality services (DQS) is a knowledge-driven data quality solution. This means that it requires you to maintain one or more knowledge bases (KBs). In a KB, you maintain all knowledge related to a specific portion of data—for example, customer data. The idea of data quality services is to mitigate the cleansing process. While the amount of time you need to spend on cleansing decreases, you will achieve higher and higher levels of data quality. While cleansing, you learn what types of errors to expect, discover error patterns, find domains of correct values, and so on. 
You don't throw away this knowledge. You store it and use it to find and correct the same issues automatically during your next cleansing process. Summary We have seen how to install Azure Feature Pack, Azure control flow tasks and data flow components, and Fuzzy Lookup transformation. Resources for Article: Further resources on this subject: Building A Recommendation System with Azure [article] Introduction to Microsoft Azure Cloud Services [article] Windows Azure Service Bus: Key Features [article]

Planning and Preparation

Packt
05 Jul 2017
9 min read
In this article by Jason Beltrame, authors of the book Penetration Testing Bootcamp, Proper planning and preparation is key to a successful penetration test. It is definitely not as exciting as some of the tasks we will do within the penetration test later, but it will lay the foundation of the penetration test. There are a lot of moving parts to a penetration test, and you need to make sure that you stay on the correct path and know just how far you can and should go. The last thing you want to do in a penetration test is cause a customer outage because you took down their application server with an exploit test (unless, of course, they want us to get to that depth) or scanned the wrong network. Performing any of these actions would cause our penetration-testing career to be a rather short-lived career. In this article, following topics will be covered: Why does penetration testing take place? Building the systems for the penetration test Penetration system software setup (For more resources related to this topic, see here.) Why does penetration testing take place? There are many reasons why penetration tests happen. Sometimes, a company may want to have a stronger understanding of their security footprint. Sometimes, they may have a compliance requirement that they have to meet. Either way, understanding why penetration testing is happening will help you understand the goal of the company. Plus, it will also let you know whether you are performing an internal penetration test or an external penetration test. External penetration tests will follow the flow of an external user and see what they have access to and what they can do with that access. Internal penetration tests are designed to test internal systems, so typically, the penetration box will have full access to that environment, being able to test all software and systems for known vulnerabilities. Since tests have different objectives, we need to treat them differently; therefore, our tools and methodologies will be different. Understanding the engagement One of the first tasks you need to complete prior to starting a penetration test is to have a meeting with the stakeholders and discuss various data points concerning the upcoming penetration test. This meeting could be you as an external entity performing a penetration test for a client or you as an internal security employee doing the test for your own company. The important element here is that the meeting should happen either way, and the same type of information needs to be discussed. During the scoping meeting, the goal is to discuss various items of the penetration test so that you have not only everything you need, but also full management buy-in with clearly defined objectives and deliverables. Full management buy-in is a key component for a successful penetration test. Without it, you may have trouble getting required information from certain teams, scope creep, or general pushback. Building the systems for the penetration test With a clear understanding of expectations, deliverables, and scope, it is now time to start working on getting our penetration systems ready to go. For the hardware, I will be utilizing a decently powered laptop. The laptop specifications are a Macbook Pro with 16 GB of RAM, 256 GB SSD, and a quad-core 2.3 Ghz Intel i7 running VMware Fusion. I will also be using the Raspberry Pi 3. The Raspberry Pi 3 is a 1.2 Ghz ARMv8 64-bit Quad Core, with 1GB of RAM and a 32 GB microSD. 
Obviously, there is quite a power discrepancy between the laptop and the Raspberry Pi. That is okay though, because I will be using both these devices differently. Any task that requires any sort of processing power will be done on the laptop. I love using the Raspberry Pi because of its small form factor and flexibility. It can be placed in just about any location we need, and if needed, it can be easily concealed. For software, I will be using Kali Linux as my operating system of choice. Kali is a security-oriented Linux distribution that contains a bunch of security tools already installed. Its predecessor, Backtrack, was also a very popular security operating system. One of the benefits of Kali Linux is that it is also available for the Raspberry Pi, which is perfect in our circumstance. This way, we can have a consistent platform between devices we plan to use in our penetration-testing labs. Kali Linux can be downloaded from their site at https://www.kali.org. For the Raspberry Pi, the Kali images are managed by Offensive Security at https://www.offensive-security.com. Even though I am using Kali Linux as my software platform of choice, feel free to use whichever software platform you feel most comfortable with. We will be using a bunch of open source tools for testing. A lot of these tools are available for other distributions and operating systems. Penetration system software setup Setting up Kali Linux on both systems is a bit different since they are different platforms. We won't be diving into a lot of details on the install, but we will be hitting all the major points. This is the process you can use to get the software up and running. We will start with the installation on the Raspberry Pi (a consolidated script follows this list): Download the images from Offensive Security at https://www.offensive-security.com/kali-linux-arm-images/. Open the Terminal app on OS X. Using the utility xz, you can decompress the Kali image that was downloaded: xz -d kali-2.1.2-rpi2.img.xz Next, you insert the USB microSD card reader with the microSD card into the laptop and verify the disks that are installed so that you know the correct disk to put the Kali image on: diskutil list Once you know the correct disk, you can unmount the disk to prepare to write to it: diskutil unmountDisk /dev/disk2 Now that you have the correct disk unmounted, you will want to write the image to it using the dd command. This process can take some time, so if you want to check on the progress, you can run the Ctrl + T command anytime: sudo dd if=kali-2.1.2-rpi2.img of=/dev/disk2 bs=1m Since the image is now written to the microSD drive, you can eject it with the following command: diskutil eject /dev/disk2 You then remove the USB microSD card reader, place the microSD card in the Raspberry Pi, and boot it up for the first time. The default login credentials are as follows: Username: root, Password: toor You then change the default password on the Raspberry Pi to make sure no one can get into it with the following command: passwd <INSERT PASSWORD HERE> Making sure the software is up to date is important for any system, especially a secure penetration-testing system. You can accomplish this with the following commands: apt-get update, apt-get upgrade, and apt-get dist-upgrade. After a reboot, you are ready to go on the Raspberry Pi.
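For repeat builds, the manual steps above can be collected into one small script run from the OS X terminal. This is only a rough sketch: the image name and the /dev/disk2 identifier are assumptions carried over from the example above, so confirm them against your own download and the output of diskutil list before running it, because dd will overwrite whatever disk you point it at.

#!/bin/sh
# Sketch: flash the Kali ARM image to a microSD card from OS X.
# IMAGE and DISK are assumptions taken from the walkthrough above -- adjust both.
set -e

IMAGE="kali-2.1.2-rpi2.img.xz"
DISK="/dev/disk2"

xz -d -k "$IMAGE"                           # decompress, keeping the original archive
diskutil unmountDisk "$DISK"                # unmount the card before writing to it
sudo dd if="${IMAGE%.xz}" of="$DISK" bs=1m  # write the raw image; press Ctrl + T for progress
diskutil eject "$DISK"                      # eject so the card can be moved to the Pi

The password change and apt-get updates still happen on the Pi itself after its first boot, exactly as described above.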
Next, it's onto setting up the Kali Linux install on the Mac. Since you will be installing Kali as a VM within Fusion, the process will vary compared to another hypervisor or installing on a bare metal system. For me, I like having the flexibility of having OS X running so that I can run commands on there as well: Similar to the Raspberry Pi setup, you need to download the image. You will do that directly via the Kali website. They offer virtual images for download as well. If you go to select these, you will be redirected to the Offensive Security site at https://www.offensive-security.com/kali-linux-vmware-virtualbox-image-download/. Now that you have the Kali Linux image downloaded, you need to extract the VMDK. We used 7z via the CLI to accomplish this task. Since the VMDK is ready to import now, you will need to go into VMware Fusion and navigate to File | New. A screen similar to the following should be displayed: Click on Create a custom virtual machine. You can select the OS as Other | Other and click on Continue: Now, you will need to import the previously decompressed VMDK. Click on the Use an existing virtual disk radio button, and hit Choose virtual disk. Browse the VMDK. Click on Continue. Then, on the last screen, click on the Finish button. The disk should now start to copy. Give it a few minutes to complete: Once completed, the Kali VM will now boot. Log in with the credentials we used in the Raspberry Pi image: Username: root, Password: toor You need to then change the default password that was set to make sure no one can get into it. Open up a terminal within the Kali Linux VM and use the following command: passwd <INSERT PASSWORD HERE> Make sure the software is up to date, like you did for the Raspberry Pi. To accomplish this, you can use the following commands: apt-get update, apt-get upgrade, and apt-get dist-upgrade. Once this is complete, the laptop VM is ready to go. Summary Now that we have reached the end of this article, we should have everything that we need for the penetration test. Having had the scoping meeting with all the stakeholders, we were able to get answers to all the questions that we required. Once we completed the planning portion, we moved on to the preparation phase. In this case, the preparation phase involved setting up Kali Linux on both the Raspberry Pi as well as setting it up as a VM on the laptop. We went through the steps of installing and updating the software on each platform as well as some basic administrative tasks. Resources for Article: Further resources on this subject: Introducing Penetration Testing [article] Web app penetration testing in Kali [article] BackTrack 4: Security with Penetration Testing Methodology [article]

Network Evidence Collection

Packt
05 Jul 2017
16 min read
In this article by Gerard Johansen, author of the book Digital Forensics and Incident Response, explains that the traditional focus of digital forensics has been to locate evidence on the host hard drive. Law enforcement officers interested in criminal activity such as fraud or child exploitation can find the vast majority of evidence required for prosecution on a single hard drive. In the realm of Incident Response though, it is critical that the focus goes far beyond a suspected compromised system. There is a wealth of information to be obtained within the points along the flow of traffic from a compromised host to an external Command and Control server for example. (For more resources related to this topic, see here.) This article focuses on the preparation, identification and collection of evidence that is commonly found among network devices and along the traffic routes within an internal network. This collection is critical during an incident where an external threat sources is in the process of commanding internal systems or is in the process of pilfering data out of the network. Network based evidence is also useful when examining host evidence as it provides a second source of event corroboration which is extremely useful in determining the root cause of an incident. Preparation The ability to acquire network-based evidence is largely dependent on the preparations that are untaken by an organization prior to an incident. Without some critical components of a proper infrastructure security program, key pieces of evidence will not be available for incident responders in a timely manner. The result is that evidence may be lost as the CSIRT members hunt down critical pieces of information. In terms of preparation, organizations can aid the CSIRT by having proper network documentation, up to date configurations of network devices and a central log management solution in place. Aside from the technical preparation for network evidence collection, CSIRT personnel need to be aware of any legal or regulatory issues in regards to collecting network evidence. CSIRT personnel need to be aware that capturing network traffic can be considered an invasion of privacy absent any other policy. Therefore, the legal representative of the CSIRT should ensure that all employees of the organization understand that their use of the information system can be monitored. This should be expressly stated in policies prior to any evidence collection that may take place. Network diagram To identify potential sources of evidence, incident responders need to have a solid understanding of what the internal network infrastructure looks like. One method that can be employed by organizations is to create and maintain an up to date network diagram. This diagram should be detailed enough so that incident responders can identify individual network components such as switches, routers or wireless access points. This diagram should also contain internal IP addresses so that incident responders can immediately access those systems through remote methods. For instance, examine the below simple network diagram: This diagram allows for a quick identification of potential evidence sources. In the above diagram, for example, suppose that the laptop connected to the switch at 192.168.2.1 is identified as communicating with a known malware Command and Control server. 
A CSIRT analyst could examine the network diagram and ascertain that the C2 traffic would have to traverse several network hardware components on its way out of the internal network. For example, there would be traffic traversing the switch at 192.168.10.1, through the firewall at 192.168.0.1 and finally the router out to the Internet. Configuration Determining if an attacker has made modifications to a network device such as a switch or a router can be made easier if the CSIRT has a standard configuration immediately available. Organizations should already have configurations for network devices stored for Disaster Recovery purposes but should have these available for CSIRT members in the event that there is an incident. Logs and log management The lifeblood of a good incident investigation is evidence from a wide range of sources. Even something as a malware infection on a host system requires corroboration among a variety of sources. One common challenge with Incident Response, especially in smaller networks is how the organization handles log management. For a comprehensive investigation, incident response analysts need access to as much network data as possible. All to often, organizations do not dedicate the proper resources to enabling the comprehensive logs from network devices and other systems. Prior to any incident, it is critical to clearly define the how and what an organization will log and as well as how it will maintain those logs. This should be established within a log management policy and associated procedure. The CSIRT personnel should be involved in any discussion as what logs are necessary or not as they will often have insight into the value of one log source over another. NIST has published a short guide to log management available at: http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-92.pdf. Aside from the technical issues regarding log management, there are legal issues that must be addressed. The following are some issues that should be addressed by the CSIRT and its legal support prior to any incident. Establish logging as a normal business practice: Depending on the type of business and the jurisdiction, users may have a reasonable expectation of privacy absent any expressly stated monitoring policy. In addition, if logs are enabled strictly to determine a user's potential malicious activity, there may be legal issues. As a result, the logging policy should establish that logging of network activity is part of the normal business activity and that users do not have a reasonable expectation of privacy. Logging as close to the event: This is not so much an issue with automated logging as they are often created almost as the event occurs. From an evidentiary standpoint, logs that are not created close to the event lose their value as evidence in a courtroom. Knowledgeable Personnel: The value of logs is often dependent on who created the entry and whether or not they were knowledgeable about the event. In the case of logs from network devices, the logging software addresses this issue. As long as the software can be demonstrated to be functioning properly, there should be no issue. Comprehensive Logging: Enterprise logging should be configured for as much of the enterprise as possible. In addition, logging should be consistent. A pattern of logging that is random will have less value in a court than a consistent patter of logging across the entire enterprise. Qualified Custodian: The logging policy should name a Data Custodian. 
This individual would speak to the logging and the types of software utilized to create the logs. They would also be responsible for testifying to the accuracy of the logs and the logging software used. Document Failures: Prolonged failures or a history of failures in the logging of events may diminish their value in a courtroom. It is imperative that any logging failure be documented and a reason is associated with such failure. Log File Discovery: Organizations should be made aware that logs utilized within a courtroom proceeding are going to be made available to opposing legal counsel. Logs from compromised systems: Logs that originate from a known compromised system are suspect. In the event that these logs are to be introduced as evidence, the custodian or incident responder will often have to testify at length concerning the veracity of the data contained within the logs. Original copies are preferred: Log files can be copied from the log source to media. As a further step, any logs should be archived off the system as well. Incident responders should establish a chain of custody for each log file used throughout the incident and these logs maintained as part of the case until an order from the court is obtained allowing their destruction. Network device evidence There are a number of log sources that can provide CSIRT personnel and incident responders with good information. A range of manufacturers provides each of these network devices. As a preparation task, CSIRT personnel should become familiar on how to access these devices and obtain the necessary evidence: Switches: These are spread throughout a network through a combination of core switches that handle traffic from a range of network segments to edge switches which handle the traffic for individual segments. As a result, traffic that originates on a host and travels out the internal network will traverse a number of switches. Switches have two key points of evidence that should be addressed by incident responders. First is the Content Addressable Memory (CAM) table. This CAM table maps the physical ports on the switch to the Network Interface Card (NIC) on each device connected to the switch. Incident responders in tracing connections to specific network jacks can utilize this information. This can aid in the identification of possible rogue devices. The second way switches can aid in an incident investigation is through facilitating network traffic capture. Routers: Routers allow organizations to connect multiple LANs into either Metropolitan Area Networks or Wide Area Networks. As a result, the handled an extensive amount of traffic. The key piece of evidentiary information that routers contain is the routing table. This table holds the information for specific physical ports that map to the networks. Routers can also be configured to deny specific traffic between networks and maintain logs on allowed traffic and data flow. Firewalls: Firewalls have changed significantly since the days when they were considered just a different type of router. Next generation firewalls contain a wide variety of features such as Intrusion Detection and Prevention, Web filtering, Data Loss Prevention and detailed logs about allowed and denied traffic. Firewalls often times serve as the detection mechanism that alerts security personnel to potential incidents. Incident responders should have as much visibility into how their organization's firewalls function and what data can be obtained prior to an incident. 
Network Intrusion Detection and Prevention systems: These systems were purposefully designed to provide security personnel and incident responders with information concerning potential malicious activity on the network infrastructure. These systems utilize a combination of network monitoring and rule sets to determine if there is malicious activity. Intrusion Detection Systems are often configured to alert to specific malicious activity while Intrusion Prevention Systems can detection but also block potential malicious activity. In either case, both types of platforms logs are an excellent place for incident responders to locate specific evidence on malicious activity. Web Proxy Servers: Organization often utilize Web Proxy Servers to control how users interact with websites and other internet based resources. As a result, these devices can give an enterprise wide picture of web traffic that both originates and is destined for internal hosts. Web proxies also have the additional feature set of alerting to connections to known malware Command and Control (C2) servers or websites that serve up malware. A review of web proxy logs in conjunction with a possible compromised host may identify a source of malicious traffic or a C2 server exerting control over the host. Domain Controllers / Authentication Servers: Serving the entire network domain, authentication servers are the primary location that incident responders can leverage for details on successful or unsuccessful logins, credential manipulation or other credential use. DHCP Server: Maintaining a list of assigned IP addresses to workstations or laptops within the organization requires an inordinate amount of upkeep. The use of Dynamic Host Configuration Protocol allows for the dynamic assignment of IP addresses to systems on the LAN. The DHCP servers often contain logs on the assignment of IP addresses mapped to the MAC address of the hosts NIC. This becomes important if an incident responder has to track down a specific workstation or laptop that was connected to the network at a specific data and time. Application Servers: A wide range of applications from Email to Web Applications is housed on network servers. Each of these can provide logs specific to the type of application. Network devices such as switches, routers and firewalls also have their own internal logs that maintain data on access and changes. Incident responders should become familiar with the types of network devices on their organization's network and also be able to access these logs in the event of an incident. Security information and Event management system A significant challenge that a great many organizations has is the nature of logging on network devices. With limited space, log files are often rolled over where the new log files are written over older log files. The result is that in some cases, an organization may only have a few days or even a few hours of important logs. If a potential incident happened several weeks ago, the incident response personnel will be without critical pieces of evidence. One tool that has been embraced by a number of enterprises is a Security Information and Event Management (SIEM) System. These appliances have the ability to aggregate log and event data from network sources and combine them into a single location. This allows the CSIRT and other security personnel to observe activity across the entire network without having to examine individual systems. 
The diagram below illustrates how a SIEM integrates into the overall network: A variety of sources from security controls to SQL databases are configured to send logs to the SIEM. In this case, the SQL database located at 10.100.20.18 indicates that the user account USSalesSyncAcct was utilized to copy a database to the remote host located at 10.88.6.12. The SIEM allows for quick examination of this type of activity. For example, if it is determined that the account USSalesSyncAcct had been compromised, CSIRT analysts can quickly query the SIEM for any usage of that account. From there, they would be able to see the log entry that indicated a copy of a database to the remote host. Without that SIEM, CSIRT analysts would have to search each individual system that might have been accessed, a process that may be prohibitive. From the SIEM platform, security and network analysts have the ability to perform a number of different tasks related to Incident Response: Log Aggregation: Typical enterprises have several thousand devices within the internal network, each with their own logs; the SIEM can be deployed to aggregate these logs in a central location. Log Retention: Another key feature that SIEM platforms provide is a platform to retain logs. Compliance frameworks such as the Payment Card Industry Data Security Standard (PCI-DSS) stipulate that logs should be maintained for a period of one year with 90 days immediately available. SIEM platforms can aid with log management by providing a system that archives logs in an orderly fashion and allows for the immediate retrieval. Routine Analysis: It is advisable with a SIEM platform to conduct period reviews of the information. SIEM platforms often provide a dashboard that highlights key elements such as the number of connections, data flow, and any critical alerts. SIEMs also allow for reporting so that stakeholders can keep informed of activity. Alerting: SIEM platforms have the ability to alert to specific conditions that may indicate malicious activity. This can include alerting from security controls such as anti-virus, Intrusion Prevention or Detection Systems. Another key feature of SIEM platforms is event correlation. This technique examines the log files and determines if there is a link or any commonality in the events. The SIEM then has the capability to alert on these types of events. For example, if a user account attempts multiple logins across a number of systems in the enterprise, the SIEM can identify that activity and alert to it. Incident Response: As the SIEM becomes the single point for log aggregation and analysis; CSIRT analysts will often make use of the SIEM during an incident. CSIRT analysis will often make queries on the platform as well as download logs for offline analysis. Because of the centralization of log files, the time to conduct searches and event collection is significantly reduced. For example, a CSIRT analysis has indicated a user account has been compromised. Without a SIEM, the CSIRT analyst would have to check various systems for any activity pertaining to that user account. With a SIEM in place, the analyst simply conducts a search of that user account on the SIEM platform, which has aggregated user account activity, logs from systems all over the enterprise. The result is the analyst has a clear idea of the user account activity in a fraction of the time it would have taken to examine logs from various systems throughout the enterprise. 
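Even without a commercial SIEM, the core idea of aggregating logs into one searchable place can be approximated with plain syslog forwarding, which at least keeps network evidence in a single location until a fuller platform is available. The collector hostname, file paths, and the reuse of the USSalesSyncAcct account below are purely illustrative; this is a rough sketch of the concept, assuming the collector side of rsyslog has already been configured to listen and write remote logs under /var/log/remote/.

# On each Linux log source, forward all syslog traffic to a central collector over TCP
# (the collector name is an assumption for this example):
echo '*.* @@logcollector.example.local:514' | sudo tee /etc/rsyslog.d/90-forward.conf
sudo systemctl restart rsyslog

# On the collector, a responder can then ask "where was this account used?" with one
# search instead of logging in to every system:
grep -r "USSalesSyncAcct" /var/log/remote/ | sort

# The same search can be narrowed to a window of interest, for example a single day:
grep -rh "USSalesSyncAcct" /var/log/remote/ | grep "Jul 15" | sort

None of this replaces SIEM features such as correlation, retention policies, or alerting, but it does shorten the search step described above.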
SIEM platforms do entail a good deal of time and money to purchase and implement. Adding to that cost is the constant upkeep, maintenance and modification to rules that is necessary. From an Incident Response perspective though, a properly configured and maintained SIEM is vital to gathering network-based evidence in a timely manner. In addition, the features and capability of SIEM platforms can significantly reduce the time it takes to determine a root cause of an incident once it has been detected. The following article has an excellent breakdown and use cases of SIEM platforms in enterprise environments: https://gbhackers.com/security-information-and-event-management-siem-a-detailed-explanation/. Security onion Full-featured SIEM platforms may be cost prohibitive for some organizations. One option that is available is the open source platform Security Onion. The Security Onion ties a wide range of security tools such as OSSEC, Suricata, and Snort into a single platform. Security Onion also has features such as dashboards and tools for deep analysis of log files. For example, the following screenshot shows the level of detail available: Although installing and deploying the Security Onion may require some resources in time, it is a powerful low cost alternative providing a solution to organizations that cannot deploy a full-featured SIEM solution. (The Security Onion platform and associated documentation is available at https://securityonion.net/). Summary Evidence that is pertinent to incident responders is not just located on the hard drive of a compromised host. There is a wealth of information available from network devices spread throughout the environment. With proper preparation, a CSIRT may be able to leverage the evidence provided by these devices through solutions such as a SIEM. CSIRT personnel also have the ability to capture the network traffic for later analysis through a variety of methods and tools. Behind all of these techniques though, is the legal and policy implications that CSIRT personnel and the organization at large needs to navigate. By preparing for the legal and technical challenges of network evidence collection, CSIRT members can leverage this evidence and move closer to the goal of determining the root cause of an incident and bringing the organization back up to operations. Resources for Article: Further resources on this subject: Selecting and Analyzing Digital Evidence [article] Digital and Mobile Forensics [article] BackTrack Forensics [article]

OpenDaylight Fundamentals

Packt
05 Jul 2017
14 min read
In this article by Jamie Goodyear, Mathieu Lemay, Rashmi Pujar, Yrineu Rodrigues, Mohamed El-Serngawy, and Alexis de Talhouët the authors of the book OpenDaylight Cookbook, we will be covering the following recipes: Connecting OpenFlow switches Mounting a NETCONF device Browsing data models with Yang UI (For more resources related to this topic, see here.) OpenDaylight is a collaborative platform supported by leaders in the networking industry and hosted by the Linux Foundation. The goal of the platform is to enable the adoption of software-defined networking (SDN) and create a solid base for network functions virtualization (NFV). Connecting OpenFlow switches OpenFlow is a vendor-neutral standard communications interface defined to enable the interaction between the control and forwarding channels of an SDN architecture. The OpenFlow plugin project intends to support implementations of the OpenFlow specification as it evolves. It currently supports OpenFlow versions 1.0 and 1.3.2. In addition, to support the core OpenFlow specification, OpenDaylight Beryllium also includes preliminary support for the Table Type Patterns and OF-CONFIG specifications. The OpenFlow southbound plugin currently provides the following components: Flow management Group management Meter management Statistics polling Let's connect an OpenFlow switch to OpenDaylight. Getting ready This recipe requires an OpenFlow switch. If you don't have any, you can use a mininet-vm with OvS installed. You can download mininet-vm from the website: https://github.com/mininet/mininet/wiki/Mininet-VM-Images. Any version should work The following recipe will be presented using a mininet-vm with OvS 2.0.2. How to do it... Start the OpenDaylight distribution using the karaf script. Using this script will give you access to the karaf CLI: $ ./bin/karaf Install the user facing feature responsible for pulling in all dependencies needed to connect an OpenFlow switch: opendaylight-user@root>feature:install odl-openflowplugin-all It might take a minute or so to complete the installation. Connect an OpenFlow switch to OpenDaylight.we will use mininet-vm as our OpenFlow switch as this VM runs an instance of OpenVSwitch:     Login to mininet-vm using:  Username: mininet   Password: mininet    Let's create a bridge: mininet@mininet-vm:~$ sudo ovs-vsctl add-br br0 Now let's connect OpenDaylight as the controller of br0: mininet@mininet-vm:~$ sudo ovs-vsctl set-controller br0 tcp: ${CONTROLLER_IP}:6633    Let's look at our topology: mininet@mininet-vm:~$ sudo ovs-vsctl show 0b8ed0aa-67ac-4405-af13-70249a7e8a96 Bridge "br0" Controller "tcp: ${CONTROLLER_IP}:6633" is_connected: true Port "br0" Interface "br0" type: internal ovs_version: "2.0.2" ${CONTROLLER_IP} is the IP address of the host running OpenDaylight. We're establishing a TCP connection. Have a look at the created OpenFlow node.Once the OpenFlow switch is connected, send the following request to get information regarding the switch:    Type: GET    Headers:Authorization: Basic YWRtaW46YWRtaW4=    URL: http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/ This will list all the nodes under opendaylight-inventory subtree of MD-SAL that store OpenFlow switch information. As we connected our first switch, we should have only one node there. It will contain all the information the OpenFlow switch has, including its tables, its ports, flow statistics, and so on. How it works... Once the feature is installed, OpenDaylight is listening to connection on port 6633 and 6640. 
Setting up the controller on the OpenFlow-capable switch will immediately trigger a callback on OpenDaylight. It will create the communication pipeline between the switch and OpenDaylight so they can communicate in a scalable and non-blocking way. Mounting a NETCONF device The OpenDaylight component responsible to connect remote NETCONF devices is called the NETCONF southbound plugin aka the netconf-connector. Creating an instance of the netconf-connector will connect a NETCONF device. The NETCONF device will be seen as a mount point in the MD-SAL, exposing the device configuration and operational datastore and its capabilities. These mount points allow applications and remote users (over RESTCONF) to interact with the mounted devices. The netconf-connector currently supports the RFC-6241, RFC-5277 and RFC-6022. The following recipe will explain how to connect a NETCONF device to OpenDaylight. Getting ready This recipe requires a NETCONF device. If you don't have any, you can use the NETCONF test tool provided by OpenDaylight. It can be downloaded from the OpenDaylight Nexus repository: https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/netconf/netconf-testtool/1.0.4-Beryllium-SR4/netconf-testtool-1.0.4-Beryllium-SR4-executable.jar How to do it... Start OpenDaylight karaf distribution using the karaf script. Using this script will give you access to the karaf CLI: $ ./bin/karaf Install the user facing feature responsible for pulling in all dependencies needed to connect an NETCONF device: opendaylight-user@root>feature:install odl-netconf-topology odl-restconf It might take a minute or so to complete the installation. Start your NETCONF device.If you want to use the NETCONF test tool, it is time to simulate a NETCONF device using the following command: $ java -jar netconf-testtool-1.0.1-Beryllium-SR4-executable.jar --device-count 1 This will simulate one device that will be bound to port 17830. Configure a new netconf-connectorSend the following request using RESTCONF:    Type: PUT    URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-deviceBy looking closer at the URL, you will notice that the last part is new-netconf-device. This must match the node-id that we will define in the payload. Headers:Accept: application/xml Content-Type: application/xml Authorization: Basic YWRtaW46YWRtaW4= Payload: <node > <node-id>new-netconf-device</node-id> <host >127.0.0.1</host> <port >17830</port> <username >admin</username> <password >admin</password> <tcp-only >false</tcp- only> </node> Let's have a closer look at this payload:    node-id: Defines the name of the netconf-connector.    address: Defines the IP address of the NETCONF device.    port: Defines the port for the NETCONF session.    username: Defines the username of the NETCONF session. This should be provided by the NETCONF device configuration.    password: Defines the password of the NETCONF session. As for the username, this should be provided by the NETCONF device configuration.    tcp-only: Defines whether or not the NETCONF session should use tcp or ssl. If set to true it will use tcp. This is the default configuration of the netconf-connector; it actually has more configurable elements that will be present in a second part. Once you have completed the request, send it. This will spawn a new netconf-connector that connects to the NETCONF device at the provided IP address and port using the provided credentials. 
Verify that the netconf-connector has correctly been pushed and get information about the connected NETCONF device.First, you could look at the log to see if any error occurred. If no error has occurred, you will see: 2016-05-07 11:37:42,470 | INFO | sing-executor-11 | NetconfDevice | 253 - org.opendaylight.netconf.sal-netconf-connector - 1.3.0.Beryllium | RemoteDevice{new-netconf-device}: Netconf connector initialized successfully Once the new netconf-connector is created, some useful metadata are written into the MD-SAL's operational datastore under the network-topology subtree. To retrieve this information, you should send the following request: Type: GET Headers: Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device We're using new-netconf-device as the node-id because this is the name we assigned to the netconf-connector in a previous step. This request will provide information about the connection status and device capabilities. The device capabilities are all the yang models the NETCONF device is providing in its hello-message that was used to create the schema context. More configuration for the netconf-connectorAs mentioned previously, the netconf-connector contains various configuration elements. Those fields are non-mandatory, with default values. If you do not wish to override any of these values, you shouldn't provide them. schema-cache-directory: This corresponds to the destination schema repository for yang files downloaded from the NETCONF device. By default, those schemas are saved in the cache directory ($ODL_ROOT/cache/schema). Using this configuration will define where to save the downloaded schema related to the cache directory. For instance, if you assigned new-schema-cache, schemas related to this device would be located under $ODL_ROOT/cache/new-schema-cache/. reconnect-on-changed-schema: If set to true, the connector will auto disconnect/reconnect when schemas are changed in the remote device. The netconf-connector will subscribe to base NETCONF notifications and listens for netconf-capability-change notification. Default value is false. connection-timeout-millis: Timeout in milliseconds after which the connection must be established. Default value is 20000 milliseconds.  default-request-timeout-millis: Timeout for blocking operations within transactions. Once this timer is reached, if the request is not yet finished, it will be canceled. Default value is 60000 milliseconds.  max-connection-attempts: Maximum number of connection attempts. Non-positive or null value is interpreted as infinity. Default value is 0, which means it will retry forever.  between-attempts-timeout-millis: Initial timeout in milliseconds between connection attempts. This will be multiplied by the sleep-factor for every new attempt. Default value is 2000 milliseconds.  sleep-factor: Back-off factor used to increase the delay between connection attempt(s). Default value is 1.5.  keepalive-delay: Netconf-connector sends keep alive RPCs while the session is idle to ensure session connectivity. This delay specifies the timeout between keep alive RPC in seconds. Providing a 0 value will disable this mechanism. Default value is 120 seconds. 
Using this configuration, your payload would look like this: <node > <node-id>new-netconf-device</node-id> <host >127.0.0.1</host> <port >17830</port> <username >admin</username> <password >admin</password> <tcp-only >false</tcp- only> <schema-cache-directory >new_netconf_device_cache</schema-cache-directory> <reconnect-on-changed-schema >false</reconnect-on-changed-schema> <connection-timeout-millis >20000</connection-timeout-millis> <default-request-timeout-millis >60000</default-request-timeout-millis> <max-connection-attempts >0</max-connection-attempts> <between-attempts-timeout-millis >2000</between-attempts-timeout-millis> <sleep-factor >1.5</sleep-factor> <keepalive-delay >120</keepalive-delay> </node> How it works... Once the request to connect a new NETCONF device is sent, OpenDaylight will setup the communication channel, used for managing, interacting with the device. At first, the remote NETCONF device will send its hello-message defining all of the capabilities it has. Based on this, the netconf-connector will download all the YANG files provided by the device. All those YANG files will define the schema context of the device. At the end of the process, some exposed capabilities might end up as unavailable, for two possible reasons: The NETCONF device provided a capability in its hello-message but hasn't provided the schema. ODL failed to mount a given schema due to YANG violation(s). OpenDaylight parses YANG models as per as the RFC 6020; if a schema is not respecting the RFC, it could end up as an unavailable-capability. If you encounter one of these situations, looking at the logs will pinpoint the reason for such a failure. There's more... Once the NETCONF device is connected, all its capabilities are available through the mount point. View it as a pass-through directly to the NETCONF device. Get datastore To see the data contained in the device datastore, use the following request: Type: GET Headers:Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8080/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/ Adding yang-ext:mount/ to the URL will access the mount point created for new-netconf-device. This will show the configuration datastore. If you want to see the operational one, replace config by operational in the URL. If your device defines yang model, you can access its data using the following request: Type: GET Headers:Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8080/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<container> The <module> represents a schema defining the <container>. The <container> can either be a list or a container. It is not possible to access a single leaf. You can access containers/lists within containers/lists. The last part of the URL would look like this: …/ yang-ext:mount/<module>:<container>/<sub-container> Invoke RPC In order to invoke an RPC on the remote device, you should use the following request: Type: POST Headers:Accept: application/xml Content-Type: application/xml Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8080/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<operation> This URL is accessing the mount point of new-netconf-device, and through this mount point we're accessing the <module> to call its <operation>. 
The <module> represents a schema defining the RPC and <operation> represents the RPC to call. Delete a netconf-connector Removing a netconf-connector will drop the NETCONF session and all resources will be cleaned. To perform such an operation, use the following request: Type: DELETE Headers:Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device By looking closer to the URL, you can see that we are removing the NETCONF node-id new-netconf-device. Browsing data models with Yang UI Yang UI is a user interface application through which one can navigate among all yang models available in the OpenDaylight controller. Not only does it aggregate all data models, it also enables their usage. Using this interface, you can create, remove, update, and delete any part of the model-driven datastore. It provides a nice, smooth user interface making it easier to browse through the model(s). This recipe will guide you through those functionalities. Getting ready This recipe only requires the OpenDaylight controller and a web-browser. How to do it... Start your OpenDaylight distribution using the karaf script. Using this client will give you access to the karaf CLI: $ ./bin/karaf Install the user facing feature responsible to pull in all dependencies needed to use Yang UI: opendaylight-user@root>feature:install odl-dlux-yangui It might take a minute or so to complete the installation. Navigate to http://localhost:8181/index.html#/yangui/index.Username: admin Password: admin Once logged in, all modules will be loading until you can see this message at the bottom of the screen: Loading completed successfully You should see the API tab listing all yang models in the following format: <module-name> rev.<revision-date> For instance:     cluster-admin rev.2015-10-13     config rev.2013-04-05     credential-store rev.2015-02-26 By default, there isn't much you can do with the provided yang models. So, let's connect an OpenFlow switch to better understand how to use this Yang UI. Once done, refresh your web page to load newly added modules. Look for opendaylight-inventory rev.2013-08-19 and select the operational tab, as nothing will yet be in the config datastore. Then click on nodes and you'll see a request bar at the bottom of the page with multiple options.You can either copy the request to the clipboard to use it on your browser, send it, show a preview of it, or define a custom API request. For now, we will only send the request. You should see Request sent successfully and under this message should be the retrieved data. As we only have one switch connected, there is only one node. All the switch operational information is now printed on your screen. You could do the same request by specifying the node-id in the request. To do that you will need to expand nodes and click on node {id}, which will enable a more fine-grained search. How it works... OpenDaylight has a model-driven architecture, which means that all of its components are modeled using YANG. While installing features, OpenDaylight loads YANG models, making them available within the MD-SAL datastore. YangUI is a representation of this datastore. Each schema represents a subtree based on the name of the module and its revision-date. YangUI aggregates and parses all those models. It also acts as a REST client; through its web interface we can execute functions such as GET, POST, PUT, and DELETE. 
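The same requests can be issued outside the browser, which is useful when you want to script a quick check instead of clicking through Yang UI. As a sketch, the curl calls below mirror the opendaylight-inventory query used earlier in this article; admin/admin and localhost:8181 are the defaults shown above, and the openflow:1 node id is only an example of what a connected switch might report.

# Equivalent of browsing the operational opendaylight-inventory subtree in Yang UI
curl -u admin:admin -H "Accept: application/xml" \
  http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/

# Drill down to a single switch once its node id is known (the id shown is illustrative)
curl -u admin:admin -H "Accept: application/xml" \
  "http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:1"

Swapping operational for config in the URL queries the configuration datastore instead, just as switching datastores does in the UI.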
There's more...
The example shown previously can be improved on, as there was no user YANG model loaded. For instance, if you mount a NETCONF device containing its own YANG model, you could interact with it through Yang UI. You would use the config datastore to push or update some data, and you would see the operational datastore updated accordingly. In addition, accessing your data would be much easier than having to define the exact URL.

See also
Using API doc as a REST API client.

Summary
Throughout this article, we worked through recipes for connecting OpenFlow switches, mounting a NETCONF device, and browsing data models with Yang UI.

Resources for Article:
Further resources on this subject:
Introduction to SDN - Transformation from legacy to SDN [article]
The OpenFlow Controllers [article]

SQL Server basics

Packt
05 Jul 2017
14 min read
In this article by Jasmin Azemović, author of the book SQL Server 2017 for Linux, we will cover a basic overview of SQL Server and learn about backup.

Linux, or to be precise GNU/Linux, is one of the best alternatives to Windows, and in many cases it is the first choice of environment for daily tasks such as system administration, running different kinds of services, or just as a platform for desktop applications.

Linux's native working interface is the command line. Yes, KDE and GNOME are great graphical user interfaces, and from a user's perspective clicking is much easier than typing; but this observation is relative. GUIs changed the perception of modern IT and computer usage, and some tasks are very difficult without a mouse, but not impossible. On the other hand, the command line is where you can solve some tasks quicker, more efficiently, and better than in a GUI. You don't believe me? Imagine these situations and try to implement them through your favorite GUI tool:

In a folder of 1,000 files, copy only those whose names start with A, end with Z, and have the .txt extension
Rename 100 files at the same time
Redirect console output to a file

There are many such examples; in each of them the Command Prompt is superior, and Linux Bash even more so. A quick Bash sketch of these tasks follows.
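This is only an illustrative sketch; the folder names, file names, and prefix are made up:

# Copy only the .txt files whose names start with A and end with Z
cp A*Z.txt /path/to/target/folder/

# Rename many files at once, for example by adding a prefix to every .txt file
for f in *.txt; do mv "$f" "renamed_$f"; done

# Redirect console output to a file instead of the screen
ls -l > listing.txt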
Microsoft SQL Server is considered to be one of the most commonly used systems for database management in the world. This popularity has been gained by its high degree of stability, security, and business intelligence and integration functionality.

Microsoft SQL Server for Linux is a database server that accepts queries from clients, evaluates them and then internally executes them, to deliver results to the client. The client is an application that produces queries and, through a database provider and communication protocol, sends requests to the server and retrieves the results for client-side processing and/or presentation.

(For more resources related to this topic, see here.)

Overview of SQL Server
When writing queries, it's important to understand that the interaction between the tool of choice and the database is based on client-server architecture, and to understand the processes that are involved. It's also important to understand which components are available and what functionality they provide. With a broader understanding of the full product and its components and tools, you'll be able to make better use of its functionality, and also benefit from using the right tool for specific jobs.

Client-server architecture concepts
In a client-server architecture, the client is described as a user and/or device, and the server as a provider of some kind of service.

Figure: SQL Server client-server communication

As you can see in the preceding figure, the client is represented as a machine, but in reality it can be anything:
Custom application (desktop, mobile, web)
Administration tool (SQL Server Management Studio, dbForge, sqlcmd…)
Development environment (Visual Studio, KDevelop…)

SQL Server Components
Microsoft SQL Server consists of many different components to serve a variety of organizational needs of a data platform. Some of these are:
Database Engine is the relational database management system (RDBMS), which hosts databases and processes queries to return results of structured, semi-structured, and non-structured data in online transactional processing (OLTP) solutions.
Analysis Services is the online analytical processing (OLAP) engine as well as the data mining engine. OLAP is a way of building multi-dimensional data structures for fast and dynamic analysis of large amounts of data, allowing users to navigate hierarchies and dimensions to reach granular and aggregated results and achieve a comprehensive understanding of business values. Data mining is a set of tools used to predict and analyse trends in data behaviour, and much more.
Integration Services supports the need to extract data from sources, transform it, and load it into destinations (ETL) by providing a central platform that distributes and adjusts large amounts of data between heterogeneous data destinations.
Reporting Services is a central platform for the delivery of structured data reports and offers a standardized, universal data model for information workers to retrieve data and model reports without needing to understand the underlying data structures.
Data Quality Services (DQS) is used to perform a variety of data cleansing, correction, and data quality tasks, based on a knowledge base. DQS is mostly used in the ETL process before loading a data warehouse.
R Services (advanced analytics) is a new service that incorporates the powerful R language for advanced statistical analytics. It is part of the database engine, and you can combine classic SQL code with R scripts.

At the time of writing this book, only one of these services was available in SQL Server for Linux: the database engine. This will change in the future, and you can expect more services to become available.

How it works on Linux
SQL Server is a product with a 30-year-long history of development. We are speaking about millions of lines of code written for a single operating system (Windows). The logical question is how Microsoft successfully ported those millions of lines of code to the Linux platform so fast. SQL Server on Linux officially became public in the autumn of 2016, while such a process would normally take years of development and investment. Fortunately, it was not so hard.

From version 2005, the SQL Server database engine has had a platform layer called SQL Operating System (SOS). It sits between the SQL Server engine and the Windows operating system. The main purpose of SOS is to minimize the number of system calls by letting SQL Server deal with its own resources. It greatly improves performance, stability, and the debugging process. On the other hand, it is platform dependent and does not provide an abstraction layer. That was the first big obstacle to even start thinking about a Linux version.

Project Drawbridge is a Microsoft research project created to minimize virtualization resources when a host runs many VMs on the same physical machine. The technical explanation goes beyond the scope of this book (https://www.microsoft.com/en-us/research/project/drawbridge/). Drawbridge brings us to the solution of the problem. The Linux solution uses a hybrid approach, which combines SOS and the library OS from the Drawbridge project to create SQL PAL (SQL Platform Abstraction Layer). This approach creates a set of SOS API calls which do not require Win32 or NT calls and separates them from platform-dependent code. This dramatically reduced the work of porting SQL Server from its native environment to the Linux platform. This figure gives you a high-level overview of SQL PAL (https://blogs.technet.microsoft.com/dataplatforminsider/2016/12/16/sql-server-on-linux-how-introduction/).

Figure: SQL PAL architecture
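If the SQL Server command-line tools are installed, a quick way to see the result of all this is to ask the engine for its version string, which on Linux reports the distribution it runs on. This is only a sketch; the server name, the sa login, and the password are assumptions that you should replace with your own values:

# Connect to the local instance and print the version string
sqlcmd -S localhost -U sa -P '<YourStrongPassword>' -Q "SELECT @@VERSION;"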
Retrieving and filtering data
Databases are one of the cornerstones of modern business companies. Data retrieval is usually done with the SELECT statement, and it is therefore very important that you are familiar with this part of your journey. Retrieved data is often not organized in the way you want it to be, so it requires additional formatting. Besides formatting, accessing very large amounts of data requires you to take into account the speed and manner of query execution, which can have a major impact on system performance.

Databases usually consist of many tables where all the data is stored. Table names clearly describe the entities whose data is stored inside, so if you need to create a list of new products or a list of customers who had the most orders, you need to retrieve that data by writing a query. A query is an inquiry into the database using the SELECT statement, which is the first and most fundamental SQL statement that we are going to introduce in this chapter.

The SELECT statement consists of a set of clauses that specify which data will be included in the query result set. All clauses of SQL statements are keywords and, because of that, will be written in capital letters. A syntactically correct SELECT statement requires a mandatory FROM clause, which specifies the source of the data you want to retrieve. Besides the mandatory clauses, there are a few optional ones that can be used to filter and organize data (a short example combining several of them follows the list):
INTO enables you to insert data (retrieved by the SELECT clause) into a different table. It is mostly used to create a table backup.
WHERE places conditions on a query and eliminates rows that would be returned by a query without any conditions.
ORDER BY displays the query result in either ascending or descending alphabetical order.
GROUP BY provides a mechanism for arranging identical data into groups.
HAVING allows you to create selection criteria at the group level.
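The following sketch runs such a query through sqlcmd. The Sales.Orders table, its columns, and the login details are assumptions used purely for illustration, not objects from a specific sample database:

# Filter rows, group them per customer, keep only frequent customers, and sort the result
sqlcmd -S localhost -U sa -P '<YourStrongPassword>' -Q "
SELECT CustomerID, COUNT(*) AS OrderCount
FROM Sales.Orders
WHERE OrderDate >= '2017-01-01'
GROUP BY CustomerID
HAVING COUNT(*) > 5
ORDER BY OrderCount DESC;"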
SQL Server recovery models
When it comes to databases, backup is something that you should consider and reconsider really carefully. Mistakes can cost you money, users, data, and time, and I don't know which of those has the bigger consequences. Backup and restore are elements of a much wider picture known as disaster recovery, which is a science in itself; but from the database perspective and for the usual administration tasks, these two operations are the foundation for everything else.

Before you even think about your backups, you need to understand the recovery models that SQL Server uses internally while the database is in operational mode. The recovery model is about maintaining data in the event of a server failure. It also defines the amount of information that SQL Server writes to the log file for the purpose of recovery. SQL Server has three database recovery models:
Simple recovery model
Full recovery model
Bulk-logged recovery model

Simple recovery model
This model is typically used for small databases and scenarios where data changes are infrequent. It is limited to restoring the database to the point when the last backup was created, which means that all changes made after the backup are gone and you will need to recreate them manually. The major benefit of this model is that the log file takes only a small amount of storage space. Whether and when to use it depends on the business scenario.

Full recovery model
This model is recommended when recovery from damaged storage is the highest priority and data loss should be minimal. SQL Server uses copies of the database and log files to restore the database. The database engine logs all changes to the database, including bulk operations and most DDL commands. If the transaction log file is not damaged, SQL Server can recover all data except the transactions that were in process at the time of failure (that is, not yet committed to the database file). All logged transactions give you the possibility of point-in-time recovery, which is a really cool feature. The major limitation of this model is the large size of the log files, which leads to performance and storage issues. Use it only in scenarios where every insert is important and loss of data is not an option.

Bulk-logged recovery model
This model sits somewhere in the middle between simple and full. It uses database and log backups to recreate the database. Compared to the full recovery model, it uses less log space for CREATE INDEX and bulk-load operations such as SELECT INTO. Consider this example: SELECT INTO can load a table with 1,000,000 records in a single statement, and the log will only record the occurrence of the operation, not the details. This approach uses less storage space compared to the full recovery model. The bulk-logged recovery model is a good fit for databases that are used for ETL processes and data migrations.

SQL Server has a system database called model. This database is the template for each new database you create. If you use just the CREATE DATABASE statement without any additional parameters, it simply copies the model database with all its properties and metadata. It also inherits the default recovery model, which is full. So, the conclusion is that each new database will be in full recovery mode. This can be changed during and after the creation process.

Elements of backup strategy
A good backup strategy is not just about creating a backup. It is a process with many elements and conditions that should be fulfilled to achieve the final goal: the most efficient backup strategy plan. To create a good strategy, we need to consider the following questions:
Who can create backups?
Backup media
Types of backups

Who can create backups?
A SQL Server user needs to be a member of a security role that is authorized to execute backup operations. These are:
sysadmin server role: every user with sysadmin permission can work with backups. Our default sa user is a member of the sysadmin role.
db_owner database role: every user who can create databases can, by default, execute any backup/restore operation.
db_backupoperator database role: sometimes you need a person (or persons) dedicated to every aspect of backup operations. This is common in large-scale organizations with tens or even hundreds of SQL Server instances; in those environments, backup is not a trivial business.

Backup media
An important decision is where to store backup files and how to organize backup files and devices. SQL Server gives you a large set of combinations with which to define your own backup media strategy. Before we explain how to store backups, let's stop for a minute and describe the following terms:
Backup disk is a hard disk or another storage device that contains backup files.
Backup file is just an ordinary file on top of the file system.
Media set is an ordered collection of backup media of a fixed type (for example, three tape devices: Tape1, Tape2, and Tape3).
Physical backup device can be a disk file or a tape drive. You will need to provide information to SQL Server about your backup device. A backup file that is created before it is used for a backup operation is called a backup device.

Figure: Backup devices

The simplest way to store and handle database backups is by using a backup disk and storing them as regular operating system files, usually with the extension .bak.
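To make this concrete, the following sketch switches a database to the full recovery model and then takes a full backup to a .bak file on a backup disk. The database name, target path, and login details are assumptions; adjust them to your environment and make sure the target directory exists and is writable by SQL Server:

# Put the database into the full recovery model (new databases usually inherit this from model anyway)
sqlcmd -S localhost -U sa -P '<YourStrongPassword>' \
  -Q "ALTER DATABASE DemoDB SET RECOVERY FULL;"

# Take a full database backup to a file on the backup disk
sqlcmd -S localhost -U sa -P '<YourStrongPassword>' \
  -Q "BACKUP DATABASE DemoDB TO DISK = '/var/opt/mssql/backup/DemoDB.bak' WITH INIT;"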
Linux does not care much about extensions, but it is good practice to mark those files with something obvious. This chapter will explain how to use backup disk devices, because every reader of this book should have a hard disk with an installation of SQL Server on Linux; I hope so! Tapes and media sets are used for large-scale database operations such as enterprise-class businesses (banks, government institutions, and so on).

Disk backup devices can be anything such as a simple hard disk drive, an SSD disk, a hot-swap disk, a USB drive, and so on. The size of the disk determines the maximum size of the database backup file. It is recommended that you use a different disk as the backup disk; using this approach, you separate the backups from the database data and log disks.

Imagine this: the database files and the backups are on the same device. If that device fails, your perfect backup strategy will fall like a house of cards. Don't do this; always separate them. Some serious disaster recovery strategies (backup is only a small part of them) suggest using different geographic locations. This makes sense: a natural disaster, or something else of that scale, can knock down the business if you can't restore your system from a secondary location in a reasonably small amount of time.

Summary
Backup and restore are not something that you can leave aside. They require serious analysis and planning, and SQL Server gives you powerful backup types and options to build your disaster recovery policy for SQL Server on Linux. Now you can do additional research and expand your knowledge.

A database typically contains dozens of tables, and therefore it is extremely important that you master creating queries over multiple tables. This implies knowledge of how the JOIN operator works, in combination with elements of string manipulation.

Resources for Article:
Further resources on this subject:
Review of SQL Server Features for Developers [article]
Configuring a MySQL linked server on SQL Server 2008 [article]
Exception Handling in MySQL for Python [article]