How-To Tutorials - Programming


Running Your Applications with AWS

Cheryl Adams
17 Aug 2016
4 min read
If you've ever been told not to run with scissors, you should not have the same concern when running with AWS. It is neither dangerous nor unsafe when you know what you are doing and where to look when you don't. Amazon's current service offering, AWS (Amazon Web Services), is a collection of services, applications, and tools that can be used to deploy your infrastructure and application environment to the cloud. Amazon gives you the option to start with a 'free tier' and then move toward a pay-as-you-go model. We will highlight a few of the features you will see when you open your account with AWS.

One of the first things you will notice is that Amazon offers a wealth of information regarding cloud computing right up front. Whether you are a novice, an amateur, or an expert in cloud computing, Amazon offers documented information before you create your account. This type of information is essential if you are exploring this tool for a project or doing some self-study. If you are a pre-existing Amazon customer, you can use your same account to get started with AWS. If you want to keep your personal account separate from your development or business use, it would be best to create a separate account.

Amazon Web Services landing page

The Free Tier is one of the most attractive features of AWS. As a new account holder, you are entitled to twelve months within the Free Tier. In addition to this span of time, there are services that can continue after the free tier is over. This gives the user ample time to explore the offerings within this free-tier period. The caution is not to exceed the free service limits, as doing so will incur charges. Setting up the free tier still requires a credit card. Fee-based services will be offered throughout the free tier, so it is important not to select a fee-based service unless you are ready to start paying for it. Actual paid use will vary based on what you have selected.

AWS services and offerings (shown on an open account): an overview of the services available on the landing page

Amazon's service list is very robust. If you are already considering AWS, hopefully this means you are aware of what you need, or at least what you would like to use. If not, this would be a good time to press pause and look at some resource-based materials. Before the clock starts ticking on your free tier, I would recommend a slow walk through the introductory information on the site to ensure that you are selecting the right mix of services before creating your account. Amazon's technical resources include a 10-minute tutorial that gives you a complete overview of the services. Topics like 'AWS Training and Introduction' and 'Get Started with AWS' include a list of 10-minute videos as well as a short list of 'how to' instructions for some of the more commonly used features. If you are a techie by trade or hobby, this may be something you want to dive into immediately.

In a company, there is generally a predefined need or issue that the organization feels can be resolved by the cloud. If it is a team initiative, it would be good to review the resources mentioned in this article so that everyone is on the same page as to what this solution can do. It's recommended, before you start any trial, subscription, or new service, that you have a set goal or expectation of why you are doing it. Simply stated, a cloud solution is not the perfect solution for everyone. There is so much information here on the AWS site.
It's also great if you are comparing competing cloud service vendors in the same space. You will be able to do a complete assessment of most services within the free tier. You can map use case scenarios to determine if AWS is the right fit for your project. AWS First Project is a great place to get started if you are new to AWS. If you are wondering how to get started, these technical resources will set you in the right direction. By reviewing this information during your setup, or before you start, you will be able to make good use of your first few months and your introduction to AWS.

About the author

Cheryl Adams is a senior cloud data and infrastructure architect in the healthcare data realm. She is also the co-author of Professional Hadoop by Wrox.


Go Programming Control Flow

Packt
10 Aug 2016
13 min read
In this article, Vladimir Vivien, author of the book Learning Go Programming, explains some basic control flow constructs of the Go programming language. Go borrows much of its control flow syntax from the C family of languages. It supports all of the expected control structures, including if-else, switch, for loops, and even goto. Conspicuously absent, though, are while or do-while statements. The following topics examine Go's control flow elements, some of which you may already be familiar with, and others that bring a set of functionalities not found in other languages:

The if statement
The switch statement
The type switch

(For more resources related to this topic, see here.)

The If Statement

The if statement in Go borrows its basic structural form from other C-like languages. The statement conditionally executes a code block when the Boolean expression that follows the if keyword evaluates to true, as illustrated in the following abbreviated program that displays information about world currencies:

import "fmt" type Currency struct { Name string Country string Number int } var CAD = Currency{ Name: "Canadian Dollar", Country: "Canada", Number: 124} var FJD = Currency{ Name: "Fiji Dollar", Country: "Fiji", Number: 242} var JMD = Currency{ Name: "Jamaican Dollar", Country: "Jamaica", Number: 388} var USD = Currency{ Name: "US Dollar", Country: "USA", Number: 840} func main() { num0 := 242 if num0 > 100 || num0 < 900 { fmt.Println("Currency: ", num0) printCurr(num0) } else { fmt.Println("Currency unknown") } if num1 := 388; num1 > 100 || num1 < 900 { fmt.Println("Currency:", num1) printCurr(num1) } } func printCurr(number int) { if CAD.Number == number { fmt.Printf("Found: %+v\n", CAD) } else if FJD.Number == number { fmt.Printf("Found: %+v\n", FJD) } else if JMD.Number == number { fmt.Printf("Found: %+v\n", JMD) } else if USD.Number == number { fmt.Printf("Found: %+v\n", USD) } else { fmt.Println("No currency found with number", number) } }

The if statement in Go looks similar to other languages. However, it sheds a few syntactic rules while enforcing new ones. The parentheses around the test expression are not necessary. While the following if statement will compile, it is not idiomatic: if (num0 > 100 || num0 < 900) { fmt.Println("Currency: ", num0) printCurr(num0) } Use instead: if num0 > 100 || num0 < 900 { fmt.Println("Currency: ", num0) printCurr(num0) }

The curly braces for the code block are always required. The following snippet will not compile: if num0 > 100 || num0 < 900 printCurr(num0) However, this will compile: if num0 > 100 || num0 < 900 {printCurr(num0)} It is idiomatic, however, to write the if statement on multiple lines, no matter how simple the statement block may be; this encourages good style and clarity. The preferred idiomatic layout for the statement is to use multiple lines as follows: if num0 > 100 || num0 < 900 { printCurr(num0) }

The if statement may include an optional else block, which is executed when the expression in the if block evaluates to false. The code in the else block must be wrapped in curly braces using multiple lines, as shown in the following: if num0 > 100 || num0 < 900 { fmt.Println("Currency: ", num0) printCurr(num0) } else { fmt.Println("Currency unknown") }
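The listing above is abbreviated (it omits the package clause), so it will not compile exactly as printed. A minimal, self-contained sketch of the same if and if/else forms, with the package clause and sample values added here (they are not taken from the book), looks like this:

package main

import "fmt"

func main() {
    // No parentheses around the condition; the braces are mandatory.
    num0 := 242
    if num0 > 100 || num0 < 900 {
        fmt.Println("Currency: ", num0)
    } else {
        fmt.Println("Currency unknown")
    }

    // Initialization form: num1 is scoped to this if statement only.
    if num1 := 388; num1 > 100 || num1 < 900 {
        fmt.Println("Currency:", num1)
    }
}

Pasting this into the Go Playground, or running it locally with go run, is an easy way to experiment with the conditions.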
The else keyword may be immediately followed by another if statement, forming an if-else-if chain, as used in the printCurr() function from the source code listed earlier: if CAD.Number == number { fmt.Printf("Found: %+v\n", CAD) } else if FJD.Number == number { fmt.Printf("Found: %+v\n", FJD) }

The if-else-if statement chain can grow as long as needed and may be terminated by an optional else statement to express all other untested conditions. Again, this is done in the printCurr() function, which tests four conditions using if-else-if blocks and, lastly, includes an else block to catch any other untested conditions: func printCurr(number int) { if CAD.Number == number { fmt.Printf("Found: %+v\n", CAD) } else if FJD.Number == number { fmt.Printf("Found: %+v\n", FJD) } else if JMD.Number == number { fmt.Printf("Found: %+v\n", JMD) } else if USD.Number == number { fmt.Printf("Found: %+v\n", USD) } else { fmt.Println("No currency found with number", number) } } In Go, however, the idiomatic and cleaner way to write such a deep if-else-if code block is to use an expressionless switch statement. This is covered later, in the section on switch statements.

If Statement Initialization

The if statement supports a composite syntax where the tested expression is preceded by an initialization statement. At runtime, the initialization is executed before the test expression is evaluated, as illustrated in this code snippet (from the program listed earlier): if num1 := 388; num1 > 100 || num1 < 900 { fmt.Println("Currency:", num1) printCurr(num1) } The initialization statement follows normal variable declaration and initialization rules. The scope of the initialized variables is bound to the if statement block, beyond which they become unreachable. This is a commonly used idiom in Go and is supported in the other flow control constructs covered in this article.

Switch Statements

Go also supports a switch statement similar to that found in other languages such as C or Java. The switch statement in Go achieves multi-way branching by evaluating values or expressions from case clauses, as shown in the following abbreviated source code:

import "fmt" type Curr struct { Currency string Name string Country string Number int } var currencies = []Curr{ Curr{"DZD", "Algerian Dinar", "Algeria", 12}, Curr{"AUD", "Australian Dollar", "Australia", 36}, Curr{"EUR", "Euro", "Belgium", 978}, Curr{"CLP", "Chilean Peso", "Chile", 152}, Curr{"EUR", "Euro", "Greece", 978}, Curr{"HTG", "Gourde", "Haiti", 332}, ... } func isDollar(curr Curr) bool { var result bool switch curr { default: result = false case Curr{"AUD", "Australian Dollar", "Australia", 36}: result = true case Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}: result = true case Curr{"USD", "US Dollar", "United States", 840}: result = true } return result } func isDollar2(curr Curr) bool { dollars := []Curr{currencies[2], currencies[6], currencies[9]} switch curr { default: return false case dollars[0]: fallthrough case dollars[1]: fallthrough case dollars[2]: return true } return false } func isEuro(curr Curr) bool { switch curr { case currencies[2], currencies[4], currencies[10]: return true default: return false } } func main() { curr := Curr{"EUR", "Euro", "Italy", 978} if isDollar(curr) { fmt.Printf("%+v is Dollar currency\n", curr) } else if isEuro(curr) { fmt.Printf("%+v is Euro currency\n", curr) } else { fmt.Println("Currency is not Dollar or Euro") } dol := Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344} if isDollar2(dol) { fmt.Println("Dollar currency found:", dol) } }
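Because the currencies slice in the listing above is truncated with an ellipsis, that program cannot be compiled as printed. The following trimmed sketch (the slice is shortened and the indices adjusted here; it is not the book's code) is runnable and exercises the same expression switch:

package main

import "fmt"

type Curr struct {
    Currency string
    Name     string
    Country  string
    Number   int
}

var currencies = []Curr{
    {"AUD", "Australian Dollar", "Australia", 36},
    {"EUR", "Euro", "Belgium", 978},
    {"EUR", "Euro", "Greece", 978},
    {"USD", "US Dollar", "United States", 840},
}

// isEuro lists two values in a single case clause; matching either one
// selects the case.
func isEuro(curr Curr) bool {
    switch curr {
    case currencies[1], currencies[2]:
        return true
    default:
        return false
    }
}

func main() {
    curr := Curr{"EUR", "Euro", "Greece", 978}
    if isEuro(curr) {
        fmt.Printf("%+v is Euro currency\n", curr)
    } else {
        fmt.Println("Currency is not Euro")
    }
}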
The switch statement in Go has some interesting properties and rules that make it easy to use and reason about. Semantically, Go's switch statement can be used in two contexts: an expression switch statement and a type switch statement. The break statement can be used to escape out of a switch code block early. The switch statement can include a default case for when no other case expressions evaluate to a match; there can only be one default case, and it may be placed anywhere within the switch block.

Using Expression Switches

Expression switches are flexible and can be used in many contexts where the control flow of a program needs to follow multiple paths. An expression switch supports many attributes, as outlined in the following points. Expression switches can test values of any type. For instance, the following code snippet (from the previous program listing) tests values of the struct type Curr: func isDollar(curr Curr) bool { var result bool switch curr { default: result = false case Curr{"AUD", "Australian Dollar", "Australia", 36}: result = true case Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}: result = true case Curr{"USD", "US Dollar", "United States", 840}: result = true } return result } The expressions in case clauses are evaluated from left to right, top to bottom, until a value (or expression) is found that is equal to that of the switch expression. Upon encountering the first case that matches the switch expression, the program will execute the statements for the case block and then immediately exit the switch block. Unlike other languages, the Go case statement does not need to use a break to avoid falling through to the next case. For instance, calling isDollar(Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}) will match the second case statement in the function above. The code will set result to true and exit the switch code block immediately.

Case clauses can have multiple values (or expressions) separated by commas, with a logical OR operator implied between them. For instance, in the following snippet, the switch expression curr is tested against the values currencies[2], currencies[4], or currencies[10] using one case clause until a match is found: func isEuro(curr Curr) bool { switch curr { case currencies[2], currencies[4], currencies[10]: return true default: return false } } The switch statement is the cleaner and preferred idiomatic approach to writing complex conditional statements in Go. This is evident when the snippet above is compared to the following, which does the same comparison using if statements: func isEuro(curr Curr) bool { if curr == currencies[2] || curr == currencies[4] || curr == currencies[10] { return true } else { return false } }

Fallthrough Cases

There is no automatic fallthrough in Go's case clauses as there is in the C or Java switch statements. Recall that a switch block will exit after executing its first matching case. The code must explicitly place the fallthrough keyword, as the last statement in a case block, to force the execution flow to fall through to the successive case block. The following code snippet shows a switch statement with a fallthrough in each case block: func isDollar2(curr Curr) bool { switch curr { case Curr{"AUD", "Australian Dollar", "Australia", 36}: fallthrough case Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}: fallthrough case Curr{"USD", "US Dollar", "United States", 840}: return true default: return false } } When a case is matched, the fallthrough statements cascade down to the first statement of the successive case block. So if curr = Curr{"AUD", "Australian Dollar", "Australia", 36}, the first case will be matched.
Then the flow cascades down to the first statement of the second case block, which is also a fallthrough statement. This causes the first statement, return true, of the third case block to execute. This is functionally equivalent to the following snippet: switch curr { case Curr{"AUD", "Australian Dollar", "Australia", 36}, Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}, Curr{"USD", "US Dollar", "United States", 840}: return true default: return false }

Expressionless Switches

Go supports a form of the switch statement that does not specify an expression. In this format, each case expression must evaluate to the Boolean value true. The following abbreviated source code illustrates the use of an expressionless switch statement, as listed in the function find(). The function loops through the slice of Curr values to search for a match based on field values in the struct passed in: import ( "fmt" "strings" ) type Curr struct { Currency string Name string Country string Number int } var currencies = []Curr{ Curr{"DZD", "Algerian Dinar", "Algeria", 12}, Curr{"AUD", "Australian Dollar", "Australia", 36}, Curr{"EUR", "Euro", "Belgium", 978}, Curr{"CLP", "Chilean Peso", "Chile", 152}, ... } func find(name string) { for i := 0; i < 10; i++ { c := currencies[i] switch { case strings.Contains(c.Currency, name), strings.Contains(c.Name, name), strings.Contains(c.Country, name): fmt.Println("Found", c) } } } Notice that in the previous example the switch statement in function find() does not include an expression. Each case expression is separated by a comma and must evaluate to a Boolean value, with an implied OR operator between each case. The previous switch statement is equivalent to the following use of if statements to achieve the same logic: func find(name string) { for i := 0; i < 10; i++ { c := currencies[i] if strings.Contains(c.Currency, name) || strings.Contains(c.Name, name) || strings.Contains(c.Country, name) { fmt.Println("Found", c) } } }

Switch Initializer

The switch keyword may be immediately followed by a simple initialization statement where variables, local to the switch code block, may be declared and initialized. This convenient syntax uses a semicolon between the initializer statement and the switch expression to declare variables, which may appear anywhere in the switch code block. The following code sample shows how this is done by initializing two variables, name and curr, as part of the switch declaration: func assertEuro(c Curr) bool { switch name, curr := "Euro", "EUR"; { case c.Name == name: return true case c.Currency == curr: return true } return false } The previous code snippet uses an expressionless switch statement with an initializer. Notice the trailing semicolon, which indicates the separation between the initialization statement and the expression area of the switch. In the example, however, the switch expression is empty.
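Wrapped in a complete program, the initializer form can be run directly; the following sketch reuses the assertEuro() logic from above, with only a package clause and a small main() added here:

package main

import "fmt"

type Curr struct {
    Currency string
    Name     string
    Country  string
    Number   int
}

// assertEuro pairs a switch initializer with an expressionless switch:
// name and curr exist only inside the switch block, and each case is a
// Boolean expression evaluated from top to bottom.
func assertEuro(c Curr) bool {
    switch name, curr := "Euro", "EUR"; {
    case c.Name == name:
        return true
    case c.Currency == curr:
        return true
    }
    return false
}

func main() {
    fmt.Println(assertEuro(Curr{"EUR", "Euro", "Italy", 978}))    // true
    fmt.Println(assertEuro(Curr{"USD", "US Dollar", "USA", 840})) // false
}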
Type Switches

Given Go's strong type support, it should be of little surprise that the language supports the ability to query type information. The type switch is a statement that uses the Go interface type to compare the underlying type information of values (or expressions). A full discussion of interface types and type assertion is beyond the scope of this section. For now, all you need to know is that Go offers the type interface{}, or empty interface, as a super type that is implemented by all other types in the type system. When a value is assigned the type interface{}, it can be queried using the type switch, as shown in the function findAny() in the following code snippet, to query information about its underlying type: func find(name string) { for i := 0; i < 10; i++ { c := currencies[i] switch { case strings.Contains(c.Currency, name), strings.Contains(c.Name, name), strings.Contains(c.Country, name): fmt.Println("Found", c) } } } func findNumber(num int) { for _, curr := range currencies { if curr.Number == num { fmt.Println("Found", curr) } } } func findAny(val interface{}) { switch i := val.(type) { case int: findNumber(i) case string: find(i) default: fmt.Printf("Unable to search with type %T\n", val) } } func main() { findAny("Peso") findAny(404) findAny(978) findAny(false) }

The function findAny() takes an interface{} as its parameter. The type switch is used to determine the underlying type and value of the variable val using the type assertion expression: switch i := val.(type) Notice the use of the keyword type in the type assertion expression. Each case clause will be tested against the type information queried from val.(type). Variable i will be assigned the actual value of the underlying type and is used to invoke a function with the respective value. The default block is invoked to guard against any unexpected type being assigned to the parameter val. The function findAny() may then be invoked with values of diverse types, as shown in the following code snippet: findAny("Peso") findAny(404) findAny(978) findAny(false)

Summary

This article gave a walkthrough of the mechanics of control flow in Go, including the if and switch statements. While Go's flow control constructs appear simple and easy to use, they are powerful and implement all the branching primitives expected of a modern language.

Resources for Article:

Further resources on this subject:
Game Development Using C++ [Article]
Boost.Asio C++ Network Programming [Article]
Introducing the Boost C++ Libraries [Article]


Introduction to SOA Testing

Packt
09 Aug 2016
13 min read
In this article by Pranai Nandan, the author of Mastering SoapUI, we will see how the increasing adoption of service-oriented architecture (SOA) across applications brings various technological and business advantages to the organizations implementing it. But, as it's said, there are two sides to every coin. SOA architecture comes with advantages like:

Reusability
Better scalability
Platform independence
Business agility
Enhanced security

But there are also disadvantages like:

Increased response time
High service management effort
High implementation cost

(For more resources related to this topic, see here.)

In this article we will study the following topics:

Introduction to SOA
SoapUI architecture
Test levels in SOA testing
SOA testing approach
Introduction to functional, performance, and security testing using SoapUI

Is SOA really advantageous? Well, let's talk about a few of the advantages of SOA architecture.

Reusability: If we want to reuse the same piece of functionality exposed via a web service, we should be absolutely sure that the functionality of the service works as expected, that its security is reliable, and that it has no performance bottlenecks.

Business agility: With more functional changes being easily adopted in a web service, we make the web service prone to functional bugs.

Enhanced security: Web services are usually wrapped around systems that are protected by several layers of security, such as SSL and the use of security tokens, and the business layer is used to keep the technical services from being directly exposed. If the security of these layers is removed, the web service is highly vulnerable. Also, the use of XML as a communication format opens the service to XML-based attacks.

So, to mitigate these risks we have SOA testing, and to help you test SOA architecture there are multiple testing tools on the market, for example SoapUI, SoapUI Pro, HP Service Test, ITKO LISA, and SOA Parasoft. But the most widely used open source tool in the SOA testing arena is SoapUI. Following is a comparative analysis of the most famous tools in the web service testing and test automation arena.

Comparative analysis:

S.No | Factors | SoapUI | SoapUI Pro | ITKO LISA | SOA Parasoft
1 | Cost | Open source | $400/license | Highly costly | Highly costly
2 | Multilayer testing | Yes | Yes | Yes | Yes
3 | Scripting support | Yes | Yes | Yes | Yes
4 | Protocol support | Yes | Yes | Yes | Yes
5 | CI support | Yes | Yes | Yes | Yes
6 | Ease of use | 8/10 | 9/10 | 9/10 | 9/10
7 | Learning curve | 8/10 | 8/10 | 6/10 | 6/10

As we can see from the preceding comparison metrics, ease of use, learning curve, and cost play a major role in the selection of a tool for any project. For ITKO LISA or SOA Parasoft there is very limited, or no, learning material available on the Internet; to get resources trained you need to go to the owners of these tools, pay extra, and then pay more if you need the training a second time. This gives SoapUI and SoapUI Pro an additional advantage as the first choice of test architects and test managers for their projects.

Now let's talk about the closely related brothers in this subset: SoapUI and SoapUI Pro both come from the same family, Eviware, which is now SmartBear. However, SoapUI Pro has enriched functionality and a GUI with additional features that help reduce testing time, justifying its cost compared to open source SoapUI.
Following is a quick comparison:

Criteria | SoapUI | SoapUI Pro
Reporting | Very limited, no rich reporting | Reports are available in different formats
XPath Builder | Not available | Available
Data source | Not available | Multiple options for data sources available
Data sink | Not available | Available
XQuery Builder | Not available | Available

The additional functionality available in SoapUI Pro can be achieved in SoapUI using Groovy scripts. To sum up, everything that is offered as UI functionality in SoapUI Pro is achievable with a little effort in SoapUI, which finally makes open source SoapUI the preferred choice for tool pickers.

SoapUI architecture

Before we move on to the architecture, let's take a look at the capabilities of SoapUI and how we can use them for the benefit of our projects. SoapUI provides the following testing features to the test team:

Functional testing (manual)
Functional test automation
Performance testing
Security testing
Web service interoperability testing

Apart from these, SoapUI is also capable of integration with the following:

LoadUI for advanced performance testing
Selenium for multilayer testing
Jenkins for continuous integration
HP QC for end-to-end test automation management and execution

SoapUI has a comparatively simple architecture compared to other tools in the SOA testing world. The following image shows the architecture of SoapUI at an overview level. Let's talk about the architecture in detail:

Test config files: Files that power the tests; this includes test data, expected results, database connection variables, and any other environmental or test-specific details.
Third-party APIs: Third-party APIs help create an optimized test automation framework, for example the JExcel API to integrate with Microsoft Excel and create a data-driven framework.
Selenium: Selenium JARs to be used for UI automation.
SoapUI Runner: This is used to run the SoapUI project and is a very useful utility for test automation, as it allows you to run tests from the command line and acts as a trigger for test automation.
Groovy: This library enables SoapUI to provide its users with Groovy as a scripting language.
Properties: Test request properties hold any dynamically generated data. We also have test properties to configure SSL and other security settings for test requests.
Test report: SoapUI provides a JUnit-style report and uses the Jasper reporting utility for reporting test results.

Test architecture in detail

SoapUI's architecture is composed of many key components that provide users with advanced functionality such as virtualization, XPath, invoking services with JMS endpoints, logging, and debugging. Let's discuss these key components in detail:

Jetty (service virtualization / mock services): We can create replicas of services in cases where the real service is not ready, or too buggy to test against, but we still want to create our test cases; for that we can use service virtualization (mocking) and test against the mock service. Jetty, a Java-based web server provided by Eclipse, is used for hosting the virtual services, and works for both SOAP and REST.
Jasper: An open source reporting tool used to generate reports.
Saxon XSLT and XQuery processor: The Saxon platform provides the option to process service results using XPath and XQuery.
Log4j: Used for logging; it provides the SoapUI, error, HTTP, Jetty, and memory logs.
JDBC drivers: To interact with different databases, we need the respective drivers.
Hermes JMS: Used in applications where a high volume of transactions takes place; it sends messages to the JMS queue and receives results from the JMS queue. We can incorporate Java JMS using Hermes JMS.
Scripting language: We can choose between Groovy and JavaScript, select the language for each project, and set the language at the project property level.
Monitoring: To check what is sent to the service and what is received from the service.
Runners: Execution can be run from the command line without opening SoapUI, using TestRunner, LoadTestRunner, SecurityTestRunner, and MockServiceRunner; runs can also be triggered from build tools like Jenkins.

Test approaches in SOA testing

Approaches to testing SOA architecture are usually based on the scope of the project and the requirements to test. Let's look at an example. Following is a diagram of a three-tier architecture based on SOA:

Validation 1 (V1): Validation of the integration between the presentation layer and the services layer.
Validation 2 (V2): Validation of the integration between the services layer and the service catalogue layer.
Validation 3 (V3): Validation of the integration between the product catalogue layer and the database or backend layer.

So we have three integration points, which makes us understand that we need integration testing along with functional, performance, and security testing. Let's sum up the types of testing that are required to test end-to-end greenfield projects:

Functional testing
Integration testing
Performance testing
Security testing
Automation testing

Functional testing

A web service may expose single or multiple functionalities via operations, and sometimes we need to test a business flow that requires calling multiple services in sequence. This is known as orchestration testing, in which we validate that a particular business flow meets the requirement. Let's see how to configure a SOAP service in SoapUI for functional testing:

1. Open SoapUI by clicking on the launch icon.
2. Click on File in the upper-left corner of the top navigation bar.
3. Click on the New SOAP Project heading in the File menu.
4. Verify that a popup opens which asks for the WSDL or WADL details. There are two ways to provide it: you can pass a URL to the web location of the WSDL, or you can pass a link to a downloaded WSDL on your local system.
5. Enter the project name details and the WSDL location, which can either be on your local machine or be called from a URL, then click on OK.
6. Verify that the WSDL is successfully loaded in SoapUI with all its operations. Now you can see that the service is successfully loaded in the SoapUI workspace.

Now, the first step toward an organized test suite is to create a test suite and relevant test cases. To achieve this, click on the operation request. When you click on Add to TestCase you are asked for the test suite name and then a test case name, and finally you will be presented with a popup where you can create a TestCase and add validations to it at run time. After clicking OK you are ready to start your functional and integration testing. Let's take an example of how to test a simple web service functionally.
Test case: Validate that Search Customer searches for the customer in the system database using an MSISDN (telecom service). Please note that the MSISDN is a unique identifier for the user to be searched for in the database and is a mandatory parameter.

API to be tested: Search Customer. Request body:

<v11:SearchCustomerRequest>
<v11:username>TEST_Agent1</v11:username>
<v11:orgID>COM01</v11:orgID>
<v11:MSISDN>447830735969</v11:MSISDN>

To test it, we pass the mandatory parameters and verify that the response returns the parameters expected to be fetched. By this we validate whether searching for the customer using some search criteria is successful or not. Similarly, in order to test this service from a business point of view, we need to validate it against multiple scenarios. Following is a list of a few of them, considering it's a telecom application's search customer service:

Verify that a prepay customer is successfully searched for using Search Customer
Verify that a post-pay customer is successfully searched for using Search Customer
Verify that agents are successfully searched for using Search Customer
Verify that the results retrieved in the response have the right data
Verify that all the mandatory parameters are present in the response of the service

Here is how the response looks:

Response Search Customer <TBD>

Performance testing

So is it really possible to perform performance testing in SoapUI? The answer is yes, if you just want to do a very simple test on your service itself, not on the orchestration. SoapUI does have limitations when it comes to performance testing, but it does provide functionality to generate load on your web service with different strategies. To start with, once you have created your SoapUI project for a service operation, you can just convert it to a simple load test. Here is how: right-click on the Load Test option, then select the name of the load test; a more relevant one will help you in future runs. You will now see that the load test popup appears and the load test is created. There are several strategies to generate load in SoapUI:

Simple
Burst
Thread
Variance

Security testing

APIs and web services are highly vulnerable to security attacks, and we need to be absolutely sure about the security of the exposed web service, depending on its architecture and the nature of its use. Some of the common attack types include:

Boundary attack
Cross-site scripting
XPath injection
SQL injection
Malformed XML
XML bomb
Malicious attachment

SoapUI's security testing functionality provides scans for every attack type, and also lets you try a custom attack on the service by writing a custom script. The scans provided by SoapUI are:

Boundary scan
Cross-site scripting scan
XPath injection scan
SQL injection scan
Malformed XML scan
XML bomb scan
Malicious attachment scan
Fuzzing scan
Custom script

Following are the steps for configuring a security test in SoapUI: You can see an option for a security test just below the load test in SoapUI. To add a test, right-click on Security Test and select New Security Test. Verify that a popup asking for the name of the security test opens, enter the name of the security test, and click on OK. After that, you should see the security test configuration window opened on the screen.
For the service operation of your test case (in the case of multiple operations in the same test case, you can configure multiple operations in a single security test as well), this pane lets you select and configure scans on your service operations. To add a scan, click on the relevant icon; after selecting the icon, you can choose the scan you want to run against your operation. After that you can configure your scan for the relevant parameter by setting the XPath of the parameter in the request, and then select the Assertions and Strategy tabs from the available options. You are now ready to run your security test with a boundary scan.

Summary

We have now been introduced to the key features of SoapUI. By the end of this article, readers will be familiar with SOA and SOA testing, and will have a basic understanding of functional, load, and security testing in SOA using SoapUI.

Resources for Article:

Further resources on this subject:
Methodology for Modeling Business Processes in SOA [article]
Additional SOA Patterns – Supporting Composition Controllers [article]
Web Services Testing and soapUI [article]


Developing Middleware

Packt
08 Aug 2016
16 min read
In this article by Doug Bierer, author of the book PHP 7 Programming Cookbook, we will cover the following topics:

Authenticating with middleware
Making inter-framework system calls
Using middleware to cross languages

(For more resources related to this topic, see here.)

Introduction

As often happens in the IT industry, terms get invented and then used and abused. The term middleware is no exception. Arguably the first use of the term came out of the Internet Engineering Task Force (IETF) in the year 2000. Originally, the term was applied to any software which operates between the transport (that is, TCP/IP) and the application layer. More recently, especially with the acceptance of PHP Standard Recommendation number 7 (PSR-7), middleware, specifically in the PHP world, has been applied to the web client-server environment.

Authenticating with middleware

One very important usage of middleware is to provide authentication. Most web-based applications need the ability to verify a visitor via username and password. By incorporating PSR-7 standards into an authentication class, you will make it generically useful across the board, so to speak, secure in the knowledge that it can be used in any framework that provides PSR-7-compliant request and response objects.

How to do it…

We begin by defining an Application\Acl\AuthenticateInterface interface. We use this interface to support the Adapter software design pattern, making our Authenticate class more generically useful by allowing a variety of adapters, each of which can draw authentication from a different source (for example, from a file, using OAuth2, and so on). Note the use of the PHP 7 ability to define the return value data type: namespace Application\Acl; use Psr\Http\Message\{ RequestInterface, ResponseInterface }; interface AuthenticateInterface { public function login(RequestInterface $request) : ResponseInterface; } Note that by defining a method that requires a PSR-7-compliant request, and produces a PSR-7-compliant response, we have made this interface universally applicable.

Next, we define the adapter that implements the login() method required by the interface. We make sure to use the appropriate classes, and define fitting constants and properties. The constructor makes use of Application\Database\Connection: namespace Application\Acl; use PDO; use Application\Database\Connection; use Psr\Http\Message\{ RequestInterface, ResponseInterface }; use Application\MiddleWare\{ Response, TextStream }; class DbTable implements AuthenticateInterface { const ERROR_AUTH = 'ERROR: authentication error'; protected $conn; protected $table; public function __construct(Connection $conn, $tableName) { $this->conn = $conn; $this->table = $tableName; }

The core login() method extracts the username and password from the request object. We then do a straightforward database lookup. If there is a match, we store user information in the response body, JSON-encoded: public function login(RequestInterface $request) : ResponseInterface { $code = 401; $info = FALSE; $body = new TextStream(self::ERROR_AUTH); $params = json_decode($request->getBody()->getContents()); $response = new Response(); $username = $params->username ?? FALSE; if ($username) { $sql = 'SELECT * FROM ' . $this->table .
' WHERE email = ?'; $stmt = $this->conn->pdo->prepare($sql); $stmt->execute([$username]); $row = $stmt->fetch(PDO::FETCH_ASSOC); if ($row) { if (password_verify($params->password, $row['password'])) { unset($row['password']); $body = new TextStream(json_encode($row)); $response->withBody($body); $code = 202; $info = $row; } } } return $response->withBody($body)->withStatus($code); } }

Best practice: Never store passwords in clear text. When you need to do a password match, use password_verify(), which negates the need to reproduce the password hash.

The Authenticate class is a wrapper for an adapter class that implements AuthenticateInterface. Accordingly, the constructor takes an adapter class as an argument, as well as a string that serves as the key in which authentication information is stored in $_SESSION: namespace Application\Acl; use Application\MiddleWare\{ Response, TextStream }; use Psr\Http\Message\{ RequestInterface, ResponseInterface }; class Authenticate { const ERROR_AUTH = 'ERROR: invalid token'; const DEFAULT_KEY = 'auth'; protected $adapter; protected $token; public function __construct( AuthenticateInterface $adapter, $key) { $this->key = $key; $this->adapter = $adapter; }

In addition, we provide a login form with a security token, which helps prevent Cross-Site Request Forgery (CSRF) attacks: public function getToken() { $this->token = bin2hex(random_bytes(16)); $_SESSION['token'] = $this->token; return $this->token; } public function matchToken($token) { $sessToken = $_SESSION['token'] ?? date('Ymd'); return ($token == $sessToken); } public function getLoginForm($action = NULL) { $action = ($action) ? 'action="' . $action . '" ' : ''; $output = '<form method="post" ' . $action . '>'; $output .= '<table><tr><th>Username</th><td>'; $output .= '<input type="text" name="username" /></td>'; $output .= '</tr><tr><th>Password</th><td>'; $output .= '<input type="password" name="password" />'; $output .= '</td></tr><tr><th>&nbsp;</th>'; $output .= '<td><input type="submit" /></td>'; $output .= '</tr></table>'; $output .= '<input type="hidden" name="token" value="'; $output .= $this->getToken() . '" />'; $output .= '</form>'; return $output; }

Finally, the login() method in this class checks whether the token is valid. If not, a 400 response is returned. Otherwise, the login() method of the adapter is called: public function login( RequestInterface $request) : ResponseInterface { $params = json_decode($request->getBody()->getContents()); $token = $params->token ?? FALSE; if (!($token && $this->matchToken($token))) { $code = 400; $body = new TextStream(self::ERROR_AUTH); $response = new Response($code, $body); } else { $response = $this->adapter->login($request); } if ($response->getStatusCode() >= 200 && $response->getStatusCode() < 300) { $_SESSION[$this->key] = json_decode($response->getBody()->getContents()); } else { $_SESSION[$this->key] = NULL; } return $response; } }

How it works…

Go ahead and define the classes presented in this recipe, summarized in the following table:

Class | Discussed in these steps
Application\Acl\AuthenticateInterface | 1
Application\Acl\DbTable | 2 - 3
Application\Acl\Authenticate | 4 - 6
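Since password_verify() only succeeds against hashes produced by password_hash(), the customer_09 table needs at least one row seeded that way before a login can work. The following is a minimal seeding sketch: the email and password column names come from the recipe's SQL, while the PDO connection details, the sample credentials, and the assumption that the table accepts a row with just these two columns are illustrative guesses, not part of the book:

<?php
// Seed one test user into customer_09 with a properly hashed password.
// The DSN, database credentials, and sample user below are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=php7cookbook', 'dbuser', 'dbpass');
$email = 'test@example.com';
$hash  = password_hash('password123', PASSWORD_DEFAULT);
$stmt  = $pdo->prepare(
    'INSERT INTO customer_09 (email, password) VALUES (?, ?)');
$stmt->execute([$email, $hash]);
echo 'Seeded ', $email, PHP_EOL;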
You can then define a chap_09_middleware_authenticate.php calling program that sets up autoloading and uses the appropriate classes: <?php session_start(); define('DB_CONFIG_FILE', __DIR__ . '/../config/db.config.php'); define('DB_TABLE', 'customer_09'); define('SESSION_KEY', 'auth'); require __DIR__ . '/../Application/Autoload/Loader.php'; Application\Autoload\Loader::init(__DIR__ . '/..'); use Application\Database\Connection; use Application\Acl\{ DbTable, Authenticate }; use Application\MiddleWare\{ ServerRequest, Request, Constants, TextStream };

You are now in a position to set up the authentication adapter and core class: $conn = new Connection(include DB_CONFIG_FILE); $dbAuth = new DbTable($conn, DB_TABLE); $auth = new Authenticate($dbAuth, SESSION_KEY); Be sure to initialize the incoming request, and set up the request to be made to the authentication class: $incoming = new ServerRequest(); $incoming->initialize(); $outbound = new Request(); Check the incoming class method to see if it is POST. If so, pass a request to the authentication class: if ($incoming->getMethod() == Constants::METHOD_POST) { $body = new TextStream(json_encode( $incoming->getParsedBody())); $response = $auth->login($outbound->withBody($body)); } $action = $incoming->getServerParams()['PHP_SELF']; ?> The display logic looks like this: <?= $auth->getLoginForm($action) ?>

An invalid authentication attempt produces a 401 status code in the response; in that illustration, you could add a var_dump() of the response object to inspect it. A successful authentication returns a 202 status code, with the user information in the response body.
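To see why login() is defined on an interface rather than directly on the Authenticate class, consider a second, hypothetical adapter that authenticates against an in-memory array instead of a database table. The class name and array layout below are assumptions for illustration, but because it implements AuthenticateInterface it drops into the same Authenticate wrapper unchanged:

<?php
namespace Application\Acl;

use Psr\Http\Message\{ RequestInterface, ResponseInterface };
use Application\MiddleWare\{ Response, TextStream };

// Hypothetical adapter: $users maps an email address to a password hash
// produced by password_hash().
class InMemory implements AuthenticateInterface
{
    const ERROR_AUTH = 'ERROR: authentication error';

    protected $users;

    public function __construct(array $users)
    {
        $this->users = $users;
    }

    public function login(RequestInterface $request) : ResponseInterface
    {
        $code     = 401;
        $body     = new TextStream(self::ERROR_AUTH);
        $response = new Response();
        $params   = json_decode($request->getBody()->getContents());
        $username = $params->username ?? FALSE;
        if ($username
            && isset($this->users[$username])
            && password_verify($params->password, $this->users[$username])) {
            $body = new TextStream(json_encode(['email' => $username]));
            $code = 202;
        }
        return $response->withBody($body)->withStatus($code);
    }
}

Swapping it in would then be a one-line change in the calling program, for example $auth = new Authenticate(new InMemory($users), SESSION_KEY);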
Making inter-framework system calls

One of the primary reasons for the development of PSR-7 (and middleware) was a growing need to make calls between frameworks. It is of interest to note that the main documentation for PSR-7 is hosted by the PHP Framework Interop Group (PHP-FIG).

How to do it…

The primary mechanism used in middleware inter-framework calls is to create a driver program that executes framework calls in succession, maintaining a common request and response object. The request and response objects are expected to represent Psr\Http\Message\ServerRequestInterface and Psr\Http\Message\ResponseInterface respectively. For the purposes of this illustration, we define a middleware session validator. The constants and properties reflect the session thumbprint, which is a term we use to incorporate factors such as the website visitor's IP address, browser, and language settings: namespace Application\MiddleWare\Session; use InvalidArgumentException; use Psr\Http\Message\{ ServerRequestInterface, ResponseInterface }; use Application\MiddleWare\{ Constants, Response, TextStream }; class Validator { const KEY_TEXT = 'text'; const KEY_SESSION = 'thumbprint'; const KEY_STATUS_CODE = 'code'; const KEY_STATUS_REASON = 'reason'; const KEY_STOP_TIME = 'stop_time'; const ERROR_TIME = 'ERROR: session has exceeded stop time'; const ERROR_SESSION = 'ERROR: thumbprint does not match'; const SUCCESS_SESSION = 'SUCCESS: session validates OK'; protected $sessionKey; protected $currentPrint; protected $storedPrint; protected $currentTime; protected $storedTime;

The constructor takes a ServerRequestInterface instance and the session as arguments. If the session is an array (such as $_SESSION), we wrap it in a class. The reason we do this is in case we are passed a session object, such as JSession used in Joomla. We then create the thumbprint using the factors previously mentioned. If the stored thumbprint is not available, we assume this is the first time, and store the current print as well as the stop time, if this parameter is set. We used md5() because it's a fast hash, is not exposed externally, and is therefore useful to this application: public function __construct( ServerRequestInterface $request, $stopTime = NULL) { $this->currentTime = time(); $this->storedTime = $_SESSION[self::KEY_STOP_TIME] ?? 0; $this->currentPrint = md5($request->getServerParams()['REMOTE_ADDR'] . $request->getServerParams()['HTTP_USER_AGENT'] . $request->getServerParams()['HTTP_ACCEPT_LANGUAGE']); $this->storedPrint = $_SESSION[self::KEY_SESSION] ?? NULL; if (empty($this->storedPrint)) { $this->storedPrint = $this->currentPrint; $_SESSION[self::KEY_SESSION] = $this->storedPrint; if ($stopTime) { $this->storedTime = $stopTime; $_SESSION[self::KEY_STOP_TIME] = $stopTime; } } }

It's not required to define __invoke(), but this magic method is quite convenient for standalone middleware classes. As is the convention, we accept ServerRequestInterface and ResponseInterface instances as arguments. In this method we simply check to see if the current thumbprint matches the one stored. The first time, of course, they will match. But on subsequent requests, the chances are that an attacker intent on session hijacking will be caught out. In addition, if the session time exceeds the stop time (if set), likewise, a 401 code will be sent: public function __invoke( ServerRequestInterface $request, Response $response) { $code = 401; // unauthorized if ($this->currentPrint != $this->storedPrint) { $text[self::KEY_TEXT] = self::ERROR_SESSION; $text[self::KEY_STATUS_REASON] = Constants::STATUS_CODES[401]; } elseif ($this->storedTime) { if ($this->currentTime > $this->storedTime) { $text[self::KEY_TEXT] = self::ERROR_TIME; $text[self::KEY_STATUS_REASON] = Constants::STATUS_CODES[401]; } else { $code = 200; // success } } if ($code == 200) { $text[self::KEY_TEXT] = self::SUCCESS_SESSION; $text[self::KEY_STATUS_REASON] = Constants::STATUS_CODES[200]; } $text[self::KEY_STATUS_CODE] = $code; $body = new TextStream(json_encode($text)); return $response->withStatus($code)->withBody($body); }

We can now put our new middleware class to use. The main problems with inter-framework calls, at least at this point, are summarized here; accordingly, how we implement middleware depends heavily on the last point:

Not all PHP frameworks are PSR-7-compliant
Existing PSR-7 implementations are not complete
All frameworks want to be the "boss"

As an example, have a look at the configuration files for Zend Expressive, which is a self-proclaimed PSR-7 Middleware Microframework. Here is the file middleware-pipeline.global.php, which is located in the config/autoload folder of a standard Expressive application. The dependencies key is used to identify middleware wrapper classes that will be activated in the pipeline: <?php use Zend\Expressive\Container\ApplicationFactory; use Zend\Expressive\Helper; return [ 'dependencies' => [ 'factories' => [ Helper\ServerUrlMiddleware::class => Helper\ServerUrlMiddlewareFactory::class, Helper\UrlHelperMiddleware::class => Helper\UrlHelperMiddlewareFactory::class, // insert your own class here ], ], Under the middleware_pipeline key you can identify classes that will be executed before or after the routing process occurs.
Optional parameters include path, error, and priority: 'middleware_pipeline' => [ 'always' => [ 'middleware' => [ Helper\ServerUrlMiddleware::class, ], 'priority' => 10000, ], 'routing' => [ 'middleware' => [ ApplicationFactory::ROUTING_MIDDLEWARE, Helper\UrlHelperMiddleware::class, // insert reference to middleware here ApplicationFactory::DISPATCH_MIDDLEWARE, ], 'priority' => 1, ], 'error' => [ 'middleware' => [ // Add error middleware here. ], 'error' => true, 'priority' => -10000, ], ], ];

Another technique is to modify the source code of an existing framework module and make a request to a PSR-7-compliant middleware application. Here is an example modifying a Joomla! installation to include a middleware session validator. Add this code at the end of the index.php file in the /path/to/joomla folder. Since Joomla! uses Composer, we can leverage the Composer autoloader: session_start(); // to support use of $_SESSION $loader = include __DIR__ . '/libraries/vendor/autoload.php'; $loader->add('Application', __DIR__ . '/libraries/vendor'); $loader->add('Psr', __DIR__ . '/libraries/vendor'); We can then create an instance of our middleware session validator, and make a validation request just before $app = JFactory::getApplication('site');: $session = JFactory::getSession(); $request = (new Application\MiddleWare\ServerRequest())->initialize(); $response = new Application\MiddleWare\Response(); $validator = new Application\Security\Session\Validator( $request, $session); $response = $validator($request, $response); if ($response->getStatusCode() != 200) { // take some action }

How it works…

First, create the Application\MiddleWare\Session\Validator test middleware class described in steps 2 - 5. Then you will need to go to getcomposer.org and follow the directions to obtain Composer. Next, build a basic Zend Expressive application, as shown next. Be sure to select No when prompted for a minimal skeleton: cd /path/to/source/for/this/chapter php composer.phar create-project zendframework/zend-expressive-skeleton expressive This will create a folder /path/to/source/for/this/chapter/expressive. Change to this directory. Modify public/index.php as follows: <?php if (php_sapi_name() === 'cli-server' && is_file(__DIR__ . parse_url( $_SERVER['REQUEST_URI'], PHP_URL_PATH)) ) { return false; } chdir(dirname(__DIR__)); session_start(); $_SESSION['time'] = time(); $appDir = realpath(__DIR__ . '/../../..'); $loader = require 'vendor/autoload.php'; $loader->add('Application', $appDir); $container = require 'config/container.php'; $app = $container->get(Zend\Expressive\Application::class); $app->run();

You will then need to create a wrapper class that invokes our session validator middleware. Create a SessionValidateAction.php file that needs to go in the /path/to/source/for/this/chapter/expressive/src/App/Action folder. For the purposes of this illustration, set the stop time parameter to a short duration.
In this case, time() + 10 gives you 10 seconds: namespace App\Action; use Application\MiddleWare\Session\Validator; use Zend\Diactoros\{ Request, Response }; use Psr\Http\Message\ResponseInterface; use Psr\Http\Message\ServerRequestInterface; class SessionValidateAction { public function __invoke(ServerRequestInterface $request, ResponseInterface $response, callable $next = null) { $inbound = new Response(); $validator = new Validator($request, time()+10); $inbound = $validator($request, $response); if ($inbound->getStatusCode() != 200) { session_destroy(); setcookie('PHPSESSID', 0, time()-300); $params = json_decode( $inbound->getBody()->getContents(), TRUE); echo '<h1>',$params[Validator::KEY_TEXT],'</h1>'; echo '<pre>',var_dump($inbound),'</pre>'; exit; } return $next($request,$response); } }

You will now need to add the new class to the middleware pipeline. Modify config/autoload/middleware-pipeline.global.php as follows; the additions are the invokables entry and the App\Action\SessionValidateAction::class line in the routing pipeline: <?php use Zend\Expressive\Container\ApplicationFactory; use Zend\Expressive\Helper; return [ 'dependencies' => [ 'invokables' => [ App\Action\SessionValidateAction::class => App\Action\SessionValidateAction::class, ], 'factories' => [ Helper\ServerUrlMiddleware::class => Helper\ServerUrlMiddlewareFactory::class, Helper\UrlHelperMiddleware::class => Helper\UrlHelperMiddlewareFactory::class, ], ], 'middleware_pipeline' => [ 'always' => [ 'middleware' => [ Helper\ServerUrlMiddleware::class, ], 'priority' => 10000, ], 'routing' => [ 'middleware' => [ ApplicationFactory::ROUTING_MIDDLEWARE, Helper\UrlHelperMiddleware::class, App\Action\SessionValidateAction::class, ApplicationFactory::DISPATCH_MIDDLEWARE, ], 'priority' => 1, ], 'error' => [ 'middleware' => [ // Add error middleware here. ], 'error' => true, 'priority' => -10000, ], ], ];

You might also consider modifying the home page template to show the status of $_SESSION. The file in question is /path/to/source/for/this/chapter/expressive/templates/app/home-page.phtml. Simply adding var_dump($_SESSION) should suffice. Initially, you should see the session validate; after 10 seconds, refresh the browser and you should see the session rejected.

Using middleware to cross languages

Except in cases where you are trying to communicate between different versions of PHP, PSR-7 middleware will be of minimal use. Recall what the acronym stands for: PHP Standards Recommendations. Accordingly, if you need to make a request to an application written in another language, treat it as you would any other web service HTTP request.

How to do it…

In the case of PHP 4, you actually have a chance, in that there was limited support for object-oriented programming. There is not enough space to cover all the changes, but we present a potential PHP 4 version of Application\MiddleWare\ServerRequest. The first thing to note is that there are no namespaces! Accordingly, we use a class name with underscores, _, in place of namespace separators: class Application_MiddleWare_ServerRequest extends Application_MiddleWare_Request implements Psr_Http_Message_ServerRequestInterface { All properties are identified in PHP 4 using the keyword var: var $serverParams; var $cookies; var $queryParams; // not all properties are shown The initialize() method is almost the same, except that syntax such as $this->getServerParams()['REQUEST_URI'] was not allowed in PHP 4.
Accordingly, we need to split this out into a separate variable: function initialize() { $params = $this->getServerParams(); $this->getCookieParams(); $this->getQueryParams(); $this->getUploadedFiles(); $this->getRequestMethod(); $this->getContentType(); $this->getParsedBody(); return $this->withRequestTarget($params['REQUEST_URI']); } All of the $_XXX super-globals were present in later versions of PHP 4: function getServerParams() { if (!$this->serverParams) { $this->serverParams = $_SERVER; } return $this->serverParams; } // not all getXXX() methods are shown to conserve space

The null coalesce operator was only introduced in PHP 7. We need to use isset(XXX) ? XXX : ''; instead: function getRequestMethod() { $params = $this->getServerParams(); $method = isset($params['REQUEST_METHOD']) ? $params['REQUEST_METHOD'] : ''; $this->method = strtolower($method); return $this->method; }

The JSON extension was not introduced until PHP 5. Accordingly, we need to be satisfied with raw input. We could also possibly use serialize() or unserialize() in place of json_encode() and json_decode(): function getParsedBody() { if (!$this->parsedBody) { if (($this->getContentType() == Constants::CONTENT_TYPE_FORM_ENCODED || $this->getContentType() == Constants::CONTENT_TYPE_MULTI_FORM) && $this->getRequestMethod() == Constants::METHOD_POST) { $this->parsedBody = $_POST; } elseif ($this->getContentType() == Constants::CONTENT_TYPE_JSON || $this->getContentType() == Constants::CONTENT_TYPE_HAL_JSON) { ini_set("allow_url_fopen", true); $this->parsedBody = file_get_contents('php://stdin'); } elseif (!empty($_REQUEST)) { $this->parsedBody = $_REQUEST; } else { ini_set("allow_url_fopen", true); $this->parsedBody = file_get_contents('php://stdin'); } } return $this->parsedBody; }

The withXXX() methods work pretty much the same in PHP 4: function withParsedBody($data) { $this->parsedBody = $data; return $this; } Likewise, the withoutXXX() methods work the same as well: function withoutAttribute($name) { if (isset($this->attributes[$name])) { unset($this->attributes[$name]); } return $this; } }

For websites using other languages, we could use the PSR-7 classes to formulate requests and responses, but we would then need to use an HTTP client to communicate with the other website. Here is an example: $request = new Request( TARGET_WEBSITE_URL, Constants::METHOD_POST, new TextStream($contents), [Constants::HEADER_CONTENT_TYPE => Constants::CONTENT_TYPE_FORM_ENCODED, Constants::HEADER_CONTENT_LENGTH => $body->getSize()] ); $data = http_build_query(['data' => $request->getBody()->getContents()]); $defaults = array( CURLOPT_URL => $request->getUri()->getUriString(), CURLOPT_POST => true, CURLOPT_POSTFIELDS => $data, ); $ch = curl_init(); curl_setopt_array($ch, $defaults); $response = curl_exec($ch); curl_close($ch);

Summary

In this article, we learned about providing authentication to a system, making calls between frameworks, and making a request to an application written in another language.

Resources for Article:

Further resources on this subject:
Middleware [Article]
Building a Web Application with PHP and MariaDB – Introduction to caching [Article]
Data Tables and DataTables Plugin in jQuery 1.3 with PHP [Article]

Tiered Application Architecture with Docker Compose, Part 3

Darwin Corn
08 Aug 2016
6 min read
This is the third part in a series that introduces you to basic web application containerization and deployment principles. If you're new to the topic, I suggest reading Part 1 and Part 2 . In this post, I attempt to take the training wheels off, and focus on using Docker Compose. Speaking of training wheels, I rode my bike with training wheels until I was six or seven. So in the interest of full disclosure, I have to admit that to a certain degree I'm still riding the containerization wave with my training wheels on. That's not to say I’m not fully using container technology. Before transitioning to the cloud, I had a private registry running on a Git server that my build scripts pushed to and pulled from to automate deployments. Now, we deploy and maintain containers in much the same way as I've detailed in the first two Parts in this series, and I take advantage of the built-in registry covered in Part 2 of this series. Either way, our use case multi-tiered application architecture was just overkill. Adding to that, when we were still doing contract work, Docker was just getting 1.6 off the ground. Now that I'm working on a couple of projects where this will be a necessity, I'm thankful that Docker has expanded their offerings to include tools like Compose, Machine and Swarm. This post will provide a brief overview of a multi-tiered application setup with Docker Compose, so look for future posts to deal with the latter two. Of course, you can just hold out for a mature Kitematic and do it all in a GUI, but you probably won't be reading this post if that applies to you. All three of these Docker extensions are relatively new, and so the entirety of this post is subject to a huge disclaimer that even Docker hasn't fully developed these extensions to be production-ready for large or intricate deployments. If you're looking to do that, you're best off holding out for my post on alternative deployment options like CoreOS and Kubernetes. But that's beyond the scope of what we're looking at here, so let's get started. First, you need to install the binary. Since this is part 3, I'm going to assume that you have the Docker Engine already installed somewhere. If you're on Mac or Windows, the Docker Toolbox you used to install it also contained an option to install Compose. I'm going to assume your daily driver is a Linux box, so these instructions are for Linux. Fortunately, the installation should just be a couple of commands--curling it from the web and making it executable: # curl -L https://github.com/docker/compose/releases/download/1.6.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose # chmod +x /usr/local/bin/docker-compose # docker-compose -v That last command should output version info if you've installed it correctly. For some reason, the linked installation doc thinks you can run that chmod as a regular user. I'm not sure of any distro that lets regular users write to /usr/local/bin, so I ran them both as root. Docker has its own security issues that are beyond the scope of this series, but I suggest reading about them if you're using this in production. My lazy way around it is to run every Docker-related command elevated, and I'm sure someone will let me have it for that in the comments. Seems like a better policy than making /usr/local/bin writeable by anyone other than root. Now that you have Compose installed, let's look at how to use it to coordinate and deploy a layered application. 
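Before looking at a real project, it helps to see the shape of the file Compose works from. The following is a minimal, hypothetical three-tier docker-compose.yml; the service names, images, and ports are made up for illustration and are not the configuration used below:

    db:
      image: postgres:9.5
      environment:
        POSTGRES_USER: appuser
        POSTGRES_PASSWORD: secret
        POSTGRES_DB: appdb

    backend:
      build: ./backend
      links:
        - db
      environment:
        DATABASE_HOST: db

    frontend:
      build: ./frontend
      links:
        - backend
      ports:
        - "8080:80"

Compose reads the links to work out start order, so the database comes up before the backend and the backend before the frontend, and only the frontend's port is published to the host. The real project below follows the same pattern with more moving parts.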
I'm abandoning my sample music player of the previous two posts in favor of something that's already separated its functionality, namely the Taiga project. If you're not familiar, it's a slick flat JIRA-killer, and the best part is that it's open source with a thorough installation guide. I've done the heavy lifting, so all you have to do is clone the docker-taiga repo into wherever you keep your source code and get to Composin'. $ git clone https://github.com/ndarwincorn/docker-taiga.git $ cd docker-taiga You'll notice a few things. In the root of the app, there's an .envfile where you can set all the environmental variables in one place. Next, there are two folders with taiga- prefixes. They correspond to the layers of the application, from the Angular frontend to the websocket and backend Django server. Each contains a Dockerfile for building the container, as well as relevant configuration files. There's also a docker-entrypoint-initdb.d folder that contains a shell script that creates the Taiga database when the postgres container is built. Having covered container creation in part 1, I'm more concerned with the YAML file in the root of the application, docker-compose.yml. This file coordinates the container/image creation for the application, and full reference can be found on Docker's website. Long story short, the compose YAML file gives the containers a creation order (databases, backend/websocket, frontend) and links them together, so that ports exposed in each container don't need to be published to the host machine. So, from the root of the application, let's run a # docker-compose up and see what happens. Provided there are no errors, you should be able to navigate to localhost:8080 and see your new Taiga deployment! You should be able to log in with the admin user and password 123123. Of course, there's much more to do--configure automated e-mails, link it to your Github organization, configure TLS. I'll leave that as an exercise for you. For now, enjoy your brand-new layered project management application. Of course, if you're deploying such an application for an organization, you don't want all your eggs in one basket. The next two parts in the series will deal with leveraging Docker tools and alternatives to deploy the application in a clustered, high-availability setup. About the Author Darwin Corn is a systems analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the information technology world.

Saying Hello to Java EE

Packt
03 Aug 2016
27 min read
To develop a scalable, distributed, well-presented, complex, and multi-layered enterprise application is complicated. The development becomes even worse if the developer is not well aware of the software development fundamentals. Instead of looking at a bigger scenario, if we cut it down into parts and later combine them, it becomes easy for understanding as well as for developing. Each technology has some basics which we cannot overlook. Rather, if we overlook them, it will be the biggest mistake; the same is applicable to Java EE. In this article by TejaswiniMandar Jog, author of the book Learning Modular Java Programming, we are going to explore the following: Java EE technologies Why servlet and JSP? Introduction to Spring MVC Creating a sample application through Spring MVC (For more resources related to this topic, see here.) The enterprise as an application To withstand the high, competitive, and daily increasing requirements, it's becoming more and more difficult nowadays to develop an enterprise application. The difficulty is due to more than one kind of service, requirement of application to be robust and should support concurrency, security, and many more. Along with these things, enterprise applications should provide an easy user interface but good look and feel for different users. In the last article, we discussed enterprise applications. The discussion was more over understanding the terminology or the aspect. Let's now discuss it in terms of development, and what developers look forward to: The very first thing even before starting the development is: what we are we developing and why? Yes, as a developer we need to understand the requirements or the expectations from the application. Developers have to develop an application which will meet the requirements. The application should be efficient and with high quality so as to sustain in the market. The application code should be reliable and bug-free to avoid runtime problems. No application is perfect; it's a continuous process to update it for new demands. Develop an application in such a way that it is easy to update. To meet high expectations, developers write code which becomes complicated to understand as well as to change. Each one of us wants to have a new and different product, different from what is on the market. To achieve this, designers make an over-clumsy design which is not easy to change in the future. Try to avoid over-complexity both in design and business logic. When development starts, developers look forward to providing a solution, but they have to give thought to what they are developing and how the code will be organized in terms of easy maintenance and future extension. Yes, we are thinking about modules which are doing a defined task and those which are less dependent. Try to write a module which will be loosely coupled and highly cohesive. Today we are using enterprise applications through different browsers, such as Internet Explorer, Mozilla, or Firefox. We are even using mobile browsers for the same task. This demands an application that has been developed to withstand the number of platforms and browsers. Going through all this discussion, many technologies come to mind. We will go through one such platform which covers the maximum of the above requirements: the Java Enterprise Edition (Java EE) platform. Let's dive in and explore it!! The Java EE platform Sun Microsystems released the Java EE platform in 2000, which was formerly known as the J2EE specification. 
It defines the standards for developing component-based enterprise applications easily. The concrete implementation is provided by application servers such as Weblogic and GlassFish, and servlet containers such as Tomcat. Today we have Java EE 8 on the market. Features of the Java EE platform The following are the various features of the Java EE platform: Platform independency: Different types of information which the user needs in day-to-day life is spread all over the network on a wide range of platforms. Java EE is well adapted to support, and use this widely spread multiple format information on different platforms easily. Modularity: The development of enterprise applications is complex and needs to be well organized. The complexity of the application can be reduced by dividing it into different, small modules which perform individual tasks, which allows for easy maintenance and testing. They can be organized in separate layers or tiers. These modules interact with each other to perform a business logic. Reusability: Enterprise applications need frequent updates to match up client requirements. Inheritance, the fundamental aspect of an object-oriented approach, offers reusability of the components with the help of functions. Java EE offers modularity which can be used individually whenever required. Scalability: To meet the demands of the growing market, the enterprise application should keep on providing new functionalities to the users. In order to provide these new functionalities, the developers have to change the application. They may add new modules or make changes in already existing ones. Java EE offers well-managed modules which make scalability easy. The technologies used in Java EE are as follows: Java servlet Java Server Pages Enterprise Java Bean Java Messaging API XML Java Transaction API Java Mail Web Services The world of dotcoms In the 1990s, many people started using computers for a number of reasons. For personal use, it was really good. When it came to enterprise use, it was helpful to speed up the work. But one main drawback was; how to share files, data or information? The computers were in a network but if someone wanted to access the data from any computer then they had to access it personally. Sometimes, they had to learn the programs on that computer, which is not only very time-consuming but also unending. What if we can use the existing network to share the data remotely?? It was a thought put forward by a British computer scientist, Sir Tim Berners-Lee. He thought of a way to share the data through the network by exploring an emerging technology called hypertext. In October 1990, Tim wrote three technologies to fulfill sharing using Hyper Text Markup Language (HTML), Uniform Resource Identifier (URI), and Hyper Text Transfer Protocol (HTTP): HTML is a computer language which is used in website creation. Hypertext facilitates clicking on a link to navigate on the Internet. Markups are HTML tags defining what to do with the text they contain. URIs defines a resource by location or name of resource, or both. URIs generally refer to a text document or images. HTTP is the set of rules for transferring the files on the Web. HTTP runs on the top of TCP/IP. He also wrote the first web page browser (World Wide Webapp) and the first web server (HTTP). The web server is where the application is hosted. This opened the doors to the new amazing world of the "dotcom". This was just the beginning and many more technologies have been added to make the Web more realistic. 
Using HTTP and HTML, people were able to browse files and get content from remote servers. A little user interaction or dynamic behavior was only possible through JavaScript. People were using the Web but were not satisfied; they needed something more: something that could generate output in a totally dynamic way, perhaps displaying data obtained from a data store, and something that could manipulate user input and display the results in the browser accordingly. An early answer was the Common Gateway Interface (CGI). A CGI script was a small external program that could manipulate data on the server side and produce a result. When a user made a request, the server forwarded it to the CGI program, which ran as a separate, external process. We got dynamic output, but with two drawbacks:

Each time the CGI script was called, a new process was created. With a huge number of hits to the server, CGI became a performance hazard.
Being an external program, CGI was not capable of taking advantage of the server's abilities.

To add dynamic content that overcomes these drawbacks and replaces CGI, the servlet was developed by Sun in June 1997.

Servlet – the dynamicity

Servlets are Java programs, hosted on a server, that generate dynamic output to be displayed in the browser. These servers are normally called servlet containers or web servers. The containers are responsible for managing the lifecycle of the servlets, and servlets can take advantage of the capabilities of the server. A single instance of a servlet handles multiple requests through multithreading, which enhances the performance of the application. Let's discuss servlets in depth to understand them better.

A servlet is capable of handling the request (input) from the user and generating the response (output) as HTML, dynamically. To create a servlet, we write a class that extends GenericServlet or HttpServlet. These classes provide the service() method to handle request and response. The server manages the lifecycle of a servlet as follows:

The servlet is loaded by the server on arrival of a request.
The instance is created.
init() is invoked to do the initialization.

The preceding steps are performed only once in the lifecycle of the servlet, as long as the servlet has not been destroyed. After initialization, a separate thread is created by the server for each request, and the request and response objects are passed to the servlet thread. The server calls the service() method, which generates the dynamic page and binds it to the HttpServletResponse object. Once the response is sent back to the user, the thread is deallocated.

From the preceding steps, it is pretty clear that the servlet is responsible for:

Reading the user input
Manipulating the received input
Generating the response

A good developer always keeps the rule of thumb in mind that a module should not have more than one responsibility, but here the servlet is doing much more, which makes the code hard to test; that is the first problem, and we will come back to its solution. The second issue is response generation: we cannot neglect how hard it is to produce a well-designed page with a nice look and feel from inside a servlet. That means a programmer has to pick up design skills as well, but why should a servlet be responsible for presentation?
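To see the problem in one place, here is a minimal, hypothetical servlet; the class and parameter names are illustrative and not taken from the article's sample project. Reading the input, the business logic, and the HTML presentation all live in the same method:

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical example, not from the article's project: one class does everything
    public class GreetingServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // 1. Read the user input
            String name = request.getParameter("name");
            // 2. Manipulate the received input (the business logic)
            String message = "Hello " + name;
            // 3. Generate the response, so presentation lives in the same class
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<html><body>");
            out.println("<h1>" + message + "</h1>");
            out.println("</body></html>");
        }
    }

It works, but testing the logic in step 2 means driving the whole HTTP machinery, and changing the look of the page means editing and recompiling Java code, which is exactly the separation problem the next section addresses.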
The basic thought of taking presentation out of the servlet leads to Java Server Page (JSP). JSP solves the issue of using highly designed HTML pages. JSP provides the facility of using all HTML tags as well as writing logical code using Java. The designers can create well-designed pages using HTML, where programmers can add code using scriptlet, expression, declaration, or directives. Even standard actions like useBean can be used to take advantage of Java Beans. These JSP's now get transformed, compiled into the servlet by the servers. Now we have three components: Controller, which handles request and response Model, which holds data acquired from handling business logic View, which does the presentation Combining these three we have come across a design pattern—Model-View-Controller (MVC). Using MVC design patterns, we are trying to write modules which have a clear separation of work. These modules can be upgradable for future enhancement. These modules can be easily tested as they are less dependent on other modules. The discussion of MVC is incomplete without knowing two architectural flavors of it: MVC I architecture MVC II architecture MVC I architecture In this model, the web application development is page-centric around JSP pages. In MVC I, JSP performs the functionalities of handling a request and response and manipulating the input, as well as producing the output alone. In such web applications, we find a number of JSP pages, each one of them performing different functionalities. MVC I architecture is good for small web applications where less complexity and maintaining the flow is easy. The JSP performs the dual task of business logic and presentation together, which makes it unsuitable for enterprise applications. MVC I architecture MVC II architecture In MVC II, a more powerful model has been put forward to give a solution to enterprise applications with a clear separation of work. It comprises two components: one is the controller and other the view, as compared to MVC I where view and controller is JSP (view). The servlets are responsible for maintaining the flow (the controller) and JSP to present the data (the view). In MVC II, it's easy for developers to develop business logic- the modules which are reusable. MVC II is more flexible due to responsibility separation. MVC II architecture The practical aspect We have traveled a long way. So, instead of moving ahead, let's first develop a web application to accept data from the user and display that using MVC II architecture. We need to perform the following steps: Create a dynamic web application using the name Ch02_HelloJavaEE. Find the servlet-api.jar file from your tomcat/lib folder. Add servlet-api.jar to the lib folder. Create index.jsp containing the form which will accept data from the user. Create a servlet with the name HelloWorldServlet in the com.packt.ch02.servlets package. Declare the method doGet(HttpServletRequestreq,HttpServletResponsers) to perform the following task: Read the request data using the HttpServletRequest object. Set the MIME type. Get an object of PrintWriter. Perform the business logic. Bind the result to the session, application or request scope. Create the view with name hello.jsp under the folder jsps. Configure the servlet in deployment descriptor (DD) for the URL pattern. Use expression language or Java Tag Library to display the model in the JSP page. Let's develop the code. 
The filesystem for the project is shown in the following screenshot: We have created a web application and added the JARs. Let's now add index.jsp to accept the data from the user: <form action="HelloWorldServlet"> <tr> <td>NAME:</td> <td><input type="text" name="name"></td> </tr> <tr> <td></td> <td><input type="submit" value="ENTER"></td> </tr> </form> When the user submits the form, the request will be sent to the URL HelloWorldServlet. Let's create the HelloWorldServlet which will get invoked for the above URL, which will have doGet(). Create a model with the name message, which we will display in the view. It is time to forward the request with the help of the RequestDispatcher object. It will be done as follows: protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {     // TODO Auto-generated method stub     //read the request parameter     String name=request.getParameter("name");     //get the writer PrintWriter writer=response.getWriter();       //set the MIME type response.setContentType("text/html");       // create a model and set it to the scope of request request.setAttribute("message","Hello "+name +" From JAVA Enterprise"); RequestDispatcher dispatcher=request.getRequestDispatcher("jsps/hello.jsp"); dispatcher.forward(request, response);     } Now create the page hello.jsp under the folder jsps to display the model message as follows: <h2>${message }</h2> The final step is to configure the servlet which we just have created in DD. The configuration is made for the URL HelloWorldServlet as follows: <servlet> <servlet-name>HelloWorldServlet</servlet-name> <servlet-class>com.packt.ch02.servlets.HelloWorldServlet </servlet-class> </servlet> <servlet-mapping> <servlet-name>HelloWorldServlet</servlet-name> <url-pattern>/HelloWorldServlet</url-pattern></servlet-mapping> Let's deploy the application to check the output: Displaying the home page for a J2EE application The following screenshot shows the output when a name is entered by the user: Showing the output when a name is entered by the user After developing the above application, we now have a sound knowledge of how web development happens, how to manage the flow, and how navigation happens. We can observe one more thing: that whether it's searching data, adding data, or any other kind of operation, there are certain steps which are common, as follows: Reading the request data Binding this data to a domain object in terms of model data Sending the response We need to perform one or more of the above steps as per the business requirement. Obviously, by only performing the above steps, we will not be able to achieve the end effect but there is no alternative. Let's discuss an example. We want to manage our contact list. We want to have the facilities for adding a new contact, updating a contact, searching one or many contacts, and deleting a contact. The required data will be taken from the user by asking them to fill in a form. Then the data will be persisted in the database. Here, for example, we just want to insert the record in the database. We have to start the coding from reading request data, binding it to an object and then our business operation. The programmers have to unnecessarily repeat these steps. Can't they get rid of them? Is it possible to automate this process?? This is the perfect time to discuss frameworks. What is a framework? A framework is software which gives generalized solutions to common tasks which occur in application development. 
It provides a platform which can be used by the developers to build up their application elegantly. Advantages of frameworks The advantages of using frameworks are as follows: Faster development Easy binding of request data to a domain object Predefined solutions Validations framework In December 1996, Sun Microsystems published a specification for JavaBean. This specification was about the rules, using which developers can develop reusable, less complex Java components. These POJO classes are now going to be used as a basis for developing a lightweight, less complex, flexible framework: the Spring framework. This framework is from the thoughts of Rod Johnson in February 2003. The Spring framework consists of seven modules: Spring modules Though Spring consists of several modules, the developer doesn't have to be always dependent on the framework. They can use any module as per the requirement. It's not even compulsory to develop the code which has been dependent upon Spring API. It is called a non-intrusive framework. Spring works on the basis of dependency injection (DI), which makes it easy for integration. Each class which the developer develops has some dependencies. Take the example of JDBC: to obtain a connection, the developer needs to provide URL, username, and password values. Obtaining the connection is dependent on these values so we can call them dependencies, and injection of these dependencies in objects is called DI. This makes the emerging spring framework the top choice for the middle tier or business tier in enterprise applications. Spring MVC The spring MVC module is a choice when we look forward for developing web applications. The spring MVC helps to simplify development to develop a robust application. This module can also be used to leave common concerns such as reading request data, data binding to domain object, server-side validation and page rendering to the framework and will concentrate on business logic processes. That's what, as a developer we were looking for. The spring MVC can be integrated with technologies such as Velocity, Freemarker, Excel, and PDF. They can even take advantage of other services such as aspect-oriented programming for cross-cutting technologies, transaction management, and security provided by the framework. The components Let's first try to understand the flow of normal web applications in view of the Spring framework so that it will be easy to discuss the component and all other details: On hitting the URL, the web page will be displayed in the browser. The user will fill in the form and submit it. The front controller intercepts the request. The front controller tries to find the Spring MVC controller and pass the request to it. Business logic will be executed and the generated result is bound to the ModelAndView. The ModelAndView will be sent back to the front controller. The front controller, with the help of ViewResolver, will discover the view, bind the data and send it to the browser. Spring MVC The front controller As already seen in servlet JSP to maintain each flow of the application the developer will develop the servlet and data model from servlet will be forwarded to JSP using attributes. There is no single servlet to maintain the application flow completely. This drawback has been overcome in Spring MVC as it depends on the front controller design pattern. In the front controller design pattern, there will be a single entry point to the application. 
Whatever URLs are hit by the client, it will be handled by a single piece of the code and then it will delegate the request to the other objects in the application. In Spring MVC, the DispatcherServlet acts as front controller. DispatcherServlet takes the decision about which Spring MVC controller the request will be delegated to. In the case of a single Spring MVC controller in the application, the decision is quite easy. But we know in enterprise applications, there are going to be multiple Spring MVC controllers. Here, the front controller needs help to find the correct Spring MVC controller. The helping hand is the configuration file, where the information to discover the Spring MVC controller is configured using handler mapping. Once the Spring MVC controller is found, the front controller will delegate the request to it. Spring MVC controller All processes, such as the actual business logic, decision making or manipulation of data, happen in the Spring MVC controller. Once this module completes the operation, it will send the view and the model encapsulated in the object normally in the form of ModelAndView to the front controller. The front controller will further resolve the location of the view. The module which helps front controller to obtain the view information is ViewResolver. ModelAndView The object which holds information about the model and view is called as ModelAndView. The model represents the piece of information used by the view for display in the browser of different formats. ViewResolver The Spring MVC controller returns ModelAndView to the front controller. The ViewResolver interface helps to map the logical view name to the actual view. In web applications, data can be displayed in a number of formats, from as simple as JSP to complicated formats like JasperReport. Spring provides InternalResourceViewResolver, JspViewResolver, JasperReportsViewResolver, VelocityLayoutViewResolver, and so on, to support different view formats. The configuration file DispatcherServlet needs to discover information about the Spring MVC controller, ViewResolver, and many more. All this information is centrally configured in a file named XXX-servlet.xml where XXX is the name of the front controller. Sometimes the beans will be distributed across multiple configuration files. In this case, extra configuration has to be made, which we will see later in this article. The basic configuration file will be: <beans xsi_schemaLocation="    http://www.springframework.org/schema/beans        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <!—mapping of the controller --> <!—bean to be configured here for view resolver  - -> </beans> The controller configuration file will be named name_of_servlet-servlet.xml. In our project, we will name this HelloWeb-servlet.xml. Let's do the basics of a web application using Spring MVC to accept the data and display it. We need to perform the following steps: Create a web application named Ch02_HelloWorld. Add the required JAR files for Spring (as shown in the following screenshot) and servlets in the lib folder. Create an index page from where the data can be collected from the user and a request sent to the controller. Configure the front controller in DD. Create a SpringMVCcontroller as HelloWorldController. 
Add a method for accepting requests in the controller which performs business logic, and sends the view name and model name along with its value to the front controller. Create an XML file in WEB-INF as Front_Controller_name-servlet.xml and configure SpringMVCcontroller and ViewResolver. Create a JSP which acts as a view to display the data with the help of Expression Language (EL) and Java Standard Tag Library (JSTL). Let's create the application. The filesystem for the project is as follows: We have already created the dynamic web project Ch02_HelloSpring and added the required JAR files in lib folder. Let's start by creating index.jsp page as: <form action="hello.htm"> <tr> <td>NAME:</td> <td><input type="text" name="name"></td> </tr> <tr> <td></td> <td><input type="submit" value="ENTER"></td> </tr> </form>   When we submit the form, the request is sent to the resource which is mapped for the URL hello.htm. Spring MVC follows the front controller design pattern. So all the requests hitting the application will be first attended by the front controller and then it will send it to the respective Spring controllers. The front controller is mapped in DD as: <servlet> <servlet-name>HelloSpring</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet </servlet-class> </servlet> <servlet-mapping> <servlet-name>HelloSpring</servlet-name> <url-pattern>*.htm</url-pattern> </servlet-mapping> Now the controller needs help to find the Spring MVC controller. This will be taken care of by the configuration file. This file will have the name XXX-servlet.xml where XXX is replaced by the name of the front controller from DD. Here, in this case HelloSpring-servlet.xml will have the configuration. This file we need to keep in the WEB-INF folder. In the Configuration files section, we saw the structure of the file. In this file, the mapping will be done to find out how the package in which the controllers are kept will be configured. This is done as follows: <context:component-scan base-package="com.packt.ch02.controllers" /> Now the front controller will find the controller from the package specified as a value of base-package attribute. The front controller will now visit HelloController. This class has to be annotated by @Controller: @Controller public class HelloController { //code here } Once the front controller knows what the controller class is, the task of finding the appropriate method starts. This will be done by matching the values of @RequestMapping annotation applied either on the class or on the methods present in the class. In our case, the URL mapping is hello.htm. So the method will be developed as: @RequestMapping(value="/hello.htm") publicModelAndViewsayHello(HttpServletRequest request)   {     String name=request.getParameter("name"); ModelAndView mv=new ModelAndView(); mv.setViewName("hello");     String message="Hello "+name +" From Spring"; mv.addObject("message",message); return mv;   } This method will return a ModelAndView object which contains a view name, model name and value for the model. In our code the view name is hello and the model is presented by message. The Front Controller now again uses HelloSpring-servlet.xml for finding the ViewResolver to get the actual name and location of the view. ViewResolver will provide the directory name (location) where the view is placed with a property prefix. The format of the view is given by the property suffix. Using the view name, prefix and suffix, the front controller gets the page. 
The ViewResolver will bind the model to be used in the view: <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="prefix" value="/WEB-INF/jsps/" /> <property name="suffix" value=".jsp" /> </bean> In our case, it will be /WEB-INF/jsps/ as prefix, hello as the name of page, and .jsp is the suffix value. Combining them, we will get /WEB-INF/jsps/hello.jsp, which acts as our view. The Actual view is written as prefix+view_name from ModelAndView+suffix, for instance: /WEB-INF/jsps/+hello+.jsp The data is bounded by the front controller and the view will be able to use it: <h2>${message}</h2>. This page is now ready to be rendered by the browser, which will give output in the browser as: Displaying the home page for a Spring application Entering the name in the text field (for example, Bob) and submitting the form gives the following output: Showing an output when a name is entered by the user Now we understand the working of spring MVC, let's discuss a few more things required in order to develop the Spring MVC controller. Each class which we want to discover as the controller should be annotated with the @Controller annotation. In this class, there may be number of methods which can be invoked on request. The method which we want to map for URL has to be annotated with the annotation @RequestMapping. There can be more than one method mapped for the same URL but it will be invoked for different HTTP methods. This can be done as follows: @RequestMapping(value="/hello.htm",method= RequestMethod.GET) publicModelAndViewsayHello(HttpServletRequest request)   {       } @RequestMapping(value="/hello.htm",method= RequestMethod.POST) publicModelAndViewsayHello(HttpServletRequest request)   {      } These methods normally accept Request as parameter and will return ModelAndView. But the following return types and parameters are also supported. The following are some of the supported method argument types: HttpServletRequest HttpSession Java.util.Map/ org.springframework.ui.Model/ org.springframework.ui.ModelMap @PathVariable @RequestParam org.springframework.validation.Errors/ org.springframework.validation.BindingResult The following are some of the supported method return types: ModelAndView Model Map View String void Sometimes the bean configuration is scattered in more than one file. For example, we can have controller configuration in one file and database, security-related configuration in a separate file. In that case, we have to add extra configuration in DD to load multiple configuration files, as follows: <servlet> <servlet-name>HelloSpring</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet </servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/beans.xml</param-value> </init-param> </servlet> <servlet-mapping> <servlet-name>HelloSpring</servlet-name> <url-pattern>*.htm</url-pattern> </servlet-mapping> Summary In this article, we learned how the application will be single-handedly controlled by the front controller, the dispatcher servlet. The actual work of performing business logic and giving the name of the view, data model back will be done by the Spring MVC controller. The view will be resolved by the front controller with the help of the respective ViewResolver. The view will display the data got from the Spring MVC controller. 
To explore Spring MVC further, we need to study the web layer, the business logic layer, and the data layer in depth, building on the basics of Spring MVC discussed in this article.
Resources for Article:
Further resources on this subject:
Working with Spring Tag Libraries [article]
Working with JIRA [article]
A capability model for microservices [article]
Rapid Application Development with Django, the Openduty story

Bálint Csergő
01 Aug 2016
5 min read
Openduty is an open source incident escalation tool, which is something like Pagerduty but free and much simpler. It was born during a hackathon at Ustream back in 2014. The project received a lot of attention in the devops community, and was also featured in Devops weekly and Pycoders weekly. It is listed at Full Stack Python as an example Django project. This article covers some of the design decisions we made during the hackathon and details some of the main components of the Openduty system.

Design

When we started the project, we already knew what we wanted to end up with:

We had to work quickly (it was a hackathon, after all)
An API similar to Pagerduty
The ability to send notifications asynchronously
A nice calendar to organize on-call schedules (can't hurt anyone, right?)
Tokens for authorizing notifiers

So we chose the corresponding components to reach our goal.

Get the job done quickly

If you have to develop apps rapidly in Python, Django is the framework you choose. It's a bit heavyweight, but hey, it gives you everything you need and sometimes even more. Don't get me wrong; I'm a big fan of Flask also, but it can be a bit fiddly to assemble everything by hand at the start. Flask may pay off later, and you may win on a lower number of dependencies, but we only had 24 hours, so we went with Django.

An API

When it comes to Django and REST APIs, one of the go-to solutions is the Django REST Framework. It has all the nuts and bolts you'll need when you're assembling an API, like serializers, authentication, and permissions. It can even make all your API calls self-describing. Let me show you how serializers work in the REST Framework:

    class OnCallSerializer(serializers.Serializer):
        person = serializers.CharField()
        email = serializers.EmailField()
        start = serializers.DateTimeField()
        end = serializers.DateTimeField()

The code above represents a person who is on call on the API. As you can see, it is pretty simple; you just have to define the fields. It even does the validation for you, since you have to give a type to every field. But believe me, it's capable of more good things, like generating a serializer from your Django model:

    class SchedulePolicySerializer(serializers.HyperlinkedModelSerializer):
        rules = serializers.RelatedField(many=True, read_only=True)

        class Meta:
            model = SchedulePolicy
            fields = ('name', 'repeat_times', 'rules')

This example shows how you can customize a ModelSerializer, make fields read-only, and only accept given fields from an API call.

Async Task Execution

When you have long-running tasks, such as generating huge reports, resizing images, or even transcoding some media, it is common practice to move their actual execution out of your web app into a separate layer. This decreases the load on the web servers, helps avoid long or even timed-out requests, and just makes your app more resilient and scalable. In the Python world, the go-to solution for asynchronous task execution is called Celery. In Openduty, we use Celery heavily to send notifications asynchronously and also to delay the execution of any given notification task by the delay defined in the service settings.
Defining a task is this simple:

    @app.task(ignore_result=True)
    def send_notifications(notification_id):
        try:
            notification = ScheduledNotification.objects.get(id=notification_id)
            if notification.notifier == UserNotificationMethod.METHOD_XMPP:
                notifier = XmppNotifier(settings.XMPP_SETTINGS)
            # choosing notifier removed from example code snippet
            notifier.notify(notification)
            # logging task result removed from example snippet
        except Exception:  # except clause restored so the trimmed snippet stays valid
            raise

And calling an already defined task is almost as simple as calling any regular function:

    send_notifications.apply_async((notification.id,), eta=notification.send_at)

This means exactly what you think: send the notification with the ID notification.id at notification.send_at. But how do these things get executed? Under the hood, Celery wraps your decorated functions so that when you call them, they get enqueued instead of being executed directly. When the Celery worker detects that there is a task to be executed, it simply takes it from the queue and executes it asynchronously.

Calendar

We use django-scheduler for the awesome-looking calendar in Openduty. It is a pretty good project generally, supports recurring events, and provides you with a UI for your calendar, so you won't even have to fiddle with that.

Tokens and Auth

Service token implementation is a simple thing. You want tokens to be unique, and what else would you choose if not a UUID? There is a nice plugin for Django models used to handle UUID fields, called django-uuidfield. It just does what it says: it adds UUIDField support to your models. User authentication is a bit more interesting: we currently support plain Django users, and you can use LDAP as your user provider.

Summary

This was just a short summary of the design decisions made when we coded Openduty. I also demonstrated the power of the components through some relevant snippets. If you are on a short deadline, consider using Django and its extensions. There is a good chance that somebody has already done what you need to do, or something similar, which can always be adapted to your needs thanks to the awesome power of the open source community.

About the author

Bálint Csergő is a software engineer from Budapest, currently working as an infrastructure engineer at Hortonworks. He loves Unix systems, PHP, Python, Ruby, the Oracle database, Arduino, Java, C#, music, and beer.

Debugging Your .NET Application

Packt
21 Jul 2016
13 min read
In this article by Jeff Martin, author of the book Visual Studio 2015 Cookbook - Second Edition, we will discuss about how but modern software development still requires developers to identify and correct bugs in their code. The familiar edit-compile-test cycle is as familiar as a text editor, and now the rise of portable devices has added the need to measure for battery consumption and optimization for multiple architectures. Fortunately, our development tools continue to evolve to combat this rise in complexity, and Visual Studio continues to improve its arsenal. (For more resources related to this topic, see here.) Multi-threaded code and asynchronous code are probably the two most difficult areas for most developers to work with, and also the hardest to debug when you have a problem like a race condition. A race condition occurs when multiple threads perform an operation at the same time, and the order in which they execute makes a difference to how the software runs or the output is generated. Race conditions often result in deadlocks, incorrect data being used in other calculations, and random, unrepeatable crashes. The other painful area to debug involves code running on other machines, whether it is running locally on your development machine or running in production. Hooking up a remote debugger in previous versions of Visual Studio has been less than simple, and the experience of debugging code in production was similarly frustrating. In this article, we will cover the following sections: Putting Diagnostic Tools to work Maximizing everyday debugging Putting Diagnostic Tools to work In Visual Studio 2013, Microsoft debuted a new set of tools called the Performance and Diagnostics hub. With VS2015, these tools have revised further, and in the case of Diagnostic Tools, promoted to a central presence on the main IDE window, and is displayed, by default, during debugging sessions. This is great for us as developers, because now it is easier than ever to troubleshoot and improve our code. In this section, we will explore how Diagnostic Tools can be used to explore our code, identify bottlenecks, and analyze memory usage. Getting ready The changes didn't stop when VS2015 was released, and succeeding updates to VS2015 have further refined the capabilities of these tools. So for this section, ensure that Update 2 has been installed on your copy of VS2015. We will be using Visual Studio Community 2015, but of course, you may use one of the premium editions too. How to do it… For this section, we will put together a short program that will generate some activity for us to analyze: Create a new C# Console Application, and give it a name of your choice. In your project's new Program.cs file, add the following method that will generate a large quantity of strings: static List<string> makeStrings() { List<string> stringList = new List<string>(); Random random = new Random(); for (int i = 0; i < 1000000; i++) { string x = "String details: " + (random.Next(1000, 100000)); stringList.Add(x); } return stringList; } Next we will add a second static method that produces an SHA256-calculated hash of each string that we generated. This method reads in each string that was previously generated, creates an SHA256 hash for it, and returns the list of computed hashes in the hex format. 
static List<string> hashStrings(List<string> srcStrings) { List<string> hashedStrings = new List<string>(); SHA256 mySHA256 = SHA256Managed.Create(); StringBuilder hash = new StringBuilder(); foreach (string str in srcStrings) { byte[] srcBytes = mySHA256.ComputeHash(Encoding.UTF8.GetBytes(str), 0, Encoding.UTF8.GetByteCount(str)); foreach (byte theByte in srcBytes) { hash.Append(theByte.ToString("x2")); } hashedStrings.Add(hash.ToString()); hash.Clear(); } mySHA256.Clear(); return hashedStrings; } After adding these methods, you may be prompted to add using statements for System.Text and System.Security.Cryptography. These are definitely needed, so go ahead and take Visual Studio's recommendation to have them added. Now we need to update our Main method to bring this all together. Update your Main method to have the following: static void Main(string[] args) { Console.WriteLine("Ready to create strings"); Console.ReadKey(true); List<string> results = makeStrings(); Console.WriteLine("Ready to Hash " + results.Count() + " strings "); //Console.ReadKey(true); List<string> strings = hashStrings(results); Console.ReadKey(true); } Before proceeding, build your solution to ensure everything is in working order. Now run the application in the Debug mode (F5), and watch how our program operates. By default, the Diagnostic Tools window will only appear while debugging. Feel free to reposition your IDE windows to make their presence more visible or use Ctrl + Alt + F2 to recall it as needed. When you first launch the program, you will see the Diagnostic Tools window appear. Its initial display resembles the following screenshot. Thanks to the first ReadKey method, the program will wait for us to proceed, so we can easily see the initial state. Note that CPU usage is minimal, and memory usage holds constant. Before going any further, click on the Memory Usage tab, and then the Take Snapshot command as indicated in the preceding screenshot. This will record the current state of memory usage by our program, and will be a useful comparison point later on. Once a snapshot is taken, your Memory Usage tab should resemble the following screenshot: Having a forced pause through our ReadKey() method is nice, but when working with real-world programs, we will not always have this luxury. Breakpoints are typically used for situations where it is not always possible to wait for user input, so let's take advantage of the program's current state, and set two of them. We will put one to the second WriteLine method, and one to the last ReadKey method, as shown in the following screenshot: Now return to the open application window, and press a key so that execution continues. The program will stop at the first break point, which is right after it has generated a bunch of strings and added them to our List object. Let's take another snapshot of the memory usage using the same manner given in Step 9. You may also notice that the memory usage displayed in the Process Memory gauge has increased significantly, as shown in this screenshot: Now that we have completed our second snapshot, click on Continue in Visual Studio, and proceed to the next breakpoint. The program will then calculate hashes for all of the generated strings, and when this has finished, it will stop at our last breakpoint. Take another snapshot of the memory usage. Also take notice of how the CPU usage spiked as the hashes were being calculated: Now that we have these three memory snapshots, we will examine how they can help us. 
You may notice how memory usage increases during execution, especially from the initial snapshot to the second. Click on the second snapshot's object delta, as shown in the following screenshot: On clicking, this will open the snapshot details in a new editor window. Click on the Size (Bytes) column to sort by size, and as you may suspect, our List<String> object is indeed the largest object in our program. Of course, given the nature of our sample program, this is fairly obvious, but when dealing with more complex code bases, being able to utilize this type of investigation is very helpful. The following screenshot shows the results of our filter: If you would like to know more about the object itself (perhaps there are multiple objects of the same type), you can use the Referenced Types option as indicated in the preceding screenshot. If you would like to try this out on the sample program, be sure to set a smaller number in the makeStrings() loop, otherwise you will run the risk of overloading your system. Returning to the main Diagnostic Tools window, we will now examine CPU utilization. While the program is executing the hashes (feel free to restart the debugging session if necessary), you can observe where the program spends most of its time: Again, it is probably no surprise that most of the hard work was done in the hashStrings() method. But when dealing with real-world code, it will not always be so obvious where the slowdowns are, and having this type of insight into your program's execution will make it easier to find areas requiring further improvement. When using the CPU profiler in our example, you may find it easier to remove the first breakpoint and simply trigger a profiling by clicking on Break All as shown in this screenshot: How it works... Microsoft wanted more developers to be able to take advantage of their improved technology, so they have increased its availability beyond the Professional and Enterprise editions to also include Community. Running your program within VS2015 with the Diagnostic Tools window open lets you examine your program's performance in great detail. By using memory snapshots and breakpoints, VS2015 provides you with the tools needed to analyze your program's operation, and determine where you should spend your time making optimizations. There's more… Our sample program does not perform a wide variety of tasks, but of course, more complex programs usually perform well. To further assist with analyzing those programs, there is a third option available to you beyond CPU Usage and Memory Usage: the Events tab. As shown in the following screenshot, the Events tab also provides the ability to search events for interesting (or long-running) activities. Different event types include file activity, gestures (for touch-based apps), and program modules being loaded or unloaded. Maximizing everyday debugging Given the frequency of debugging, any refinement to these tools can pay immediate dividends. VS 2015 brings the popular Edit and Continue feature into the 21st century by supporting a 64-bit code. Added to that is the new ability to see the return value of functions in your debugger. The addition of these features combine to make debugging code easier, allowing to solve problems faster. Getting ready For this section, you can use VS 2015 Community or one of the premium editions. Be sure to run your choice on a machine using a 64-bit edition of Windows, as that is what we will be demonstrating in the section. 
Don't worry, you can still use Edit and Continue with 32-bit C# and Visual Basic code. How to do it… Both features are now supported by C#/VB, but we will be using C# for our examples. The features being demonstrated are compiler features, so feel free to use code from one of your own projects if you prefer. To see how Edit and Continue can benefit 64-bit development, perform the following steps: Create a new C# Console Application using the default name. To ensure the demonstration is running with 64-bit code, we need to change the default solution platform. Click on the drop-down arrow next to Any CPU, and select Configuration Manager... When the Configuration Manager dialog opens, we can create a new project platform targeting a 64-bit code. To do this, click on the drop-down menu for Platform, and select <New...>: When <New...> is selected, it will present the New Project Platform dialog box. Select x64 as the new platform type: Once x64 has been selected, you will return to Configuration Manager. Verify that x64 remains active under Platform, and then click on Close to close this dialog. The main IDE window will now indicate that x64 is active: With the project settings out of the face, let's add some code to demonstrate the new behavior. Replace the existing code in your blank class file so that it looks like the following listing: class Program { static void Main(string[] args) { int w = 16; int h = 8; int area = calcArea(w, h); Console.WriteLine("Area: " + area); } private static int calcArea(int width, int height) { return width / height; } } Let's set some breakpoints so that we are able to inspect during execution. First, add a breakpoint to the Main method's Console line. Add a second breakpoint to the calcArea method's return line. You can do this by either clicking on the left side of the editor window's border, or by right-clicking on the line, and selecting Breakpoint | Insert Breakpoint: If you are not sure where to click, use the right-click method, and then practice toggling the breakpoint by left-clicking on the breakpoint marker. Feel free to use whatever method you find most convenient. Once the two breakpoints are added, Visual Studio will mark their location as shown in the following screenshot (the arrow indicates where you may click to toggle the breakpoint): With the breakpoint marker now set, let's debug the program. Begin debugging by either pressing F5, or by clicking on the Start button on the toolbar: Once debugging starts, the program will quickly execute until stopped by the first breakpoint. Let's first take a look at Edit and Continue. Visual Studio will stop at the calcArea method's return line. Astute readers will notice an error (marked by 1 in the following screenshot) present in the calculation, as the area value returned should be width * height. Make the correction. Before continuing, note the variables listed in the Autos window (marked by 2 in the following screenshot). (If you don't see Autos, it can be made visible by pressing Ctrl + D, A, or through Debug | Windows | Autos while debugging.) After correcting the area calculation, advance the debugging step by pressing F10 twice. (Alternatively make the advancement by selecting the menu item Debug | Step Over twice). Visual Studio will advance to the declaration for the area. Note that you were able to edit your code and continue debugging without restarting. 
The Autos window will update to display the function's return value, which is 128 (the value for area has not been assigned yet in the following screenshot; Step Over once more if you would like to see it assigned):

There's more…

Programmers who write C++ have already had the ability to see the return values of functions; this simply brings .NET developers into the fold. The result is that your development experience won't have to suffer based on the language you have chosen to use for your project.

The Edit and Continue functionality is also available for ASP.NET projects. New projects created in VS2015 will have Edit and Continue enabled by default. Existing projects imported into VS2015 will usually need this to be enabled if it hasn't been done already. To do so, open the Options dialog via Tools | Options, and look for the Debugging | General section. The following screenshot shows where this option is located on the properties page:

Whether you are working with an ASP.NET project or a regular C#/VB .NET application, you can verify that Edit and Continue is set via this location.

Summary

In this article, we examined the improvements to the debugging experience in Visual Studio 2015, and how they can help you diagnose the root cause of a problem faster so that you can fix it properly, and not just patch over the symptoms.

Resources for Article:

Further resources on this subject: Creating efficient reports with Visual Studio [article] Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [article]

Managing EAP in Domain Mode

Packt
19 Jul 2016
7 min read
This article by Francesco Marchioni, author of the book Mastering JBoss Enterprise Application Platform 7, dives deep into application server management using the domain mode and its main components, and discusses how to shift to advanced configurations that resemble real-world projects. The main topics covered are:

Domain mode breakdown
Handy domain properties
Electing the domain controller

(For more resources related to this topic, see here.)

Domain mode breakdown

Managing the application server in the domain mode means, in a nutshell, controlling multiple servers from a single, centralized point of control. The servers that are part of the domain can span multiple machines (or even the cloud), and they can be grouped with similar servers of the domain to share a common configuration. To rationalize this, we will break down the domain components into two main categories:

Physical components: These are the domain elements that can be identified with a Java process running on the operating system
Logical components: These are the domain elements that can span across several physical components

Domain physical components

When you start the application server through the domain.sh script, you will be able to identify the following processes:

Host controller: Each domain installation contains a host controller. This is a Java process that is in charge of starting and stopping the servers that are defined within the host.xml file. The host controller is only aware of the items that are specific to the local physical installation, such as the domain controller host and port, the JVM settings of the servers, or their system properties.
Domain controller: One host controller of the domain (and only one) is configured to act as the domain controller. This means basically two things: keeping the domain configuration (in the domain.xml file) and assisting the host controllers in managing the servers of the domain.
Servers: Each host controller can contain any number of servers, which are the actual server instances. These server instances cannot be started autonomously. The host controller is in charge of starting/stopping single servers when the domain controller commands it to.

If you start the default domain configuration on a Linux machine, you will see that the following processes show up in your operating system: As you can see, the process controller is identified by the [Process Controller] label, while the domain controller corresponds to the [Host Controller] label. Each server shows in the process table with the name defined in the host.xml file. You can use common operating system commands such as grep to further restrict the search to a specific process.

Domain logical components

A domain configuration with only physical elements in it would not add much compared to a set of standalone servers. The following components abstract the domain definition, making it dynamic and flexible:

Server Group: A server group is a collection of servers. They are defined in the domain.xml file, hence they don't have any reference to an actual host controller installation. You can use a server group to share configuration and deployments across a group of servers.
Profile: A profile is an EAP configuration. A domain can hold as many profiles as you need. Out of the box, the following configurations are provided:

default: This configuration matches the standalone.xml configuration (in standalone mode), hence it does not include JMS, IIOP, or HA.
full: This configuration matches the standalone-full.xml configuration (in standalone mode), hence it adds JMS and OpenJDK IIOP to the default configuration.
ha: This configuration matches the standalone-ha.xml configuration (in standalone mode), so it enhances the default configuration with clustering (HA).
full-ha: This configuration matches the standalone-full-ha.xml configuration (in standalone mode), hence it includes JMS, IIOP, and HA.

Handy domain properties

So far, we have learnt the default configuration files used by JBoss EAP and the location where they are placed. These settings can, however, be varied by means of system properties. The following options customize the domain configuration file names:

--domain-config: The domain configuration file (default domain.xml)
--host-config: The host configuration file (default host.xml)

On the other hand, the following properties adjust the domain directory structure:

jboss.domain.base.dir: The base directory for domain content
jboss.domain.config.dir: The base configuration directory
jboss.domain.data.dir: The directory used for persistent data file storage
jboss.domain.log.dir: The directory containing the host-controller.log and process-controller.log files
jboss.domain.temp.dir: The directory used for temporary file storage
jboss.domain.deployment.dir: The directory used to store deployed content
jboss.domain.servers.dir: The directory containing the managed server instances

For example, you can start EAP 7 in domain mode using the domain configuration file mydomain.xml and the host file named myhost.xml, based on the base directory /home/jboss/eap7domain, with the following command:

$ ./domain.sh --domain-config=mydomain.xml --host-config=myhost.xml -Djboss.domain.base.dir=/home/jboss/eap7domain

Electing the domain controller

Before creating your first domain, we will look in more detail at the process that connects one or more host controllers to one domain controller, and at how to elect a host controller to be the domain controller. The physical topology of the domain is stored in the host.xml file. Within this file, you will find, as the first line, the host controller name, which makes each host controller unique:

<host name="master">

One of the host controllers will be configured to act as the domain controller.
This is done in the domain-controller section with the following block, which states that the domain controller is the host controller itself (hence, local):

<domain-controller>
  <local/>
</domain-controller>

All other host controllers will connect to the domain controller, using the following example configuration, which uses the jboss.domain.master.address and jboss.domain.master.port properties to specify the domain controller address and port:

<domain-controller>
  <remote protocol="remote" host="${jboss.domain.master.address}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>

The host controller-to-domain controller communication happens behind the scenes through a management native port, which is also defined in the host.xml file:

<management-interfaces>
  <native-interface security-realm="ManagementRealm">
    <socket interface="management" port="${jboss.management.native.port:9999}"/>
  </native-interface>
  <http-interface security-realm="ManagementRealm" http-upgrade-enabled="true">
    <socket interface="management" port="${jboss.management.http.port:9990}"/>
  </http-interface>
</management-interfaces>

The other highlighted attribute is the management HTTP port (jboss.management.http.port), which can be used by the administrator to reach the domain controller. This port is especially relevant if the host controller is the domain controller. Both sockets use the management interface, which is defined in the interfaces section of the host.xml file and exposes the domain controller on a network-available address:

<interfaces>
  <interface name="management">
    <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
  </interface>
  <interface name="public">
    <inet-address value="${jboss.bind.address:127.0.0.1}"/>
  </interface>
</interfaces>

If you want to run multiple host controllers on the same machine, you need to provide a unique jboss.management.native.port for each host controller, or a different jboss.bind.address.management.

Summary

In this article, we covered some essentials of the domain mode breakdown, handy domain properties, and electing the domain controller.

Resources for Article:

Further resources on this subject: Red5: A video-on-demand Flash Server [article] Animating Elements [article] Data Science with R [article]

Reactive Programming with C#

Packt
18 Jul 2016
30 min read
In this article by Antonio Esposito from the book Reactive Programming for .NET Developers , we will see a practical example of what is reactive programming with pure C# coding. The following topics will be discussed here: IObserver interface IObservable interface Subscription life cycle Sourcing events Filtering events Correlating events Sourcing from CLR streams Sourcing from CLR enumerables (For more resources related to this topic, see here.) IObserver interface This core level interface is available within the Base Class Library (BCL) of .NET 4.0 and is available for the older 3.5 as an add-on. The usage is pretty simple and the goal is to provide a standard way of handling the most basic features of any reactive message consumer. Reactive messages flow by a producer and a consumer and subscribe for some messages. The IObserver C# interface is available to construct message receivers that comply with the reactive programming layout by implementing the three main message-oriented events: a message received, an error received, and a task completed message. The IObserver interface has the following sign and description: // Summary: // Provides a mechanism for receiving push-based notifications. // // Type parameters: // T: // The object that provides notification information.This type parameter is // contravariant. That is, you can use either the type you specified or any // type that is less derived. For more information about covariance and contravariance, // see Covariance and Contravariance in Generics. public interface IObserver<in T> { // Summary: // Notifies the observer that the provider has finished sending push-based notifications. void OnCompleted(); // // Summary: // Notifies the observer that the provider has experienced an error condition. // // Parameters: // error: // An object that provides additional information about the error. void OnError(Exception error); // // Summary: // Provides the observer with new data. // // Parameters: // value: // The current notification information. void OnNext(T value); } Any new message to flow to the receiver implementing such an interface will reach the OnNext method. Any error will reach the OnError method, while the task completed acknowledgement message will reach the OnCompleted method. The usage of an interface means that we cannot use generic premade objects from the BCL. We need to implement any receiver from scratch by using such an interface as a service contract. Let's see an example because talking about a code example is always simpler than talking about something theoretic. The following examples show how to read from a console application command from a user in a reactive way: cass Program { static void Main(string[] args) { //creates a new console input consumer var consumer = new ConsoleTextConsumer(); while (true) { Console.WriteLine("Write some text and press ENTER to send a messagernPress ENTER to exit"); //read console input var input = Console.ReadLine(); //check for empty messate to exit if (string.IsNullOrEmpty(input)) { //job completed consumer.OnCompleted(); Console.WriteLine("Task completed. 
Any further message will generate an error"); } else { //route the message to the consumer consumer.OnNext(input); } } } } public class ConsoleTextConsumer : IObserver<string> { private bool finished = false; public void OnCompleted() { if (finished) { OnError(new Exception("This consumer already finished it's lifecycle")); return; } finished = true; Console.WriteLine("<- END"); } public void OnError(Exception error) { Console.WriteLine("<- ERROR"); Console.WriteLine("<- {0}", error.Message); } public void OnNext(string value) { if (finished) { OnError(new Exception("This consumer finished its lifecycle")); return; } //shows the received message Console.WriteLine("-> {0}", value); //do something //ack the caller Console.WriteLine("<- OK"); } } The preceding example shows the IObserver interface usage within the ConsoleTextConsumer class that simply asks a command console (DOS-like) for the user input text to do something. In this implementation, the class simply writes out the input text because we simply want to look at the reactive implementation. The first important concept here is that a message consumer knows nothing about how messages are produced. The consumer simply reacts to one of the tree events (not CLR events). Besides this, some kind of logic and cross-event ability is also available within the consumer itself. In the preceding example, we can see that the consumer simply showed any received message again on the console. However, if a complete message puts the consumer in a finished state (by signaling the finished flag), any other message that comes on the OnNext method will be automatically routed to the error one. Likewise, any other complete message that will reach the consumer will produce another error once the consumer is already in the finished state. IObservable interface The IObservable interface, the opposite of the IObserver interface, has the task of handling message production and the observer subscription. It routes right messages to the OnNext message handler and errors to the OnError message handler. At its life cycle end, it acknowledges all the observers on the OnComplete message handler. To create a valid reactive observable interface, we must write something that is not locking against user input or any other external system input data. The observable object acts as an infinite message generator, something like an infinite enumerable of messages; although in such cases, there is no enumeration. Once a new message is available somehow, observer routes it to all the subscribers. In the following example, we will try creating a console application to ask the user for an integer number and then route such a number to all the subscribers. Otherwise, if the given input is not a number, an error will be routed to all the subscribers. This is observer similar to the one already seen in the previous example. 
Take a look at the following codes: /// <summary> /// Consumes numeric values that divides without rest by a given number /// </summary> public class IntegerConsumer : IObserver<int> { readonly int validDivider; //the costructor asks for a divider public IntegerConsumer(int validDivider) { this.validDivider = validDivider; } private bool finished = false; public void OnCompleted() { if (finished) OnError(new Exception("This consumer already finished it's lifecycle")); else { finished = true; Console.WriteLine("{0}: END", GetHashCode()); } } public void OnError(Exception error) { Console.WriteLine("{0}: {1}", GetHashCode(), error.Message); } public void OnNext(int value) { if (finished) OnError(new Exception("This consumer finished its lifecycle")); //the simple business logic is made by checking divider result else if (value % validDivider == 0) Console.WriteLine("{0}: {1} divisible by {2}", GetHashCode(), value, validDivider); } } This observer consumes integer numeric messages, but it requires that the number is divisible by another one without producing any rest value. This logic, because of the encapsulation principle, is within the observer object. The observable interface, instead, only has the logic of the message sending of valid or error messages. This filtering logic is made within the receiver itself. Although that is not something wrong, in more complex applications, specific filtering features are available in the publish-subscribe communication pipeline. In other words, another object will be available between observable (publisher) and observer (subscriber) that will act as a message filter. Back to our numeric example, here we have the observable implementation made using an inner Task method that does the main job of parsing input text and sending messages. 
In addition, a cancellation token is available to handle the user cancellation request and an eventual observable dispose: //Observable able to parse strings from the Console //and route numeric messages to all subscribers public class ConsoleIntegerProducer : IObservable<int>, IDisposable { //the subscriber list private readonly List<IObserver<int>> subscriberList = new List<IObserver<int>>(); //the cancellation token source for starting stopping //inner observable working thread private readonly CancellationTokenSource cancellationSource; //the cancellation flag private readonly CancellationToken cancellationToken; //the running task that runs the inner running thread private readonly Task workerTask; public ConsoleIntegerProducer() { cancellationSource = new CancellationTokenSource(); cancellationToken = cancellationSource.Token; workerTask = Task.Factory.StartNew(OnInnerWorker, cancellationToken); } //add another observer to the subscriber list public IDisposable Subscribe(IObserver<int> observer) { if (subscriberList.Contains(observer)) throw new ArgumentException("The observer is already subscribed to this observable"); Console.WriteLine("Subscribing for {0}", observer.GetHashCode()); subscriberList.Add(observer); return null; } //this code executes the observable infinite loop //and routes messages to all observers on the valid //message handler private void OnInnerWorker() { while (!cancellationToken.IsCancellationRequested) { var input = Console.ReadLine(); int value; foreach (var observer in subscriberList) if (string.IsNullOrEmpty(input)) break; else if (input.Equals("EXIT")) { cancellationSource.Cancel(); break; } else if (!int.TryParse(input, out value)) observer.OnError(new FormatException("Unable to parse given value")); else observer.OnNext(value); } cancellationToken.ThrowIfCancellationRequested(); } //cancel main task and ack all observers //by sending the OnCompleted message public void Dispose() { if (!cancellationSource.IsCancellationRequested) { cancellationSource.Cancel(); while (!workerTask.IsCanceled) Thread.Sleep(100); } cancellationSource.Dispose(); workerTask.Dispose(); foreach (var observer in subscriberList) observer.OnCompleted(); } //wait until the main task completes or went cancelled public void Wait() { while (!(workerTask.IsCompleted || workerTask.IsCanceled)) Thread.Sleep(100); } } To complete the example, here there is the program Main: static void Main(string[] args) { //this is the message observable responsible of producing messages using (var observer = new ConsoleIntegerProducer()) //those are the message observer that consume messages using (var consumer1 = observer.Subscribe(new IntegerConsumer(2))) using (var consumer2 = observer.Subscribe(new IntegerConsumer(3))) using (var consumer3 = observer.Subscribe(new IntegerConsumer(5))) observer.Wait(); Console.WriteLine("END"); Console.ReadLine(); } The cancellationToken.ThrowIfCancellationRequested may raise an exception in your Visual Studio when debugging. Simply go next by pressing F5, or test such code example without the attached debugger by starting the test with Ctrl + F5 instead of the F5 alone. The application simply creates an observable variable, which is able to parse user data. Then, register three observers specifying to each observer variables the wanted valid divider value. Then, the observable variable will start reading user data from the console and valid or error messages will flow to all the observers. 
Each observer will apply its internal logic of showing the message when it divides for the related divider. Here is the result of executing the application: Observables and observers in action Subscription life cycle What will happen if we want to stop a single observer from receiving messages from the observable event source? If we change the program Main from the preceding example to the following one, we could experience a wrong observer life cycle design. Here's the code: //this is the message observable responsible of producing messages using (var observer = new ConsoleIntegerProducer()) //those are the message observer that consume messages using (var consumer1 = observer.Subscribe(new IntegerConsumer(2))) using (var consumer2 = observer.Subscribe(new IntegerConsumer(3))) { using (var consumer3 = observer.Subscribe(new IntegerConsumer(5))) { //internal lifecycle } observer.Wait(); } Console.WriteLine("END"); Console.ReadLine(); Here is the result in the output console: The third observer unable to catch value messages By using the using construct method, we should stop the life cycle of the consumer object. However, we do not, because in the previous example, the Subscribe method of the observable simply returns a NULL object. To create a valid observer, we must handle and design its life cycle management. This means that we must eventually handle the external disposing of the Subscribe method's result by signaling the right observer that his life cycle reached the end. We have to create a Subscription class to handle an eventual object disposing in the right reactive way by sending the message for the OnCompleted event handler. Here is a simple Subscription class implementation: /// <summary> /// Handle observer subscription lifecycle /// </summary> public sealed class Subscription<T> : IDisposable { private readonly IObserver<T> observer; public Subscription(IObserver<T> observer) { this.observer = observer; } //the event signalling that the observer has //completed its lifecycle public event EventHandler<IObserver<T>> OnCompleted; public void Dispose() { if (OnCompleted != null) OnCompleted(this, observer); observer.OnCompleted(); } } The usage is within the observable Subscribe method. Here's an example: //add another observer to the subscriber list public IDisposable Subscribe(IObserver<int> observer) { if (observerList.Contains(observer)) throw new ArgumentException("The observer is already subscribed to this observable"); Console.WriteLine("Subscribing for {0}", observer.GetHashCode()); observerList.Add(observer); //creates a new subscription for the given observer var subscription = new Subscription<int>(observer); //handle to the subscription lifecycle end event subscription.OnCompleted += OnObserverLifecycleEnd; return subscription; } void OnObserverLifecycleEnd(object sender, IObserver<int> e) { var subscription = sender as Subscription<int>; //remove the observer from the internal list within the observable observerList.Remove(e); //remove the handler from the subscription event //once already handled subscription.OnCompleted -= OnObserverLifecycleEnd; } As visible, the preceding example creates a new Subscription<T> object to handle this observer life cycle with the IDisposable.Dispose method. Here is the result of such code edits against the full example available in the previous paragraph: The observer will end their life as we dispose their life cycle tokens This time, an observer ends up its life cycle prematurely by disposing the subscription object. 
This is visible by the first END message. Later, only two observers remain available at the application ending; when the user asks for EXIT, only such two observers end their life cycle by themselves rather than by the Subscription disposing. In real-world applications, often, observers subscribe to observables and later unsubscribe by disposing the Subscription token. This happens because we do not always want a reactive module to handle all the messages. In this case, this means that we have to handle the observer life cycle by ourselves, as we already did in the previous examples, or we need to apply filters to choose which messages flows to which subscriber, as visible in the later section Filtering events. Kindly consider that although filters make things easier, we will always have to handle the observer life cycle. Sourcing events Sourcing events is the ability to obtain from a particular source where few useful events are usable in reactive programming. Reactive programming is all about event message handling. Any event is a specific occurrence of some kind of handleable behavior of users or external systems. We can actually program event reactions in the most pleasant and productive way for reaching our software goals. In the following example, we will see how to react to CLR events. In this specific case, we will handle filesystem events by using events from the System.IO.FileSystemWatcher class that gives us the ability to react to the filesystem's file changes without the need of making useless and resource-consuming polling queries against the file system status. Here's the observer and observable implementation: public sealed class NewFileSavedMessagePublisher : IObservable<string>, IDisposable { private readonly FileSystemWatcher watcher; public NewFileSavedMessagePublisher(string path) { //creates a new file system event router this.watcher = new FileSystemWatcher(path); //register for handling File Created event this.watcher.Created += OnFileCreated; //enable event routing this.watcher.EnableRaisingEvents = true; } //signal all observers a new file arrived private void OnFileCreated(object sender, FileSystemEventArgs e) { foreach (var observer in subscriberList) observer.OnNext(e.FullPath); } //the subscriber list private readonly List<IObserver<string>> subscriberList = new List<IObserver<string>>(); public IDisposable Subscribe(IObserver<string> observer) { //register the new observer subscriberList.Add(observer); return null; } public void Dispose() { //disable file system event routing this.watcher.EnableRaisingEvents = false; //deregister from watcher event handler this.watcher.Created -= OnFileCreated; //dispose the watcher this.watcher.Dispose(); //signal all observers that job is done foreach (var observer in subscriberList) observer.OnCompleted(); } } /// <summary> /// A tremendously basic implementation /// </summary> public sealed class NewFileSavedMessageSubscriber : IObserver<string> { public void OnCompleted() { Console.WriteLine("-> END"); } public void OnError(Exception error) { Console.WriteLine("-> {0}", error.Message); } public void OnNext(string value) { Console.WriteLine("-> {0}", value); } } The observer interface simply gives us the ability to write text to the console. I think, there is nothing to say about it. On the other hand, the observable interface makes the most of the job in this implementation. The observable interface creates the watcher object and registers the right event handler to catch the wanted reactive events. 
It handles the life cycle of itself and the internal watcher object. Then, it correctly sends the OnComplete message to all the observers. Here's the program's initialization: static void Main(string[] args) { Console.WriteLine("Watching for new files"); using (var publisher = new NewFileSavedMessagePublisher(@"[WRITE A PATH HERE]")) using (var subscriber = publisher.Subscribe(new NewFileSavedMessageSubscriber())) { Console.WriteLine("Press RETURN to exit"); //wait for user RETURN Console.ReadLine(); } } Any new file that arises in the folder will let route  full FileName to observer. This is the result of a copy and paste of the same file three times: -> [YOUR PATH]out - Copy.png-> [YOUR PATH]out - Copy (2).png-> [YOUR PATH]out - Copy (3).png By using a single observable interface and a single observer interface, the power of reactive programming is not so evident. Let's begin with writing some intermediate object to change the message flow within the pipeline of our message pump made in a reactive way with filters, message correlator, and dividers. Filtering events As said in the previous section, it is time to alter message flow. The observable interface has the task of producing messages, while observer at the opposite consumes such messages. To create a message filter, we need to create an object that is a publisher and subscriber altogether. The implementation must take into consideration the filtering need and the message routing to underlying observers that subscribes to the filter observable object instead of the main one. Here's an implementation of the filter: /// <summary> /// The filtering observable/observer /// </summary> public sealed class StringMessageFilter : IObservable<string>, IObserver<string>, IDisposable { private readonly string filter; public StringMessageFilter(string filter) { this.filter = filter; } //the observer collection private readonly List<IObserver<string>> observerList = new List<IObserver<string>>(); public IDisposable Subscribe(IObserver<string> observer) { this.observerList.Add(observer); return null; } //a simple implementation //that disables message routing once //the OnCompleted has been invoked private bool hasCompleted = false; public void OnCompleted() { hasCompleted = true; foreach (var observer in observerList) observer.OnCompleted(); } //routes error messages until not completed public void OnError(Exception error) { if (!hasCompleted) foreach (var observer in observerList) observer.OnError(error); } //routes valid messages until not completed public void OnNext(string value) { Console.WriteLine("Filtering {0}", value); if (!hasCompleted && value.ToLowerInvariant().Contains(filter.ToLowerInvariant())) foreach (var observer in observerList) observer.OnNext(value); } public void Dispose() { OnCompleted(); } } This filter can be used together with the example from the previous section that routes the FileSystemWatcher events of created files. 
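Because StringMessageFilter implements both IObservable<string> and IObserver<string>, nothing stops you from chaining several filters in front of the final observer. The following is a minimal, hypothetical sketch (the txtFilter and reportFilter names and the "report" filter value are illustrative only; it reuses the publisher, filter, and subscriber classes defined above and is not part of the original recipe):

static void Main(string[] args)
{
    using (var publisher = new NewFileSavedMessagePublisher(@"[WRITE A PATH HERE]"))
    using (var txtFilter = new StringMessageFilter(".txt"))
    using (var reportFilter = new StringMessageFilter("report"))
    {
        //publisher -> first filter
        publisher.Subscribe(txtFilter);
        //first filter -> second filter
        txtFilter.Subscribe(reportFilter);
        //second filter -> final observer: only file names containing
        //both ".txt" and "report" reach the console subscriber
        reportFilter.Subscribe(new NewFileSavedMessageSubscriber());
        Console.ReadLine();
    }
}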
This is the new program initialization: static void Main(string[] args) { Console.WriteLine("Watching for new files"); using (var publisher = new NewFileSavedMessagePublisher(@"[WRITE A PATH HERE]")) using (var filter = new StringMessageFilter(".txt")) { //subscribe the filter to publisher messages publisher.Subscribe(filter); //subscribe the console subscriber to the filter //instead that directly to the publisher filter.Subscribe(new NewFileSavedMessageSubscriber()); Console.WriteLine("Press RETURN to exit"); Console.ReadLine(); } } As visible, this new implementation creates a new filter object that takes parameter to verify valid filenames to flow to the underlying observers. The filter subscribes to the main observable object, while the observer subscribes to the filter itself. It is like a chain where each chain link refers to the near one. This is the output console of the running application: The filtering observer in action Although I made a copy of two files (a .png and a .txt file), we can see that only the text file reached the internal observer object, while the image file reached the OnNext of filter because the invalid against the filter argument never reached internal observer. Correlating events Sometimes, especially when dealing with integration scenarios, there is the need of correlating multiple events that not always came altogether. This is the case of a header file that came together with multiple body files. In reactive programming, correlating events means correlating multiple observable messages into a single message that is the result of two or more original messages. Such messages must be somehow correlated to a value (an ID, serial, or metadata) that defines that such initial messages belong to the same correlation set. Useful features in real-world correlators are the ability to specify a timeout (that may be infinite too) in the correlation waiting logic and the ability to specify a correlation message count (infinite too). Here's a correlator implementation made for the previous example based on the FileSystemWatcher class: public sealed class FileNameMessageCorrelator : IObservable<string>, IObserver<string>, IDisposable { private readonly Func<string, string> correlationKeyExtractor; public FileNameMessageCorrelator(Func<string, string> correlationKeyExtractor) { this.correlationKeyExtractor = correlationKeyExtractor; } //the observer collection private readonly List<IObserver<string>> observerList = new List<IObserver<string>>(); public IDisposable Subscribe(IObserver<string> observer) { this.observerList.Add(observer); return null; } private bool hasCompleted = false; public void OnCompleted() { hasCompleted = true; foreach (var observer in observerList) observer.OnCompleted(); } //routes error messages until not completed public void OnError(Exception error) { if (!hasCompleted) foreach (var observer in observerList) observer.OnError(error); } Just a pause. Up to this row, we simply created the reactive structure of FileNameMessageCorrelator class by implementing the two main interfaces. 
Here is the core implementation that correlates messages: //the container of correlations able to contain //multiple strings per each key private readonly NameValueCollection correlations = new NameValueCollection(); //routes valid messages until not completed public void OnNext(string value) { if (hasCompleted) return; //check if subscriber has completed Console.WriteLine("Parsing message: {0}", value); //try extracting the correlation ID var correlationID = correlationKeyExtractor(value); //check if the correlation is available if (correlationID == null) return; //append the new file name to the correlation state correlations.Add(correlationID, value); //in this example we will consider always //correlations of two items if (correlations.GetValues(correlationID).Count() == 2) { //once the correlation is complete //read the two files and push the //two contents altogether to the //observers var fileData = correlations.GetValues(correlationID) //route messages to the ReadAllText method .Select(File.ReadAllText) //materialize the query .ToArray(); var newValue = string.Join("|", fileData); foreach (var observer in observerList) observer.OnNext(newValue); correlations.Remove(correlationID); } } This correlator class accepts a correlation function as a constructor parameter. This function is later used to evaluate correlationID when a new filename variable flows within the OnNext method. Once the function returns valid correlationID, such IDs will be used as key for NameValueCollection, a specialized string collection to store multiple values per key. When there are two values for the same key, correlation is ready to flow out to the underlying observers by reading file data and joining such data into a single string message. Here's the application's initialization: static void Main(string[] args) { using (var publisher = new NewFileSavedMessagePublisher(@"[WRITE A PATH HERE]")) //creates a new correlator by specifying the correlation key //extraction function made with a Regular expression that //extract a file ID similar to FILEID0001 using (var correlator = new FileNameMessageCorrelator(ExtractCorrelationKey)) { //subscribe the correlator to publisher messages publisher.Subscribe(correlator); //subscribe the console subscriber to the correlator //instead that directly to the publisher correlator.Subscribe(new NewFileSavedMessageSubscriber()); //wait for user RETURN Console.ReadLine(); } } private static string ExtractCorrelationKey(string arg) { var match = Regex.Match(arg, "(FILEID\d{4})"); if (match.Success) return match.Captures[0].Value; else return null; } The initialization is quite the same of the filtering example seen in the previous section. The biggest difference is that the correlator object, instead of a string filter variable, accepts a function that analyses the incoming filename and produces the eventually available correlationID variable. I prepared two files with the same ID in filename variable. Here's the console output of the running example: Two files correlated by their name As visible, correlator made its job by joining the two file's data into a single message regardless of the order in which the two files were stored in the filesystem. These examples regarding the filtering and correlation of messages should give you the idea that we can do anything with received messages. We can put a message in standby until a correlated message comes, we can join multiple messages into one, we can produce multiple times the same message, and so on. 
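As an illustration of that last point, here is a small, hypothetical intermediate object (not part of the original example) that re-publishes every message it receives a fixed number of times. The StringMessageRepeater name and the copies parameter are assumptions made for this sketch; it follows the same IObservable<string>/IObserver<string> pattern as the filter shown earlier and assumes the usual System and System.Collections.Generic namespaces:

public sealed class StringMessageRepeater : IObservable<string>, IObserver<string>
{
    private readonly int copies;
    //the observer collection
    private readonly List<IObserver<string>> observerList = new List<IObserver<string>>();

    public StringMessageRepeater(int copies)
    {
        this.copies = copies;
    }

    public IDisposable Subscribe(IObserver<string> observer)
    {
        observerList.Add(observer);
        //subscription lifecycle omitted for readability, as in the previous examples
        return null;
    }

    public void OnCompleted()
    {
        foreach (var observer in observerList)
            observer.OnCompleted();
    }

    public void OnError(Exception error)
    {
        foreach (var observer in observerList)
            observer.OnError(error);
    }

    public void OnNext(string value)
    {
        //route the same message multiple times to every subscriber
        for (var i = 0; i < copies; i++)
            foreach (var observer in observerList)
                observer.OnNext(value);
    }
}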
This programming style opens the programmer's mind to lot of new application designs and possibilities. Sourcing from CLR streams Any class that extends System.IO.Stream is some kind of cursor-based flow of data. The same happens when we want to see a video stream, a sort of locally not persisted data that flows only in the network with the ability to go forward and backward, stop, pause, resume, play, and so on. The same behavior is available while streaming any kind of data, thus, the Stream class is the base class that exposes such behavior for any need. There are specialized classes that extend Stream, helping work with the streams of text data (StreamWriter and StreamReader), binary serialized data (BinaryReader and BinaryWriter), memory-based temporary byte containers (MemoryStream), network-based streams (NetworkStream), and lot of others. Regarding reactive programming, we are dealing with the ability to source events from any stream regardless of its kind (network, file, memory, and so on). Real-world applications that use reactive programming based on streams are cheats, remote binary listeners (socket programming), and any other unpredictable event-oriented applications. On the other hand, it is useless to read a huge file in reactive way, because there is simply nothing reactive in such cases. It is time to look at an example. Here's a complete example of a reactive application made for listening to a TPC port and route string messages (CR + LF divides multiple messages) to all the available observers. The program Main and the usual ConsoleObserver methods are omitted for better readability: public sealed class TcpListenerStringObservable : IObservable<string>, IDisposable { private readonly TcpListener listener; public TcpListenerStringObservable(int port, int backlogSize = 64) { //creates a new tcp listener on given port //with given backlog size listener = new TcpListener(IPAddress.Any, port); listener.Start(backlogSize); //start listening asynchronously listener.AcceptTcpClientAsync().ContinueWith(OnTcpClientConnected); } private void OnTcpClientConnected(Task<TcpClient> clientTask) { //if the task has not encountered errors if (clientTask.IsCompleted) //we will handle a single client connection per time //to handle multiple connections, simply put following //code into a Task using (var tcpClient = clientTask.Result) using (var stream = tcpClient.GetStream()) using (var reader = new StreamReader(stream)) while (tcpClient.Connected) { //read the message var line = reader.ReadLine(); //stop listening if nothing available if (string.IsNullOrEmpty(line)) break; else { //construct observer message adding client's remote endpoint address and port var msg = string.Format("{0}: {1}", tcpClient.Client.RemoteEndPoint, line); //route messages foreach (var observer in observerList) observer.OnNext(msg); } } //starts another client listener listener.AcceptTcpClientAsync().ContinueWith(OnTcpClientConnected); } private readonly List<IObserver<string>> observerList = new List<IObserver<string>>(); public IDisposable Subscribe(IObserver<string> observer) { observerList.Add(observer); //subscription lifecycle missing //for readability purpose return null; } public void Dispose() { //stop listener listener.Stop(); } } The preceding example shows how to create a reactive TCP listener that acts as observable of string messages. The observable method uses an internal TcpListener class that provides mid-level network services across an underlying Socket object. 
The example asks the listener to start listening and starts waiting for a client into another thread with the usage of a Task object. When a remote client becomes available, its communication with the internals of observable is guaranteed by the OnTcpClientConneted method that verifies the normal execution of Task. Then, it catches TcpClient from Task, reads the network stream, and appends StreamReader to such a network stream to start a reading feature. Once the message reading feature is complete, another Task starts repeating the procedure. Although, this design handles a backlog of pending connections, it makes available only a single client per time. To change such designs to handle multiple connections altogether, simply encapsulate the OnTcpClientConnected logic. Here's an example: private void OnTcpClientConnected(Task<TcpClient> clientTask) { //if the task has not encountered errors if (clientTask.IsCompleted) Task.Factory.StartNew(() => { using (var tcpClient = clientTask.Result) using (var stream = tcpClient.GetStream()) using (var reader = new StreamReader(stream)) while (tcpClient.Connected) { //read the message var line = reader.ReadLine(); //stop listening if nothing available if (string.IsNullOrEmpty(line)) break; else { //construct observer message adding client's remote endpoint address and port var msg = string.Format("{0}: {1}", tcpClient.Client.RemoteEndPoint, line); //route messages foreach (var observer in observerList) observer.OnNext(msg); } } }, TaskCreationOptions.PreferFairness); //starts another client listener listener.AcceptTcpClientAsync().ContinueWith(OnTcpClientConnected); } This is the output of the reactive application when it receives two different connections by using telnet as a client (C:>telnet localhost 8081). The program Main and the usual ConsoleObserver methods are omitted for better readability: The observable routing events from the telnet client As you can see, each client starts connecting to the listener by using a different remote port. This gives us the ability to differentiate multiple remote connections although they connect altogether. Sourcing from CLR enumerables Sourcing from a finite collection is something useless with regard to reactive programming. Differently, specific enumerable collections are perfect for reactive usages. These collections are the changeable collections that support collection change notifications by implementing the INotifyCollectionChanged(System.Collections.Specialized) interface like the ObservableCollection(System.Collections.ObjectModel) class and any infinite collection that supports the enumerator pattern with the usage of the yield keyword. Changeable collections The ObservableCollection<T> class gives us the ability to understand, in an event-based way, any change that occurs against the collection content. Kindly consider that changes regarding collection child properties are outside of the collection scope. This means that we are notified only for collection changes like the one produced from the Add or Remove methods. Changes within a single item does not produce an alteration of the collection size, thus, they are not notified at all. 
Here's a generic (nonreactive) example: static void Main(string[] args) { //the observable collection var collection = new ObservableCollection<string>(); //register a handler to catch collection changes collection.CollectionChanged += OnCollectionChanged; collection.Add("ciao"); collection.Add("hahahah"); collection.Insert(0, "new first line"); collection.RemoveAt(0); Console.WriteLine("Press RETURN to EXIT"); Console.ReadLine(); } private static void OnCollectionChanged(object sender, NotifyCollectionChangedEventArgs e) { var collection = sender as ObservableCollection<string>; if (e.NewStartingIndex >= 0) //adding new items Console.WriteLine("-> {0} {1}", e.Action, collection[e.NewStartingIndex]); else //removing items Console.WriteLine("-> {0} at {1}", e.Action, e.OldStartingIndex); } As visible, collection notifies all the adding operations, giving the ability to catch the new message. The Insert method signals an Add operation; although with the Insert method, we can specify the index and the value will be available within collection. Obviously, the parameter containing the index value (e.NewStartingIndex) contains the new index accordingly to the right operation. Differently, the Remove operation, although notifying the removed element index, cannot give us the ability to read the original message before the removal, because the event triggers after the remove operation has already occurred. In a real-world reactive application, the most interesting operation against ObservableCollection is the Add operation. Here's an example (console observer omitted for better readability): class Program { static void Main(string[] args) { //the observable collection var collection = new ObservableCollection<string>(); using (var observable = new NotifiableCollectionObservable(collection)) using (var observer = observable.Subscribe(new ConsoleStringObserver())) { collection.Add("ciao"); collection.Add("hahahah"); collection.Insert(0, "new first line"); collection.RemoveAt(0); Console.WriteLine("Press RETURN to EXIT"); Console.ReadLine(); } } public sealed class NotifiableCollectionObservable : IObservable<string>, IDisposable { private readonly ObservableCollection<string> collection; public NotifiableCollectionObservable(ObservableCollection<string> collection) { this.collection = collection; this.collection.CollectionChanged += collection_CollectionChanged; } private readonly List<IObserver<string>> observerList = new List<IObserver<string>>(); public IDisposable Subscribe(IObserver<string> observer) { observerList.Add(observer); //subscription lifecycle missing //for readability purpose return null; } public void Dispose() { this.collection.CollectionChanged -= collection_CollectionChanged; foreach (var observer in observerList) observer.OnCompleted(); } } } The result is the same as the previous example about ObservableCollection without the reactive objects. The only difference is that observable routes only messages when the Action values add. The ObservableCollection signaling its content changes Infinite collections Our last example is regarding sourcing events from an infinite collection method. In C#, it is possible to implement the enumerator pattern by signaling each object to enumerate per time, thanks to the yield keyword. 
Here's an example: static void Main(string[] args) { foreach (var value in EnumerateValuesFromSomewhere()) Console.WriteLine(value); } static IEnumerable<string> EnumerateValuesFromSomewhere() { var random = new Random(DateTime.Now.GetHashCode()); while (true) //forever { //returns a random integer number as string yield return random.Next().ToString(); //some throttling time Thread.Sleep(100); } } This implementation is powerful because it never materializes all the values into the memory. It simply signals that a new object is available to the enumerator that the foreach structure internally uses by itself. The result is writing forever numbers onto the output console. Somehow, this behavior is useful for reactive usage, because it never creates a useless state like a temporary array, list, or generic collection. It simply signals new items available to the enumerable. Here's an example: public sealed class EnumerableObservable : IObservable<string>, IDisposable { private readonly IEnumerable<string> enumerable; public EnumerableObservable(IEnumerable<string> enumerable) { this.enumerable = enumerable; this.cancellationSource = new CancellationTokenSource(); this.cancellationToken = cancellationSource.Token; this.workerTask = Task.Factory.StartNew(() => { foreach (var value in this.enumerable) { //if task cancellation triggers, raise the proper exception //to stop task execution cancellationToken.ThrowIfCancellationRequested(); foreach (var observer in observerList) observer.OnNext(value); } }, this.cancellationToken); } //the cancellation token source for starting stopping //inner observable working thread private readonly CancellationTokenSource cancellationSource; //the cancellation flag private readonly CancellationToken cancellationToken; //the running task that runs the inner running thread private readonly Task workerTask; //the observer list private readonly List<IObserver<string>> observerList = new List<IObserver<string>>(); public IDisposable Subscribe(IObserver<string> observer) { observerList.Add(observer); //subscription lifecycle missing //for readability purpose return null; } public void Dispose() { //trigger task cancellation //and wait for acknoledge if (!cancellationSource.IsCancellationRequested) { cancellationSource.Cancel(); while (!workerTask.IsCanceled) Thread.Sleep(100); } cancellationSource.Dispose(); workerTask.Dispose(); foreach (var observer in observerList) observer.OnCompleted(); } } This is the code of the program startup with the infinite enumerable generation: class Program { static void Main(string[] args) { //we create a variable containing the enumerable //this does not trigger item retrieval //so the enumerator does not begin flowing datas var enumerable = EnumerateValuesFromSomewhere(); using (var observable = new EnumerableObservable(enumerable)) using (var observer = observable.Subscribe(new ConsoleStringObserver())) { //wait for 2 seconds than exit Thread.Sleep(2000); } Console.WriteLine("Press RETURN to EXIT"); Console.ReadLine(); } static IEnumerable<string> EnumerateValuesFromSomewhere() { var random = new Random(DateTime.Now.GetHashCode()); while (true) //forever { //returns a random integer number as string yield return random.Next().ToString(); //some throttling time Thread.Sleep(100); } } } As against the last examples, here we have the usage of the Task class. 
The observable uses the enumerable within the asynchronous Task method to give the programmer the ability to stop the execution of the whole operation by simply exiting the using scope or by manually invoking the Dispose method. This example shows a tremendously powerful feature: the ability to yield values without having to source them from a concrete (finite) array or collection, simply by implementing the enumerator pattern. Although seldom used, the yield operator gives you the ability to create complex applications simply by pushing messages between methods. The more methods we create that send messages to each other, the more complex the business logic the application can handle. Consider the ability to catch all such messages with observables, and you start to get an idea of how powerful reactive programming can be for a developer.

Summary

In this article, we had the opportunity to test the main features that any reactive application must implement: message sending, error sending, and completion acknowledgement. We focused on plain C# programming to give a first overview of how classic reactive designs can be applied to all the main application needs, such as sourcing from streams, from user input, and from changeable and infinite collections.

Resources for Article:

Further resources on this subject: Basic Website using Node.js and MySQL database [article] Domain-Driven Design [article] Data Science with R [article]

Responsive Applications with Asynchronous Programming

Packt
13 Jul 2016
9 min read
In this article by Dirk Strauss, author of the book C# Programming Cookbook, he sheds some light on how to handle events, exceptions and tasks in asynchronous programming, making your application responsive. (For more resources related to this topic, see here.) Handling tasks in asynchronous programming Task-Based Asynchronous Pattern (TAP) is now the recommended method to create asynchronous code. It executes asynchronously on a thread from the thread pool and does not execute synchronously on the main thread of your application. It allows us to check the task's state by calling the Status property. Getting ready We will create a task to read a very large text file. This will be accomplished using an asynchronous Task. How to do it… Create a large text file (we called ours taskFile.txt) and place it in your C:temp folder: In the AsyncDemo class, create a method called ReadBigFile() that returns a Task<TResult> type, which will be used to return an integer of bytes read from our big text file: public Task<int> ReadBigFile() { } Add the following code to open and read the file bytes. You will see that we are using the ReadAsync() method that asynchronously reads a sequence of bytes from the stream and advances the position in that stream by the number of bytes read from that stream. You will also notice that we are using a buffer to read those bytes. public Task<int> ReadBigFile() { var bigFile = File.OpenRead(@"C:temptaskFile.txt"); var bigFileBuffer = new byte[bigFile.Length]; var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, " (int)bigFile.Length); return readBytes; } Exceptions you can expect to handle from the ReadAsync() method are ArgumentNullException, ArgumentOutOfRangeException, ArgumentException, NotSupportedException, ObjectDisposedException and InvalidOperatorException. Finally, add the final section of code just after the var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length); line that uses a lambda expression to specify the work that the task needs to perform. In this case, it is to read the bytes in the file: public Task<int> ReadBigFile() { var bigFile = File.OpenRead(@"C:temptaskFile.txt"); var bigFileBuffer = new byte[bigFile.Length]; var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length); readBytes.ContinueWith(task => { if (task.Status == TaskStatus.Running) Console.WriteLine("Running"); else if (task.Status == TaskStatus.RanToCompletion) Console.WriteLine("RanToCompletion"); else if (task.Status == TaskStatus.Faulted) Console.WriteLine("Faulted"); bigFile.Dispose(); }); return readBytes; } If not done so in the previous section, add a button to your Windows Forms application's Form designer. On the winformAsync form designer, open Toolbox and select the Button control, which is found under the All Windows Forms node: Drag the button control onto the Form1 designer: With the button control selected, double-click the control to create the click event in the code behind. Visual Studio will insert the event code for you: namespace winformAsync { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { } } } Change the button1_Click event and add the async keyword to the click event. This is an example of a void returning asynchronous method: private async void button1_Click(object sender, EventArgs e) { } Now, make sure that you add code to call the AsyncDemo class's ReadBigFile() method asynchronously. 
Remember to read the result from the method (which are the bytes read) into an integer variable: private async void button1_Click(object sender, EventArgs e) { Console.WriteLine("Start file read"); Chapter6.AsyncDemo oAsync = new Chapter6.AsyncDemo(); int readResult = await oAsync.ReadBigFile(); Console.WriteLine("Bytes read = " + readResult); } Running your application will display the Windows Forms application: Before clicking on the button1 button, ensure that the Output window is visible: From the View menu, click on the Output menu item or type Ctrl + Alt + to display the Output window. This will allow us to see the Console.Writeline() outputs as we have added them to the code in the Chapter6 class and in the Windows application. Clicking on the button1 button will display the outputs to our Output window. Throughout this code execution, the form remains responsive. Take note though that the information displayed in your Output window will differ from the screenshot. This is because the file you used is different from mine. How it works… The task is executed on a separate thread from the thread pool. This allows the application to remain responsive while the large file is being processed. Tasks can be used in multiple ways to improve your code. This recipe is but one example. Exception handling in asynchronous programming Exception handling in asynchronous programming has always been a challenge. This was especially true in the catch blocks. As of C# 6, you are now allowed to write asynchronous code inside the catch and finally block of your exception handlers. Getting ready The application will simulate the action of reading a logfile. Assume that a third-party system always makes a backup of the logfile before processing it in another application. While this processing is happening, the logfile is deleted and recreated. Our application, however, needs to read this logfile on a periodic basis. We, therefore, need to be prepared for the case where the file does not exist in the location we expect it in. Therefore, we will purposely omit the main logfile, so that we can force an error. How to do it… Create a text file and two folders to contain the logfiles. We will, however, only create a single logfile in the BackupLog folder. 
The MainLog folder will remain empty: In our AsyncDemo class, write a method to read the main logfile in the MainLog folder: private async Task<int> ReadMainLog() { var bigFile = File.OpenRead(@"C:\temp\Log\MainLog\taskFile.txt"); var bigFileBuffer = new byte[bigFile.Length]; var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length); await readBytes.ContinueWith(task => { if (task.Status == TaskStatus.RanToCompletion) Console.WriteLine("Main Log RanToCompletion"); else if (task.Status == TaskStatus.Faulted) Console.WriteLine("Main Log Faulted"); bigFile.Dispose(); }); return await readBytes; } Create a second method to read the backup file in the BackupLog folder: private async Task<int> ReadBackupLog() { var bigFile = File.OpenRead(@"C:\temp\Log\BackupLog\taskFile.txt"); var bigFileBuffer = new byte[bigFile.Length]; var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length); await readBytes.ContinueWith(task => { if (task.Status == TaskStatus.RanToCompletion) Console.WriteLine("Backup Log RanToCompletion"); else if (task.Status == TaskStatus.Faulted) Console.WriteLine("Backup Log Faulted"); bigFile.Dispose(); }); return await readBytes; } In actual fact, we would probably only create a single method to read the logfiles, passing only the path as a parameter. In a production application, creating a class and overriding a method to read the different logfile locations would be a better approach. For the purposes of this recipe, however, we specifically wanted to create two separate methods so that the different calls to the asynchronous methods are clearly visible in the code. We will then create a main ReadLogFile() method that tries to read the main logfile. As we have not created the logfile in the MainLog folder, the code will throw a FileNotFoundException. It will then run the asynchronous method and await it in the catch block of the ReadLogFile() method (something which was impossible in the previous versions of C#), returning the bytes read to the calling code: public async Task<int> ReadLogFile() { int returnBytes = -1; try { returnBytes = await ReadMainLog(); } catch (Exception ex) { try { returnBytes = await ReadBackupLog(); } catch (Exception) { throw; } } return returnBytes; } If you have not done so in the previous recipe, add a button to your Windows Forms application's Form designer. On the winformAsync form designer, open Toolbox and select the Button control, which is found under the All Windows Forms node: Drag the button control onto the Form1 designer: With the button control selected, double-click on the control to create the click event in the code behind. Visual Studio will insert the event code for you: namespace winformAsync { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { } } } Change the button1_Click event and add the async keyword to the click event. This is an example of a void-returning asynchronous method: private async void button1_Click(object sender, EventArgs e) { } Next, we will write the code to create a new instance of the AsyncDemo class and attempt to read the main logfile. 
In a real-world example, it is at this point that the code does not know that the main logfile does not exist: private async void button1_Click(object sender, EventArgs e) { Console.WriteLine("Read backup file"); Chapter6.AsyncDemo oAsync = new Chapter6.AsyncDemo(); int readResult = await oAsync.ReadLogFile(); Console.WriteLine("Bytes read = " + readResult); } Running your application will display the Windows Forms application: Before clicking on the button1 button, ensure that the Output window is visible: From the View menu, click on the Output menu item or type Ctrl + Alt + O to display the Output window. This will allow us to see the Console.WriteLine() outputs as we have added them to the code in the Chapter6 class and in the Windows application. To simulate a file not found exception, we deleted the file from the MainLog folder. You will see that the exception is thrown, and the catch block runs the code to read the backup logfile instead: How it works… The fact that we can await in catch and finally blocks allows developers much more flexibility because asynchronous results can consistently be awaited throughout the application. As you can see from the code we wrote, as soon as the exception was thrown, we asynchronously called the method that reads the backup file. Summary In this article, we looked at how TAP is now the recommended method to create asynchronous code and how tasks can be used in multiple ways to improve your code, allowing the application to remain responsive while a large file is being processed. We also saw how exception handling in asynchronous programming has always been a challenge, and how the catch and finally blocks can now be used to handle exceptions asynchronously. Resources for Article: Further resources on this subject: Functional Programming in C#[article] Creating a sample C#.NET application[article]

Spatial Analysis

Packt
07 Jul 2016
21 min read
In this article by Ron Vincent author of the book Learning ArcGIS Runtime SDK for .NET, we're going to learn about spatial analysis with ArcGIS Runtime. As with other parts of ArcGIS Runtime, we really need to understand how spatial analysis is set up and executed with ArcGIS Desktop/Pro and ArcGIS Server. As a result, we will first learn about spatial analysis within the context of geoprocessing. Geoprocessing is the workhorse of doing spatial analysis with Esri's technology. Geoprocessing is very similar to how you write code; in that you specify some input data, do some work on that input data, and then produce the desired output. The big difference is that you use tools that come with ArcGIS Desktop or Pro. In this article, we're going to learn how to use these tools, and how to specify their input, output, and other parameters from an ArcGIS Runtime app that goes well beyond what's available in the GeometryEngine tool. In summary, we're going to cover the following topics: Introduction to spatial analysis Introduction to geoprocessing Preparing for geoprocessing Using geoprocessing in runtime Online geoprocessing (For more resources related to this topic, see here.) Introducing spatial analysis Spatial analysis is a broad term that can mean many different things, depending on the kind of study to be undertaken, the tools to be used, and the methods of performing the analysis, and is even subject to the dynamics of the individuals involved in the analysis. In this section, we will look broadly at the kinds of analysis that are possible, so that you have some context as to what is possible with the ArcGIS platform. Spatial analysis can be divided into these five broad categories: Point patterns Surface analysis Areal data Interactivity Networks Point pattern analysis is the evaluation of the pattern or distribution of points in space. With ArcGIS, you can analyze point data using average nearest neighbor, central feature, mean center, and so on. For surface analysis, you can create surface models, and then analyze them using tools such as LOS, slope surfaces, viewsheds, and contours. With areal data (polygons), you can perform hotspot analysis, spatial autocorrelation, grouping analysis, and so on. When it comes to modeling interactivity, you can use tools in ArcGIS that allow you to do gravity modeling, location-allocation, and so on. Lastly, with Esri's technology you can analyze networks, such as finding the shortest path, generating drive-time polygons, origin-destination matrices, and many other examples. ArcGIS provides the ability to perform all of these kinds of analysis using a variety of tools. For example, here the areas in green are visible from the tallest building. 
Areas in red are not visible: What is important to understand is that the ArcGIS platform has the capability to help solve problems such as these: An epidemiologist collects data on a disease, such as Chronic Obstructive Pulmonary Disease (COPD), and wants to know where it occurs and whether there are any statistically significant clusters so that a mitigation plan can be developed A mining geologist wants to obtain samples of a precious mineral so that he/she can estimate the overall concentration of the mineral A military analyst or soldier wants to know where they can be located in the battlefield and not be seen A crime analyst wants to know where crimes are concentrated so that they can increase police presence as a deterrent A research scientist wants to develop a model to predict the path of a fire There are many more examples. With ArcGIS Desktop and Pro, along with the correct extension, questions can be posed and answered using a variety of techniques. However, it's important to understand that ArcGIS Runtime may or may not be a good fit and may or may not support certain tools. In many cases, spatial analysis would be best performed with ArcGIS Desktop or Pro. For example, if you plan to conduct hotspot analysis on patients or crime, Desktop or Pro is best suited because it's typically something you do once. On the other hand, if you plan to allow users to repeat this process again and again with different data, and you need high performance, building a tool with ArcGIS Runtime will be the perfect solution, especially if they need to run the tool in the field. It should also be noted that, in some cases, the ArcGIS JavaScript API will also be better suited. Introducing geoprocessing If you open up the Geoprocessing toolbox in ArcGIS Desktop or Pro, you will find dozens of tools categorized in the following manner: With these tools, you can build sophisticated models by using ModelBuilder or Python, and then publish them to ArcGIS Server. For example, to perform a buffer like the one provided by the GeometryEngine tool, you would drag the Buffer tool onto the ModelBuilder canvas, as shown here, and specify its inputs and outputs: This model specifies an input (US cities), performs an operation (Buffer the cities), and then produces an output (Buffered cities). Conceptually, this is programming, except that the algorithm is built graphically instead of with code. You may be asking: Why would you use this tool in ArcGIS Desktop or Pro? Good question. Well, ArcGIS Runtime only comes with a few selected tools in GeometryEngine. These tools, such as the buffer method in GeometryEngine, are so common that Esri decided to include them with ArcGIS Runtime so that these kinds of operations could be performed on the client without having to call the server. On the other hand, in order to keep the core of ArcGIS Runtime lightweight, Esri wanted to provide these tools and many more, but make them available as tools that you need to call on when required for special or advanced analysis. As a result, if your app needs basic operations, GeometryEngine may provide what you need. On the other hand, if you need to perform more sophisticated operations, you will need to build the model with Desktop or Pro, publish it to Server, and then consume the resulting service with ArcGIS Runtime. The rest of this article will show you how to consume a geoprocessing model using this pattern. 
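Before moving on, here is a minimal, hypothetical sketch of the client-side alternative mentioned above, so the two approaches are easy to compare. It assumes a page with a MapView field named mapView and a GraphicsLayer named bufferGraphicsLayer (names invented for this illustration, not taken from the sample app), and it uses the GeometryEngine.Buffer method from the Esri.ArcGISRuntime.Geometry namespace:

// Assumes: using Esri.ArcGISRuntime.Geometry; and using Esri.ArcGISRuntime.Layers;
public async void CreateClientSideBuffer()
{
    // Ask the user for a point, just as the geoprocessing samples later on do.
    var mapPoint = await this.mapView.Editor.RequestPointAsync();

    // Buffer it locally; the distance is expressed in the units of the
    // map's spatial reference (meters for Web Mercator).
    var buffered = GeometryEngine.Buffer(mapPoint, 1000);

    // Draw the result. No request was sent to ArcGIS Server.
    this.bufferGraphicsLayer.Graphics.Add(new Graphic { Geometry = buffered });
}

Anything beyond simple operations like this buffer is where the geoprocessing pattern described in the rest of the article comes in.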
Preparing for geoprocessing To perform geoprocessing, you will need to create a model with ModelBuilder and/or Python. For more details on how to create models using ModelBuilder, navigate to http://pro.arcgis.com/en/pro-app/help/analysis/geoprocessing/modelbuilder/what-is-modelbuilder-.htm. To build a model with Python, navigate to http://pro.arcgis.com/en/pro-app/help/analysis/geoprocessing/basics/python-and-geoprocessing.htm. Once you've created a model with ModelBuilder or Python, you will then need to run the tool to ensure that it works and to make it so that it can be published as a geoprocessing service for online use, or as a geoprocessing package for offline use. See here for publishing a service: http://server.arcgis.com/en/server/latest/publish-services/windows/a-quick-tour-of-publishing-a-geoprocessing-service.htm If you plan to use geoprocessing offline, you'll need to publish a geoprocessing package (*.gpk) file. You can learn more about these at https://desktop.arcgis.com/en/desktop/latest/analyze/sharing-workflows/a-quick-tour-of-geoprocessing-packages.htm. Once you have a geoprocessing service or package, you can now consume it with ArcGIS Runtime. In the sections that follow, we will use classes from Esri.ArcGISRuntime.Tasks.Geoprocessing that allow us to consume these geoprocessing services or packages. Online geoprocessing with ArcGIS Runtime Once you have created a geoprocessing model, you will want to access it from ArcGIS Runtime. In this section, we're going to do surface analysis from an online service that Esri has published. To accomplish this, you will need to access the REST endpoint by typing in the following URL: http://sampleserver6.arcgisonline.com/arcgis/rest/services/Elevation/ESRI_Elevation_World/GPServer When you open this page, you'll notice the description and that it has a list of Tasks: A task is a REST child resource of a geoprocessing service. A geoprocessing service can have one or more tasks associated with it. A task requires a set of inputs in the form of parameters. Once the task completes, it will produce some output that you will then use in your app. The output could be a map service, a single value, or even a report. This particular service only has one task associated with it and it is called Viewshed. If you click on the task called Viewshed, you'll be taken to this page: http://sampleserver6.arcgisonline.com/arcgis/rest/services/Elevation/ESRI_Elevation_World/GPServer/Viewshed. This service will produce a viewshed of where the user clicks that looks something like this: The user clicks on the map (X) and the geoprocessing task produces a viewshed, which shows all the areas on the surface that are visible to an observer, as if they were standing on the surface. Once you click on the task, you'll note the concepts marked in the following screenshot: As you can see, beside the red arrows, the geoprocessing service lets you know what is required for it to operate, so let's go over each of these: First, the service lets you know that it is a synchronous geoprocessing service. A synchronous geoprocessing task will run synchronously until it has completed, and block the calling thread. An asynchronous geoprocessing task will run asynchronously, but it won't block the calling thread. The next pieces of information you'll need to provide to the task are the parameters. In the preceding example, the task requires Input_Observation_Point. 
You will need to provide this exact name when providing the parameter later on, when we write the code to pass in this parameter. Also, note that the Direction value is esriGPParameterDirectionInput. This tells you that the task expects that Input_Observation_Point is an input to the model. Lastly, note that the Parameter Type value is Required. In other words, you must provide the task with this parameter in order for it to run. It's also worth noting that Default Value is an esriGeometryPoint type, which in ArcGIS Runtime is MapPoint. The Spatial Reference value of the point is 540003. If you investigate the remaining required parameters, you'll note that they require a Viewshed_Distance parameter. Now, refer to the following screenshot. If you don't specify a value, it will use Default Value of 15,000 meters. Lastly, this task will output a Viewshed_Result parameter, which is esriGeometryPolygon. Using this polygon, we can then render to the map or scene. Geoprocessing synchronously Now that you've seen an online service, let's look at how we call this service using ArcGIS Runtime. To execute the preceding viewshed task, we first need to create an instance of the geoprocessor object. The geoprocessor object requires a URL down to the task level in the REST endpoint, like this: private const string viewshedServiceUrl = "http://sampleserver6.arcgisonline.com/arcgis/rest/services/ Elevation/ESRI_Elevation_World/GPServer/Viewshed"; private Geoprocessor gpTask; Note that we've attached /Viewshed on the end of the original URL so that we can pass in the completed path to the task. Next, you will then instantiate the geoprocessor in your app, using the URL to the task: gpTask = new Geoprocessor(new Uri(viewshedServiceUrl)); Once we have created the geoprocessor, we can then prompt the user to click somewhere on the map. Let's look at some code: public async void CreateViewshed() { // // get a point from the user var mapPoint = await this.mapView.Editor.RequestPointAsync(); // clear the graphics layers this.viewshedGraphicsLayer.Graphics.Clear(); this.inputGraphicsLayer.Graphics.Clear(); // add new graphic to layer this.inputGraphicsLayer.Graphics.Add(new Graphic{ Geometry = mapPoint, Symbol = this.sms }); // specify the input parameters var parameter = new GPInputParameter() { OutSpatialReference = SpatialReferences.WebMercator }; parameter.GPParameters.Add(new GPFeatureRecordSetLayer("Input_Observation_Point", mapPoint)); parameter.GPParameters.Add(new GPLinearUnit("Viewshed_Distance", LinearUnits.Miles, this.distance)); // Send to the server this.Status = "Processing on server..."; var result = await gpTask.ExecuteAsync(parameter); if (result == null || result.OutParameters == null || !(result.OutParameters[0] is GPFeatureRecordSetLayer)) throw new ApplicationException("No viewshed graphics returned for this start point."); // process the output this.Status = "Finished processing. Retrieving results..."; var viewshedLayer = result.OutParameters[0] as GPFeatureRecordSetLayer; var features = viewshedLayer.FeatureSet.Features; foreach (Feature feature in features) { this.viewshedGraphicsLayer.Graphics.Add(feature as Graphic); } this.Status = "Finished!!"; } The first thing we do is have the user click on the map and return MapPoint. We then clear a couple of GraphicsLayers that hold the input graphic and viewshed graphics, so that the map is cleared every time they run this code. Next, we create a graphic using the location where the user clicked. Now comes the interesting part of this. 
We need to provide the input parameters for the task and we do that with GPInputParameter. When we instantiate GPInputParameter, we also need to specify the output spatial reference so that the data is rendered in the spatial reference of the map. In this example, we're using the map's spatial reference. Then, we add the input parameters. Note that we've spelled them exactly as the task required them. If we don't, the task won't work. We also learned earlier that this task requires a distance, so we use GPLinearUnit in Miles. The GPLinearUnit class lets the geoprocessor know what kinds of unit to accept. After the input parameters are set up, we then call ExecuteAsync. We are calling this method because this is a synchronous geoprocessing task. Even though this method has Async on the end of it, this applies to .NET, not ArcGIS Server. The alternative to ExecuteAsync is SubmitJob, which we will discuss shortly. After some time, the result comes back and we grab the results using result.OutParameters[0]. This contains the output from the geoprocessing task and we want to use that to then render the output to the map. Thankfully, it returns a read-only set of polygons, which we can then add to GraphicsLayer. If you don't know which parameter to use, you'll need to look it up on the task's page. In the preceding example, the parameter was called Viewshed_Distance and the Data Type value was GPLinearUnit. ArcGIS Runtime comes with a variety of data types to match the corresponding data type on the server. The other supported types are GPBoolean, GPDataFile, GPDate, GPDouble, GPItemID, GPLinearUnit, GPLong, GPMultiValue<T>, GPRasterData, GPRecordSet, and GPString. Instead of manually inspecting a task as we did earlier, you can also use Geoprocessor.GetTaskInfoAsync to discover all of the parameters. This is a useful object if you want to provide your users with the ability to specify any geoprocessing task dynamically while the app is running. For example, if your app requires that users are able to enter any geoprocessing task, you'll need to inspect that task, obtain the parameters, and then respond dynamically to the entered geoprocessing task. Geoprocessing asynchronously So far we've called a geoprocessing task synchronously. In this section, we'll cover how to call a geoprocessing task asynchronously. There are two differences when calling a geoprocessing task asynchronously: You will run the task by executing a method called SubmitJobAsync instead of ExecuteAsync. The SubmitJobAsync method is ideal for long-running tasks, such as performing data processing on the server. The major advantage of SubmitJobAsync is that users can continue working while the task works in the background. When the task is completed, the results will be presented. You will need to check the status of the task with GPJobStatus so that users can get a sense of whether the task is working as expected. To do this, check GPJobStatus periodically and it will return GPJobStatus. The GPJobStatus enumeration has the following values: New, Submitted, Waiting, Executing, Succeeded, Failed, TimedOut, Cancelling, Cancelled, Deleting, or Deleted. With these enumerations, you can poll the server and return the status using CheckJobStatusAsync on the task and present that to the user while they wait for the geoprocessor. 
Let's take a look at this process in the following diagram: As you can see in the preceding diagram, the input parameters are specified as we did earlier with the synchronous task, the Geoprocessor object is set up, and then SubmitJobAsync is called with the parameters (GOInputParameter). Once the task begins, we then have to check the status of it using the results from SubmitJobAsync. We then use CheckJobStatusAsync on the task to return the status enumeration. If it indicates Succeeded, we then do something with the results. If not, we continue to check the status using any time period we specify. Let's try this out using an example service from Esri that allows for areal analysis. Go to the following REST endpoint: http://serverapps10.esri.com/ArcGIS/rest/services/SamplesNET/USA_Data_ClipTools/GPServer/ClipCounties. In the service, you will note that it's called ClipCounties. This is a rather contrived example, but it shows how to do server-side data processing. It requires two parameters called Input_Features and Linear_unit. It outputs output_zip and Clipped _Counties. Basically, this task allows you to drag a line on the map; it will then buffer it and clip out the counties in the U.S. and show them on the map, like so: We are interested in two methods in this sample app. Let's take a look at them: public async void Clip() { //get the user's input line var inputLine = await this.mapView.Editor.RequestShapeAsync( DrawShape.Polyline) as Polyline; // clear the graphics layers this.resultGraphicsLayer.Graphics.Clear(); this.inputGraphicsLayer.Graphics.Clear(); // add new graphic to layer this.inputGraphicsLayer.Graphics.Add( new Graphic { Geometry = inputLine, Symbol = this.simpleInputLineSymbol }); // add the parameters var parameter = new GPInputParameter(); parameter.GPParameters.Add( new GPFeatureRecordSetLayer("Input_Features", inputLine)); parameter.GPParameters.Add(new GPLinearUnit( "Linear_unit", LinearUnits.Miles, this.Distance)); // poll the task var result = await SubmitAndPollStatusAsync(parameter); // add successful results to the map if (result.JobStatus == GPJobStatus.Succeeded) { this.Status = "Finished processing. Retrieving results..."; var resultData = await gpTask.GetResultDataAsync(result.JobID, "Clipped_Counties"); if (resultData is GPFeatureRecordSetLayer) { GPFeatureRecordSetLayer gpLayer = resultData as GPFeatureRecordSetLayer; if (gpLayer.FeatureSet.Features.Count == 0) { // the the map service results var resultImageLayer = await gpTask.GetResultImageLayerAsync( result.JobID, "Clipped_Counties"); // make the result image layer opaque GPResultImageLayer gpImageLayer = resultImageLayer; gpImageLayer.Opacity = 0.5; this.mapView.Map.Layers.Add(gpImageLayer); this.Status = "Greater than 500 features returned. Results drawn using map service."; return; } // get the result features and add them to the // GraphicsLayer var features = gpLayer.FeatureSet.Features; foreach (Feature feature in features) { this.resultGraphicsLayer.Graphics.Add( feature as Graphic); } } this.Status = "Success!!!"; } } This Clip method first asks the user to add a polyline to the map. It then clears the GraphicsLayer class, adds the input line to the map in red, sets up GPInputParameter with the required parameters (Input_Featurs and Linear_unit), and calls a method named SubmitAndPollStatusAsync using the input parameters. Let's take a look at that method too: // Submit GP Job and Poll the server for results every 2 seconds. 
private async Task<GPJobInfo> SubmitAndPollStatusAsync(GPInputParameter parameter) { // Submit gp service job var result = await gpTask.SubmitJobAsync(parameter); // Poll for the results async while (result.JobStatus != GPJobStatus.Cancelled && result.JobStatus != GPJobStatus.Deleted && result.JobStatus != GPJobStatus.Succeeded && result.JobStatus != GPJobStatus.TimedOut) { result = await gpTask.CheckJobStatusAsync(result.JobID); foreach (GPMessage msg in result.Messages) { this.Status = string.Join(Environment.NewLine, msg.Description); } await Task.Delay(2000); } return result; } The SubmitAndPollStatusAsync method submits the geoprocessing task and the polls it every two seconds to see if it hasn't been Cancelled, Deleted, Succeeded, or TimedOut. It calls CheckJobStatusAsync, gets the messages of type GPMessage, and adds them to the property called Status, which is a ViewModel property with the current status of the task. We then effectively check the status of the task every 2 seconds with Task.Delay(2000) and continue doing this until something happens other than the GPJobStatus enumerations we're checking for. Once SubmitAndPollStatusAsync has succeeded, we then return to the main method (Clip) and perform the following steps with the results: We obtain the results with GetResultDataAsync by passing in the results of JobID and Clipped_Counties. The Clipped_Counties instance is an output of the task, so we just need to specify the name Clipped_Counties. Using the resulting data, we first check whether it is a GPFeatureRecordSetLayer type. If it is, we then do some more processing on the results. We then do a cast just to make sure we have the right object (GPFeatureRecordsSetLayer). We then check to see if no features were returned from the task. If none were returned, we perform the following steps: We obtain the resulting image layer using GetResultImageLayerAsync. This returns a map service image of the results. We then cast this to GPResultImageLayer and set its opacity to 0.5 so that we can see through it. If the user enters in a large distance, a lot of counties are returned, so we convert the layer to a map image, and then show them the entire country so that they can see what they've done wrong. Having the result as an image is faster than displaying all of the polygons as the JSON objects. Add GPResultImageLayer to the map. If everything worked according to plan, we get only the features needed and add them to GraphicsLayer. That was a lot of work, but it's pretty awesome that we sent this off to ArcGIS Server and it did some heavy processing for us so that we could continue working with our map. The geoprocessing task took in a user-specified line, buffered it, and then clipped out the counties in the U.S. that intersected with that buffer. When you run the project, make sure you pan or zoom around while the task is running so that you can see that you can still work. You could also further enhance this code to zoom to the results when it finishes. There are some other pretty interesting capabilities that we need to discuss with this code, so let's delve a little deeper. Working with the output results Let's discuss the output of the geoprocessing results in a little more detail in this section. GPMesssage The GPMessage object is very helpful because it can be used to check the types of message that are coming back from Server. It contains different kinds of message via an enumeration called GPMessageType, which you can use to further process the message. 
GPMessageType returns an enumeration of Informative, Warning, Error, Abort, and Empty. For example, if the task failed, GPMessageType.Error will be returned and you can present a message to the user letting them know what happened and what they can do to resolve this issue. The GPMessage object also returns Description, which we used in the preceding code to display to the user as the task executed. The level of messages returned by Server dictates what messages are returned by the task. See Message Level here: If the Message Level field is set to None, no messages will be returned. When testing a geoprocessing service, it can be helpful to set the service to Info because it produces detailed messages. GPFeatureRecordSetLayer The preceding task expected an output of features, so we cast the result to GPFeatureRecordsSetLayer. The GPFeatureRecordsSetLayer object is a layer type which handles the JSON objects returned by the server, which we can then use to render on the map. GPResultMapServiceLayer When a geoprocessing service is created, you have the option of making it produce an output map service result with its own symbology. Refer to http://server.arcgis.com/en/server/latest/publish-services/windows/defining-output-symbology-for-geoprocessing-tasks.htm. You can take the results of a GPFeatureRecordsSetLayer object and access this map service using the following URL format: http://catalog-url/resultMapServiceName/MapServer/jobs/jobid Using JobID, which was produced by SubmitJobAsync, you can add the result to the map like so: ArcGISDynamicMapServiceLayer dynLayer = this.gpTask.GetResultMapServiceLayer(result.JobID); this.mapView.Map.Layers.Add(dynLayer); Summary In this article, we went over spatial analysis at a high level, and then went into the details of how to do spatial analysis with ArcGIS Runtime. We discussed how to create models with ModelBuilder and/or Python, and then went on to show how to use geoprocessing, both synchronously and asynchronously, with online and offline tasks. With this information, you now have a multitude of options for adding a wide variety of analytical tools to your apps. Resources for Article: Further resources on this subject: Building Custom Widgets [article] Learning to Create and Edit Data in ArcGIS [article] ArcGIS – Advanced ArcObjects [article]

Delphi Cookbook

Packt
07 Jul 2016
6 min read
In this article by Daniele Teti author of the book Delphi Cookbook - Second Edition we will study about Multithreading. Multithreading can be your biggest problem if you cannot handle it with care. One of the fathers of the Delphi compiler used to say: "New programmers are drawn to multithreading like moths to flame, with similar results." – Danny Thorpe (For more resources related to this topic, see here.) In this chapter, we will discuss some of the main techniques to handle single or multiple background threads. We'll talk about shared resource synchronization and thread-safe queues and events. The last three recipes will talk about the Parallel Programming Library introduced in Delphi XE7, and I hope that you will love it as much as I love it. Multithreaded programming is a huge topic. So, after reading this chapter, although you will not become a master of it, you will surely be able to approach the concept of multithreaded programming with confidence and will have the basics to jump on to more specific stuff when (and if) you require them. Talking with the main thread using a thread-safe queue Using a background thread and working with its private data is not difficult, but safely bringing information retrieved or elaborated by the thread back to the main thread to show them to the user (as you know, only the main thread can handle the GUI in VCL as well as in FireMonkey) can be a daunting task. An even more complex task would be establishing a generic communication between two or more background threads. In this recipe, you'll see how a background thread can talk to the main thread in a safe manner using the TThreadedQueue<T> class. The same concepts are valid for a communication between two or more background threads. Getting ready Let's talk about a scenario. You have to show data generated from some sort of device or subsystem, let's say a serial, a USB device, a query polling on the database data, or a TCP socket. You cannot simply wait for data using TTimer because this would freeze your GUI during the wait, and the wait can be long. You have tried it, but your interface became sluggish… you need another solution! In the Delphi RTL, there is a very useful class called TThreadedQueue<T> that is, as the name suggests, a particular parametric queue (a FIFO data structure) that can be safely used from different threads. How to use it? In the programming field, there is mostly no single solution valid for all situations, but the following one is very popular. Feel free to change your approach if necessary. However, this is the approach used in the recipe code: Create the queue within the main form. Create a thread and inject the form queue to it. In the thread Execute method, append all generated data to the queue. In the main form, use a timer or some other mechanism to periodically read from the queue and display data on the form. How to do it… Open the recipe project called ThreadingQueueSample.dproj. This project contains the main form with all the GUI-related code and another unit with the thread code. The FormCreate event creates the shared queue with the following parameters that will influence the behavior of the queue: QueueDepth = 100: This is the maximum queue size. If the queue reaches this limit, all the push operations will be blocked for a maximum of PushTimeout, then the Push call will fail with a timeout. PushTimeout = 1000: This is the timeout in milliseconds that will affect the thread, that in this recipe is the producer of a producer/consumer pattern. 
PopTimeout = 1: This is the timeout in milliseconds that will affect the timer when the queue is empty. This timeout must be very short because the pop call is blocking in nature, and you are in the main thread that should never be blocked for a long time. The button labeled Start Thread creates a TReaderThread instance passing the already created queue to its constructor (this is a particular type of dependency injection called constructor injection). The thread declaration is really simple and is as follows: type TReaderThread = class(TThread) private FQueue: TThreadedQueue<Byte>; protected procedure Execute; override; public constructor Create(AQueue: TThreadedQueue<Byte>); end; While the Execute method simply appends randomly generated data to the queue, note that the Terminated property must be checked often so the application can terminate the thread and wait a reasonable time for its actual termination. In the following example, if the queue is not empty, check the termination at least every 700 msec ca: procedure TReaderThread.Execute; begin while not Terminated do begin TThread.Sleep(200 + Trunc(Random(500))); // e.g. reading from an actual device FQueue.PushItem(Random(256)); end; end; So far, you've filled the queue. Now, you have to read from the queue and do something useful with the read data. This is the job of a timer. The following is the code of the timer event on the main form: procedure TMainForm.Timer1Timer(Sender: TObject); var Value: Byte; begin while FQueue.PopItem(Value) = TWaitResult.wrSignaled do begin ListBox1.Items.Add(Format('[%3.3d]', [Value])); end; ListBox1.ItemIndex := ListBox1.Count - 1; end; That's it! Run the application and see how we are reading the data coming from the threads and showing the main form. The following is a screenshot: The main form showing data generated by the background thread There's more… The TThreadedQueue<T> is very powerful and can be used to communicate between two or more background threads in a consumer/producer schema as well. You can use multiple producers, multiple consumers, or both. The following screenshot shows a popular schema used when the speed at which the data generated is faster than the speed at which the same data is handled. In this case, usually you can gain speed on the processing side using multiple consumers. Single producer, multiple consumers Summary In this article we had a look at how to talk to the main thread using a thread-safe queue. Resources for Article: Further resources on this subject: Exploring the Usages of Delphi[article] Adding Graphics to the Map[article] Application Performance[article]

Functional Programming in C#

Packt
07 Jul 2016
4 min read
In this article, we are going to explore the following topics: An introduction to the functional programming concept A comparison between the functional and imperative approaches (For more resources related to this topic, see here.) Introduction to functional programming In functional programming, we use a mathematical approach to construct our code. A function in the code is similar to the mathematical functions we use on a daily basis. The variables in the code function represent the values of the function parameters, much as they do in a mathematical function. The idea is that a programmer defines functions, which contain expressions, definitions, and parameters that can be expressed by variables, in order to solve problems. After a programmer builds a function and sends it to the computer, it's the computer's turn to do its job. In general, the role of the computer is to evaluate the expression in the function and return the result. We can imagine that the computer acts like a calculator, since it will analyse the expression in the function and yield the result to the user in printed format. Suppose we have the expression 3 + 5 inside a function. The computer will return 8 as the result as soon as it has completely evaluated it. However, this is just a trivial example of how a computer evaluates an expression. In fact, a programmer can increase the power of the computer by placing more complex definitions and expressions inside the function. Not only can the computer evaluate trivial expressions, it can also evaluate complex calculations and expressions. Comparison to imperative programming The main difference between functional and imperative programming is the existence of side effects. In functional programming, since it applies the pure function concept, side effects are avoided. This differs from imperative programming, which has to access I/O and modify state outside the function, producing side effects. In addition, with an imperative approach the programmer focuses on the way of performing the task and on tracking changes in state, while with a functional approach the programmer focuses on the kind of information that is desired and the kind of transformation that is required. Changes of state are important in imperative programming, while no changes of state exist in functional programming. The order of execution is also important in imperative programming, but not really important in functional programming, since we are more concerned with constructing the problem as a set of functions to be executed rather than with the detailed steps of the flow. We will continue our discussion of the functional and imperative approaches by creating some code in the next topics. Summary We have become acquainted with the functional approach so far by discussing the introduction to functional programming. We have also compared the functional approach with the mathematical concept of a function. It's now clear that the functional approach uses a mathematical approach to compose functional programs. The comparison between functional and imperative programming has also given us the important points that distinguish the two. It's now clear that in functional programming the programmer focuses on the kind of desired information and the kind of required transformation, while in the imperative approach the programmer focuses on the way of performing the task and tracking changes in state. 
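To make the contrast described above concrete, here is a small sketch (not taken from the article) that computes the squares of the even numbers in a sequence, first in the imperative style and then in the functional style using LINQ:

using System;
using System.Collections.Generic;
using System.Linq;

class StyleComparison
{
    static void Main()
    {
        var numbers = new[] { 1, 2, 3, 4, 5, 6 };

        // Imperative: describe how to perform the task step by step,
        // tracking and mutating state (the growing list) as we go.
        var evenSquares = new List<int>();
        foreach (var n in numbers)
        {
            if (n % 2 == 0)
            {
                evenSquares.Add(n * n);
            }
        }

        // Functional: describe what we want (a transformation of the input)
        // as an expression built from side-effect-free functions.
        var evenSquaresFunctional = numbers.Where(n => n % 2 == 0)
                                           .Select(n => n * n)
                                           .ToList();

        Console.WriteLine(string.Join(", ", evenSquares));           // 4, 16, 36
        Console.WriteLine(string.Join(", ", evenSquaresFunctional)); // 4, 16, 36
    }
}

Both fragments produce the same output; the difference lies in whether the code spells out the steps and the state changes, or simply states the desired transformation.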
For more information on C#, visit the following books: C# 5 First Look (https://www.packtpub.com/application-development/c-5-first-look) C# Multithreaded and Parallel Programming (https://www.packtpub.com/application-development/c-multithreaded-and-parallel-programming) C# 6 and .NET Core 1.0: Modern Cross-Platform Development (https://www.packtpub.com/application-development/c-6-and-net-core-10) Resources for Article: Further resources on this subject: Introduction to Object-Oriented Programming using Python, JavaScript, and C#[article] C# Language Support for Asynchrony[article] C# with NGUI[article]

A capability model for microservices

Packt
17 Jun 2016
19 min read
In this article by Rajesh RV, the author of Spring Microservices, you will learn aboutthe concepts of microservices. More than sticking to definitions, it is better to understand microservices by examining some common characteristics of microservices that are seen across many successful microservices implementations. Spring Boot is an ideal framework to implement microservices. In this article, we will examine how to implement microservices using Spring Boot with an example use case. Beyond services, we will have to be aware of the challenges around microservices implementation. This article will also talk about some of the common challenges around microservices. A successful microservices implementation has to have some set of common capabilities. In this article, we will establish a microservices capability model that can be used in a technology-neutral framework to implement large-scale microservices. What are microservices? Microservices is an architecture style used by many organizations today as a game changer to achieve a high degree of agility, speed of delivery, and scale. Microservices give us a way to develop more physically separated modular applications. Microservices are not invented. Many organizations, such as Netflix, Amazon, and eBay, successfully used the divide-and-conquer technique to functionally partition their monolithic applications into smaller atomic units, and each performs a single function. These organizations solved a number of prevailing issues they experienced with their monolithic application. Following the success of these organizations, many other organizations started adopting this as a common pattern to refactor their monolithic applications. Later, evangelists termed this pattern microservices architecture. Microservices originated from the idea of Hexagonal Architecture coined by Alister Cockburn. Hexagonal Architecture is also known as thePorts and Adapters pattern. Microservices is an architectural style or an approach to building IT systems as aset of business capabilities that are autonomous, self-contained, and loosely coupled. The preceding diagram depicts a traditional N-tier application architecture having a presentation layer, business layer, and database layer. Modules A, B, and C represents three different business capabilities. The layers in the diagram represent a separation of architecture concerns. Each layer holds all three business capabilities pertaining to this layer. The presentation layer has the web components of all three modules, the business layer has the business components of all the three modules, and the database layer hosts tables of all the three modules. In most cases, layers are physically spreadable, whereas modules within a layer are hardwired. Let's now examine a microservices-based architecture, as follows: As we can note in the diagram, the boundaries are inverted in the microservices architecture. Each vertical slice represents a microservice. Each microservice has its own presentation layer, business layer, and database layer. Microservices are aligned toward business capabilities. By doing so, changes to one microservice do not impact others. There is no standard for communication or transport mechanisms for microservices. In general, microservices communicate with each other using widely adopted lightweight protocols such as HTTP and REST or messaging protocols such as JMS or AMQP. In specific cases, one might choose more optimized communication protocols such as Thrift, ZeroMQ, Protocol Buffers, or Avro. 
As microservices are more aligned to the business capabilities and have independently manageable lifecycles, they are the ideal choice for enterprises embarking on DevOps and cloud. DevOps and cloud are two other facets of microservices. Microservices are self-contained, independently deployable, and autonomous services that take full responsibility of a business capability and its execution. They bundle all dependencies, including library dependencies and execution environments such as web servers and containers or virtual machines that abstract physical resources. These self-contained services assume single responsibility and are well enclosed with in a bounded context. Microservices – The honeycomb analogy The honeycomb is an ideal analogy to represent the evolutionary microservices architecture. In the real world, bees build a honeycomb by aligning hexagonal wax cells. They start small, using different materials to build the cells. Construction is based on what is available at the time of building. Repetitive cells form a pattern and result in a strong fabric structure. Each cell in the honeycomb is independent but also integrated with other cells. By adding new cells, the honeycomb grows organically to a big solid structure. The content inside each cell is abstracted and is not visible outside. Damage to one cell does not damage other cells, and bees can reconstruct these cells without impacting the overall honeycomb. Characteristics of microservices The microservices definition discussed at the beginning of this article is arbitrary. Evangelists and practitioners have strong but sometimes different opinions on microservices. There is no single, concrete, and universally accepted definition for microservices. However, all successful microservices implementations exhibit a number of common characteristics. Some of these characteristics are explained as follows: Since microservices are more or less similar to a flavor of SOA, many of the service characteristics of SOA are applicable to microservices, as well. In the microservices world, services are first-class citizens. Microservices expose service endpoints as APIs and abstract all their realization details. The APIs could be synchronous or asynchronous. HTTP/REST is the popular choice for APIs. As microservices are autonomous and abstract everything behind service APIs, it is possible to have different architectures for different microservices. The internal implementation logic, architecture, and technologies, including programming language, database, quality of service mechanisms,and so on, are completely hidden behind the service API. Well-designed microservices are aligned to a single business capability, so they perform only one function. As a result, one of the common characteristics we see in most of the implementations are microservices with smaller footprints. Most of the microservices implementations are automated to the maximum extent possible, from development to production. Most large-scale microservices implementations have a supporting ecosystem in place. The ecosystem's capabilities include DevOps processes, centralized log management, service registry, API gateways, extensive monitoring, service routing and flow control mechanisms, and so on. Successful microservices implementations encapsulate logic and data within the service. This results in two unconventional situations: a distributed data and logic and decentralized governance. 
A microservice example The Customer Profile microservice example explained here demonstrates the implementation of a microservice and the interaction between different microservices. In this example, two microservices, Customer Profile and Customer Notification, will be developed. As shown in the diagram, the Customer Profile microservice exposes methods to create, read, update, and delete a customer and a registration service to register a customer. The registration process applies certain business logic, saves the customer profile, and sends a message to the CustomerNotification microservice. The CustomerNotification microservice accepts the message sent by the registration service and sends an e-mail message to the customer using an SMTP server. Asynchronous messaging is used to integrate CustomerProfile with the CustomerNotification service. The customer microservices class domain model diagram is as shown here: Implementing this Customer Profile microservice is not a big deal. The Spring framework, together with Spring Boot, provides all the necessary capabilities to implement this microservice without much hassle. The key is CustomerController in the diagram, which exposes the REST endpoint for our microservice. It is also possible to use HATEOAS to explore the repository's REST services directly using the @RepositoryRestResource annotation. The following code sample shows the Spring Boot main class called Application and the REST endpoint definition for the registration of a new customer: @SpringBootApplication public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } } @RestController class CustomerController{ //other code here @RequestMapping( path="/register", method = RequestMethod.POST) Customer register(@RequestBody Customer customer){ return customerComponent.register(customer); } } CustomerController invokes a component class, CustomerComponent. The component class/bean handles all the business logic. CustomerRepository is a Spring Data JPA repository defined to handle the persistence of the Customer entity. The whole application will then be deployed as a Spring Boot application by building a standalone jar rather than using the conventional war file. Spring Boot encapsulates the server runtime within the fat jar it produces. By default, it is an instance of the Tomcat server. CustomerComponent, in addition to calling the CustomerRepository class, sends a message to the RabbitMQ queue, where the CustomerNotification component is listening. This can be easily achieved in Spring using the RabbitMessagingTemplate class as shown in the following Sender implementation: @Component class CustomerComponent { //other code here Customer register(Customer customer){ customerRepository.save(customer); sender.send(customer.getEmail()); return customer; } } @Component @Lazy class Sender { RabbitMessagingTemplate template; @Autowired Sender(RabbitMessagingTemplate template){ this.template = template; } @Bean Queue queue() { return new Queue("CustomerQ", false); } public void send(String message){ template.convertAndSend("CustomerQ", message); } } The receiver on the other side consumes the message using RabbitListener and sends out an e-mail using the JavaMailSender component. 
Execute the following code: @Component class Receiver { @Autowired private JavaMailSender javaMailService; @Bean Queue queue() { return new Queue("CustomerQ", false); } @RabbitListener(queues = "CustomerQ") public void processMessage(String email) { System.out.println(email); SimpleMailMessage mailMessage = new SimpleMailMessage(); mailMessage.setTo(email); mailMessage.setSubject("Registration"); mailMessage.setText("Successfully Registered"); javaMailService.send(mailMessage); } } In this case, CustomerNotification is our second Spring Boot microservice. Instead of a REST endpoint, it only exposes a message listener endpoint. Microservices challenges In the previous section, you learned about the right design decisions to be made and the trade-offs to be applied. In this section, we will review some of the challenges with microservices. Take a look at the following list: Data islands: Microservices abstract their own local transactional store, which is used for their own transactional purposes. The type of store and the data structure will be optimized for the services offered by the microservice. This can lead to data islands and, hence, challenges around aggregating data from different transactional stores to derive meaningful information. Logging and monitoring: Log files are a good piece of information for analysis and debugging. As each microservice is deployed independently, they emit separate logs, maybe to a local disk. This will result in fragmented logs. When we scale services across multiple machines, each service instance would produce separate log files. This makes it extremely difficult to debug and understand the behavior of the services through log mining. Dependency management: Dependency management is one of the key issues in large microservices deployments. How do we ensure the chattiness between services is manageable? How do we identify and reduce the impact of a change? How do we know whether all the dependent services are up and running? How will the service behave if one of the dependent services is not available? Organization's culture: One of the biggest challenges in microservices implementation is the organization's culture. An organization following waterfall development or heavyweight release management processes with infrequent release cycles is a challenge for microservices development. Insufficient automation is also a challenge for microservices deployments. Governance challenges: Microservices impose decentralized governance, and this is quite in contrast to traditional SOA governance. Organizations may find it hard to cope with this change, and this could negatively impact microservices development. How can we know who is consuming a service? How do we ensure service reuse? How do we define which services are available in the organization? How do we ensure that the enterprise policies are enforced? Operation overheads: Microservices deployments generally increase the number of deployable units and virtual machines (or containers). This adds significant management overheads and cost of operations. With a single application, a dedicated number of containers or virtual machines in an on-premises data center may not make much sense unless the business benefit is high. With many microservices, the number of Configurable Items (CIs) is too high, and the number of servers on which these CIs are deployed might also be unpredictable. 
This makes it extremely difficult to manage data in a traditional Configuration Management Database (CMDB). Testing microservices: Microservices also pose a challenge for the testability of services. In order to achieve full service functionality, one service may rely on another service, and this, in turn, may rely on another service, either synchronously or asynchronously. The issue is how we test an end-to-end service to evaluate its behavior. Dependent services may or may not be available at the time of testing. Infrastructure provisioning: As briefly touched upon under operation overheads, manual deployment can severely challenge microservices rollouts. If a deployment has manual elements, the deployer or operational administrators should know the running topology, manually reroute traffic, and then deploy the application one by one until all the services are upgraded. With many server instances running, this could lead to significant operational overheads. Moreover, the chance of error is high in this manual approach. Beyond just services– The microservices capability model Microservice are not as simple as the Customer Profile implementation we discussedearlier. This is specifically true when deploying hundreds or thousands of services. In many cases, an improper microservices implementation may lead to a number of challenges, as mentioned before.Any successful Internet-scale microservices deployment requires a number of additional surrounding capabilities. The following diagram depicts the microservices capability model: The capability model is broadly classified in to four areas, as follows: Core capabilities, which are part of the microservices themselves Supporting capabilities, which are software solutions supporting core microservice implementations Infrastructure capabilities, which are infrastructure-level expectations for a successful microservices implementation Governance capabilities, which are more of process, people, and reference information Core capabilities The core capabilities are explained here: Service listeners (HTTP/Messaging): If microservices are enabled for HTTP-based service endpoints, then the HTTP listener will be embedded within the microservices, thereby eliminating the need to have any external application server requirement. The HTTP listener will be started at the time of the application startup. If the microservice is based on asynchronous communication, then instead of an HTTP listener, a message listener will be stated. Optionally, other protocols could also be considered. There may not be any listeners if the microservices is a scheduled service. Spring Boot and Spring Cloud Streams provide this capability. Storage capability: Microservices have storage mechanisms to store state or transactional data pertaining to the business capability. This is optional, depending on the capabilities that are implemented. The storage could be either a physical storage (RDBMS,such as MySQL, and NoSQL,such as Hadoop, Cassandra, Neo4J, Elasticsearch,and so on), or it could be an in-memory store (cache,such as Ehcache and Data grids,such as Hazelcast, Infinispan,and so on). Business capability definition: This is the core of microservices, in which the business logic is implemented. This could be implemented in any applicable language, such as Java, Scala, Conjure, Erlang, and so on. All required business logic to fulfil the function is embedded within the microservices itself. 
Beyond just services – the microservices capability model

Microservices are not as simple as the Customer Profile implementation we discussed earlier. This is specifically true when deploying hundreds or thousands of services. In many cases, an improper microservices implementation may lead to a number of challenges, as mentioned before. Any successful Internet-scale microservices deployment requires a number of additional surrounding capabilities. The following diagram depicts the microservices capability model:

The capability model is broadly classified into four areas, as follows:

- Core capabilities, which are part of the microservices themselves
- Supporting capabilities, which are software solutions supporting core microservice implementations
- Infrastructure capabilities, which are infrastructure-level expectations for a successful microservices implementation
- Governance capabilities, which are more about process, people, and reference information

Core capabilities

The core capabilities are explained here:

- Service listeners (HTTP/Messaging): If microservices are enabled for HTTP-based service endpoints, then the HTTP listener is embedded within the microservice, eliminating the need for an external application server. The HTTP listener is started at the time of application startup. If the microservice is based on asynchronous communication, then a message listener is started instead of an HTTP listener. Optionally, other protocols could also be considered. There may not be any listeners if the microservice is a scheduled service. Spring Boot and Spring Cloud Stream provide this capability (a minimal sketch follows this list).
- Storage capability: Microservices have storage mechanisms to store state or transactional data pertaining to the business capability. This is optional, depending on the capabilities that are implemented. The storage could be physical storage (RDBMS, such as MySQL, or NoSQL, such as Hadoop, Cassandra, Neo4j, Elasticsearch, and so on), or it could be an in-memory store (caches such as Ehcache, or data grids such as Hazelcast, Infinispan, and so on).
- Business capability definition: This is the core of the microservice, in which the business logic is implemented. It could be implemented in any applicable language, such as Java, Scala, Clojure, Erlang, and so on. All required business logic to fulfil the function is embedded within the microservice itself.
- Event sourcing: Microservices send out state changes to the external world without really worrying about the targeted consumers of these events. They could be consumed by other microservices, supporting services such as audit by replication, external applications, and so on. This allows other microservices and applications to respond to state changes.
- Service endpoints and communication protocols: These define the APIs for external consumers to consume. They could be synchronous or asynchronous endpoints. Synchronous endpoints could be based on REST/JSON or other protocols such as Avro, Thrift, protocol buffers, and so on. Asynchronous endpoints go through Spring Cloud Stream backed by RabbitMQ, another messaging server, or other messaging-style implementations such as ZeroMQ.
- The API gateway: The API gateway provides a level of indirection by either proxying service endpoints or composing multiple service endpoints. It is also useful for policy enforcement and may provide real-time load-balancing capabilities. There are many API gateways available in the market; Spring Cloud Zuul, Mashery, Apigee, and 3scale are some examples of API gateway providers.
- User interfaces: Generally, user interfaces are also part of microservices for users to interact with the business capabilities realized by the microservices. These could be implemented in any technology and should be channel- and device-agnostic.
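To illustrate the embedded service listener, a Spring Boot microservice needs little more than the following to expose an HTTP endpoint on its own embedded server (Tomcat by default); the class name and the /customers/ping path are hypothetical:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A self-contained microservice: the embedded HTTP listener starts with the
// application itself, so no external application server is required.
@SpringBootApplication
@RestController
public class CustomerProfileApplication {

    @GetMapping("/customers/ping")
    public String ping() {
        return "Customer Profile service is up";
    }

    public static void main(String[] args) {
        SpringApplication.run(CustomerProfileApplication.class, args);
    }
}
```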
Infrastructure capabilities

Certain infrastructure capabilities are required for a successful deployment and to manage large-scale microservices. When deploying microservices at scale, not having proper infrastructure capabilities can be challenging and can lead to failures:

- Cloud: Microservices implementation is difficult in a traditional data center environment with a long lead time to provision infrastructure. Even dedicating a large amount of infrastructure per microservice may not be very cost effective. Managing it internally in a data center increases the cost of ownership and of operations. A cloud-like infrastructure is better for microservices deployment.
- Containers or virtual machines: Managing large numbers of physical machines is not cost effective and is also hard. With physical machines, it is also hard to handle automatic fault tolerance. Virtualization is adopted by many organizations because of its ability to provide optimal use of physical resources and resource isolation. It also reduces the overheads in managing large physical infrastructure components. Containers are the next generation of virtual machines. VMware, Citrix, and so on provide virtual machine technologies. Docker, Drawbridge, Rocket, and LXD are some containerizing technologies.
- Cluster control and provisioning: Once we have a large number of containers or virtual machines, it is hard to manage and maintain them automatically. Cluster control tools provide a uniform operating environment on top of the containers and share the available capacity across multiple services. Apache Mesos and Kubernetes are examples of cluster control systems.
- Application lifecycle management: Application lifecycle management tools help invoke applications when a new container is launched and kill them when the container shuts down. Application lifecycle management allows application deployments and releases to be scripted. It automatically detects failure scenarios and responds to them, thereby ensuring the availability of the application. This works in conjunction with the cluster control software. Marathon partially addresses this capability.

Supporting capabilities

Supporting capabilities are not directly linked to microservices, but they are essential for large-scale microservices development:

- Software-defined load balancer: The load balancer should be smart enough to understand changes in the deployment topology and respond accordingly. This moves away from the traditional approach of configuring static IP addresses, domain aliases, or cluster addresses in the load balancer. When new servers are added to the environment, the load balancer should automatically detect this and include them in the logical cluster, avoiding any manual interaction. Similarly, if a service instance is unavailable, it should be taken out of the load balancer. A combination of Ribbon, Eureka, and Zuul provides this capability in Spring Cloud Netflix.
- Central log management: As explored earlier in this article, a capability is required to centralize all the logs emitted by service instances, with correlation IDs. This helps with debugging, identifying performance bottlenecks, and predictive analysis. The results could feed back into the lifecycle manager to take corrective actions.
- Service registry: A service registry provides a runtime environment for services to automatically publish their availability at runtime. A registry is a good source of information for understanding the service topology at any point. Eureka from Spring Cloud, ZooKeeper, and etcd are some of the service registry tools available (a registration sketch follows this list).
- Security service: The distributed microservices ecosystem requires a central server to manage service security. This includes service authentication and token services. OAuth2-based services are widely used for microservices security. Spring Security and Spring Security OAuth are good candidates for building this capability.
- Service configuration: All service configurations should be externalized, as discussed in the Twelve-Factor application principles. A central service for all configurations could be a good choice. The Spring Cloud Config server and Archaius are out-of-the-box configuration servers.
- Testing tools (anti-fragile, RUM, and so on): Netflix uses Simian Army for anti-fragile testing. Mature services need to be challenged consistently to verify their reliability and how good their fallback mechanisms are. Simian Army components create various error scenarios to explore the behavior of the system under failure conditions.
- Monitoring and dashboards: Microservices also require a strong monitoring mechanism. This monitoring is not just at the infrastructure level but also at the service level. Spring Cloud Netflix Turbine, the Hystrix dashboard, and others provide service-level information. End-to-end monitoring tools such as AppDynamics, New Relic, and Dynatrace, as well as other tools such as StatsD, Sensu, and Spigo, could add value in microservices monitoring.
- Dependency and CI management: We also need tools to discover runtime topologies, find service dependencies, and manage configuration items (CIs). A graph-based CMDB is the more obvious choice for managing these scenarios.
- Data lakes: As discussed earlier in this article, we need a mechanism to combine data stored in different microservices and perform near real-time analytics. Data lakes are a good choice for achieving this. Data ingestion tools such as Spring Cloud Data Flow, Flume, and Kafka are used to consume data; HDFS, Cassandra, and others are used to store it.
- Reliable messaging: If the communication is asynchronous, we may need a reliable messaging infrastructure service, such as RabbitMQ or any other reliable messaging service. Cloud messaging, or messaging as a service, is a popular choice for Internet-scale message-based service endpoints.
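As a small, hedged illustration of the service registry capability, a Spring Cloud service can register itself with a Eureka server at startup with little more than an annotation and two properties; the application class, service name, and registry URL below are placeholder assumptions:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// On startup this service registers itself with the configured Eureka server
// and sends heartbeats, so consumers can look it up by its logical name.
//
// application.properties (placeholder values):
//   spring.application.name=customer-notification
//   eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka/
@SpringBootApplication
@EnableDiscoveryClient
public class NotificationServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(NotificationServiceApplication.class, args);
    }
}
```

With this in place, a consumer can resolve the instance through the registry (for example, via Ribbon or the DiscoveryClient API) instead of a hard-coded host and port.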
Process and governance capabilities

The last pieces of the puzzle are the process and governance capabilities required for microservices, which are:

- DevOps: The key to a successful implementation is adopting DevOps. DevOps complements microservices development by supporting agile development, high-velocity delivery, automation, and better change management.
- DevOps tools: DevOps tools for agile development, continuous integration, continuous delivery, and continuous deployment are essential for the successful delivery of microservices. A lot of emphasis is required on automated, functional, and real user testing, as well as synthetic, integration, release, and performance testing.
- Microservices repository: A microservices repository is where the versioned binaries of microservices are placed. This could be a simple Nexus repository or a container repository such as the Docker Registry.
- Microservice documentation: It is important to have all microservices properly documented. Swagger or API Blueprint is helpful in achieving good microservices documentation.
- Reference architecture and libraries: Reference architecture provides a blueprint at the organization level to ensure that services are developed according to certain standards and guidelines in a consistent manner. Many of these standards can then be translated into a number of reusable libraries that enforce service development philosophies.

Summary

In this article, you learned the concepts and characteristics of microservices. We took a holiday portal as an example to better understand the concept of microservices. We also examined some of the common challenges in large-scale microservice implementation. Finally, we established a microservices capability model that can be used to deliver successful Internet-scale microservices.