Vinicius Feitosa Pacheco

Paperback, Jan 2018, 366 pages, 1st Edition
Microservice Patterns and Best Practices

Chapter 2. The Microservice Tools

Some issues are always questionable or controversial when it comes to choosing a microservice stack. Much is discussed regarding performance, practicality, cost, and scalability. Most of what is discussed is based on personal views; many of these opinions are valid, and many others not so much.

Obviously, the history of the development team should be considered in any technical decision regarding the stack and implementation. However, at times, it is necessary to leave some comfort zones behind to develop a product. A comfort zone can be a programming language, a protocol, a framework, or a database, and it can limit a developer's ability to move at speed as the application becomes more and more scalable.

In this chapter, we will work through points that should be examined in internal discussions within development teams. In the end, it is always important to understand that the development stack is not an amusement park; to develop the best product, we should always consider the aspects of cost and scalability.

Some criticisms will be made throughout this chapter. None of these criticisms seeks to depreciate or crown any particular technology. All analysis is performed here with full focus on the end product of this book, which is a news portal using the microservice architecture.

In this chapter, we'll look at the following:

  • Programming languages
  • Microservices frameworks
  • Binary communication
  • Message broker
  • Caching tools
  • Fail alert tools
  • Locale proof performance

Programming languages


Discussing programming languages can be controversial, primarily because many developers approach them with passion rather than pragmatism. However, programming languages should be seen for what they really are: working tools. Every tool has a specific purpose, and programming languages are no different.

This is simply an analysis focused on our business, the news portal, which we will use to illustrate how to select a language.

A big plus point for microservices is the heterogeneity of applications. In other words, it is not necessary to think of a single stack to apply to all business areas. We can thus define, for each microservice, the stack that best applies, including when it comes to the programming language.

Basically, any programming language that can serve the web can be used in microservices. The difference comes from the requirements and domain boundaries that must be encoded. Some domain indicators can help us in this process.

If a microservice has strong mathematical processing load requirements, or where immutability of values is something positive, functional languages would be an interesting way to go. If there is a demand for processing large masses of data, then a compiled language with a robust virtual machine may be the answer.

Remember that getting this strategy wrong could compromise the project deadline or even the entire application architecture. The fact is that several aspects should be analyzed before any decision, such as:

  • Proficiency
  • Performance
  • Development practicality
  • Ecosystem
  • Scalability cost

Proficiency

The first goal for a software developer is to achieve proficiency in any programming language or paradigm. Achieving a good level of proficiency is not easy, and some languages may have a steeper learning curve than others.

Problems arise when proficiency in a language ends up creating a comfort zone from which a developer or team finds it difficult to leave. At the same time, a myth must be overthrown: that one programming language is much easier than another. Obviously, a language may seem simpler than another at first but, in the end, what counts is the practice time and the number of scenarios a developer has experienced with a programming language.

Another myth that must be fought is that all languages are equal at their core and that only the syntax changes. This is one of the worst possible errors that can be committed. Languages can be quite different in internal design and performance, although they have similar syntaxes.

Proficiency is something that should be considered when deciding which language to apply to a microservice. However, it should not be the decisive factor.

Performance

This is a key requirement in choosing a programming language for a microservice. When we talk about performance for microservices, there are many points where performance can be a problem: the network communication layer, access to the database, where the servers are located. All these points can be problematic for microservices. The programming language cannot be yet another source of slowness.

When the target is microservice performance, no matter how skilled the development team is, the candidate language should be put through benchmarks and stress tests before being chosen.

Something that often creates misunderstanding is conflating the speed with which a development team implements a feature and the performance requirement. Performance here is a metric of how code behaves when responding to a request or performing a task. Personal or team productivity is definitely not included in this metric.

Development practicality

This is the requirement responsible for measuring the speed with which a feature goes into production. Development practicality touches two teams: the development team that already exists and the development team that may come into existence.

As has been said before, the word success can be a problem for an application and, consequently, for the product owners. Keeping the code base simple and understandable is fundamental to facilitating code changes and implementing new features.

Good programming practices can help us understand legacy code, but often the language itself cannot, because its verbiage is not very friendly.

There are scenarios where a programming language, given its characteristics, is extremely performant, but the cost in time to implement something new, however simple, can be very high.

Think of a scenario where a start-up has just launched its product. The product is a Minimum Viable Product (MVP), launched in the market to go through validation by the general public. If this MVP succeeds, it is essential to publish new features as quickly as possible. In this case, performance is not the problem; the practicality of new iterations on the code is.

This is an important aspect to note when we are developing microservices and deciding which programming language to use.

Ecosystem

The ecosystem of a programming language is a crucial factor.

We all know that frameworks are almost essential for gaining speed and simplicity in the development of any application. With microservices, the scenario is no different.

It is unacceptable for features to go undeveloped because something is blocked on the technical side. Of course, the microservices architecture provides a very broad plurality of tool options. However, understanding the possible drawbacks when choosing a programming language, and therefore inheriting its ecosystem, is critical for the engineering team responsible for the implementation.

There are cases where a programming language is very performant, but the ecosystem that would provide development speed compromises that performance. This type of situation is far more common than you might think.

Another point is when a language is very simple but its frameworks are not mature enough, so you end up generating unnecessary complexity.

Observing the ecosystem of a programming language, and understanding the risks we assume by inheriting it, is fundamental to the adoption of a language.

Scalability cost

The cost of scaling an application is linked to two major factors. The first is the speed of the selected stack used to implement the software; specifically, the speed and capacity with which its algorithms process data and answer requests. The second factor is the ability to scale the business side of the application: how long it takes to deliver features and, especially, the predictability of new features. The time to create something new, or to redesign something that already exists, is also expensive.

With the microservices architecture, the cost of scalability is usually reduced by the concept of having smaller, less tightly integrated parts. Even so, this cost is very important.

Think of two applications: one with strong interactivity with the end user, such as an online game or real-time document editing, and another that is mostly illustrative, with an editorial part but not open to all users; a newspaper or a streaming provider are good examples of the latter.

In the first application, real-time data processing and response time to requests must be fast and dynamic. In the second, processing speed is not as relevant, as the information may be cached or statically stored.

The cost will be high when you do not understand the nature of the microservice to be developed.

Making choices for our application

Now that we have covered the points for choosing a programming language for microservices and applications in general, we will apply this knowledge to select the programming language we will use in each area of our service.

We know that our news portal has the following areas:

  • SportNewsService
  • PoliticsNewsService
  • FamousNewsService
  • RecommendationService
  • UsersService

Given our fields of business, we can divide them by similarity of features. SportNewsService, PoliticsNewsService, and FamousNewsService have similar behavior. These microservices are news providers and are more focused on the consumption of data than receiving information.

These microservices may have an identical starting stack, which does not mean they should always be identical or that they need to evolve in the same direction. Regarding the programming language, performance is not as crucial here, but the speed of change and of implementing new features is.

RecommendationService is very different from the other microservices. There is no direct interaction between the end user and the application, nor between the editorial area and this software. RecommendationService is a support microservice for the other microservices, and all of its interactions and operations are on the technical side. Loading and interaction occur completely asynchronously, and processing will certainly be heavier than in the other microservices. However, this is not a real-time application.

UserService is a microservice with dynamic interaction on all sides, both with the end user and with the editorial layer. The information from UserService may also be consumed by other microservices, such as RecommendationService. It is noteworthy that, in this layer, caches can be dangerous, providing wrong information if they are not correctly invalidated. As UserService is a microservice with a direct relationship with the internet, this leads us to a programming language that has speed of response for requests, processing speed, simplicity in implementing features, and good asynchrony APIs.

With the characteristics of each sector in mind, it is time to think in a completely practical way and select a programming language for each microservice. Let's consider each of the five aspects mentioned at the beginning of the chapter in relation to the nature of each service.

Comparing programming languages is complex, but in this case, we need to make choices. Many languages could be compared. However, for our application, we have chosen five for the comparison based on popularity, personal experience, documentation, and actual cases of applicability. The languages are Java, C#, Python, JavaScript, and Go.

Java

Java caters perfectly to the object-oriented paradigm. It is very performant, which reduces the cost of scalability. With the recent developments in the language, Java is not as verbose as before, but it still has sharp verbosity, requiring great proficiency from developers to maintain the code and implement new features. Regarding the ecosystem, the Java Virtual Machine is fantastic, mature, and very stable, but the frameworks are usually not as simple as they could be.

C#

Just like Java, C# perfectly meets the OOP paradigm. It is very performant, reducing the cost of scalability. C# has verbosity similar to Java's, with some additional practicalities. Proficiency with C# must also be high to generate speed in development. The ecosystem is very mature, with performant and not overly complex frameworks.

Python

Python does not have the syntactic OOP features of Java and C#, but it caters well to the paradigm. As a fully interpreted language, it is not as performant as the languages mentioned previously, which means more servers are required to support the same load. However, the higher cost of scalability is fully compensated when it comes to code maintenance and the development of new features, due to the simplicity of the language. A developer needs a fairly short amount of time to achieve proficiency in the language. The ecosystem is full of simple frameworks. Within the same ecosystem, there is a range of options for language interpreters, which helps with additional performance gains.

JavaScript

Undoubtedly, JavaScript is the language that generates the least friction between frontend and backend, as JavaScript on the backend is optional while on the frontend it is practically mandatory. It is a language with a considerable level of complexity; the developer needs to know the internal behavior of JavaScript well to avoid bizarre mistakes. The paradigm that best applies to JavaScript is the functional one. The ecosystem has a vast number of frameworks, too many in some cases, generating complexity, especially regarding builds. It has good performance, but it requires good proficiency to maintain code and write new features.

Go

Go is a compiled programming language and has great performance. It is undoubtedly one of the languages that has grown most in popularity in recent years. Go follows the imperative paradigm, although very few developers realize that there is some level of OOP in it. The main characteristic of its ecosystem is a robust standard library, making frameworks unnecessary in some cases. But the ecosystem is not perfect, with problems in simple things like dependency version control. Go has a simple and easily readable syntax, whose main features are the convenience it provides and the way it handles concurrent programming.

To make the comparison easier, these five aspects can be laid out side by side for each language in a support chart.

In the case of our news portal, these are the languages that will be applied:

  • SportNewsService, PoliticsNewsService, and FamousNewsService: These microservices will use Python. It is the typical scenario where the language applies very well; the lower performance of Python will not be a problem, even if the portal receives many hits.
  • RecommendationService: This application will also use Python. The choice is fully related to the balance between performance and practicality in using the other tools that will make up the stack of our microservices. A microservice like this does not need to be real time, so we can use something with simplified APIs that is not disruptive to the rest of the ecosystem.
  • UserService: This microservice will use Go. UserService interacts with end users as well as providing information to the other microservices of the application.

The fact is that there is no perfect tool for everything. All languages have shown positive and negative points. With the great technological plurality that we can apply to the microservices architecture, managing the stack that best applies in each scenario is our role in this case.

Microservice frameworks


When dealing with frameworks, we have to keep in mind that, because of the technological diversity of our stack, we will have several frameworks instead of one to maintain in our ecosystem.

Of course, we could have kept all microservices in the same stack. However, searching for the best overall performance for each domain, we opted for a more plural stack.

Obviously, at first, the impression is that the complexity will be higher than expected, but this complexity is compensated by the performance most suitable for each case.

Basically, we chose two different programming languages to use on our news portal: Python and Go. It is time to think about which frameworks we will use for each of these languages, thus taking another step in shaping our development stack for each microservice.

Python

In the world of Python, there is a multitude of interesting frameworks: Bottle, Pyramid, Flask, Sanic, JaPronto, Tornado, and Twisted are some examples. The most famous, and the one with the most community support, is Django.

Django is the framework that was used to build the monolithic version of our news portal. It is a very good tool for full-stack applications, and its main characteristic is the simplicity with which this type of software can be assembled.

However, for microservices, I do not think it is the best framework for this purpose. Firstly, because the exposure of APIs is not native to Django; the ability to create an API comes from a Django app called Django REST Framework. In order to maintain the standard structure of the Django framework, the design of a simple API was somewhat compromised. Django provides the developer with the entire development stack, which is not always what you want when developing microservices. A more minimal framework, more flexible for composition, can be more interesting.

In the Python ecosystem, there are other frameworks that work very well when it comes to APIs. Flask is a good example: a Hello World API can be written in a very simple way.

Install Flask with the following command:

$ pip install flask

Writing the endpoint is very simple:

# file: app.py
# import the dependencies
from flask import Flask, jsonify

Instantiating the framework:

app = Flask(__name__)

Declaring the route:

@app.route('/')
def hello():
    # Prepare the JSON to return
    return jsonify({'hello': 'world'})

Executing the main app:

if __name__ == '__main__':
    app.run()

The preceding code is enough to return the message hello world in JSON format.
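To check that the endpoint behaves as described, it can be exercised without starting a server by using Flask's built-in test client. The sketch below recreates the minimal app so it is self-contained (it assumes Flask is installed, as shown earlier):

```python
# Recreate the minimal app from the preceding example
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def hello():
    # Prepare the JSON to return
    return jsonify({'hello': 'world'})

# Flask ships a test client, so no server process is needed
client = app.test_client()
response = client.get('/')

print(response.status_code)   # 200
print(response.get_json())    # {'hello': 'world'}
```

The test client issues requests directly against the WSGI app, which also makes it a convenient base for the unit tests of a microservice.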

Other frameworks were inspired by Flask and follow the same philosophy of simplicity, with very similar syntax. A good example is Sanic.

Sanic can be installed with the command:

    $ pip install sanic 

The syntax is almost identical to Flask, as can be seen in the following code:

# file: app.py

Importing the dependencies:

from sanic import Sanic
from sanic.response import json

Instantiating the framework:

app = Sanic()

Declaring the route:

@app.route("/")
async def test(request):
    # Prepare the JSON to return
    return json({"hello": "world"})

Executing the main app:

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

The big difference between Flask and Sanic is performance. Sanic's performance is much higher than Flask's due to its use of newer Python features, such as async/await. However, Sanic is still not as mature and battle-tested in the market as Flask.

Another plus point for Flask is the integration with other tools like Swagger. Using the two together, our APIs would be not only written but also documented appropriately.

For our microservices using Python as the programming language, practicality matters more than raw performance, so we will use Flask as the framework.

Go

Go is a totally different case from Python. While most Python frameworks try to help with performance and provide the best possible environment for development, with Go it is not the same: frameworks with many features usually end up compromising the performance of the language.

Logs

When it comes to logs, the choice is virtually unanimous: logrus (https://github.com/Sirupsen/logrus). It is a very mature and flexible logging library for Go, with hooks for different tools ranging from syslog to InfluxDB, for example.

The main features of logrus are its speed at recording logs and its practicality of implementation.

Handlers

Creating routes for APIs in Go is very simple, but using the native handler options can generate certain complexities, especially regarding validations. The native muxer does not have a lot of flexibility, so the best option is to look for more productive handler tools.

Go has a multitude of options for handlers; it is perhaps the most explored type of library in the ecosystem, because of the low-level characteristics of the language.

When it comes to performance for routers in Go, at the time of writing this book there is nothing more performant than fasthttp (https://github.com/valyala/fasthttp). This is a low-level library written in Go, and its metrics are outstanding.

Here are the numbers running tests locally to provision static files:

Running 10s test @ http://localhost:8080
  4 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   447.99us    1.59ms  27.20ms   94.79%
    Req/Sec    37.13k     3.99k   47.86k    76.00%
  1478457 requests in 10.02s, 1.03GB read
Requests/sec: 147597.06
Transfer/sec:    105.15MB

As can be seen, the number of requests per second exceeds 140,000. However, writing routes using fasthttp can be as complex as with the native library. Because of this, there are some frameworks that provide an interface over fasthttp. One of these is fasthttprouter (https://github.com/buaazp/fasthttprouter), which creates a number of development facilities without overly compromising the performance of fasthttp.

Writing routes with extreme performance is very seductive, but we need a balance between performance and stability, and here we have a point needing attention: fasthttp, as well as all of its helper interfaces, replaces Go's native handler standard with its own context implementation. If there really is a big performance problem, using fasthttp may be something to consider; I do not think that is our case. Therefore, it is recommended to use something more compatible with the standard Go interface.

The most famous option is gorilla/mux (https://github.com/gorilla/mux). Without a doubt, it is one of the most mature and experienced libraries for Go.

Middleware

For the composition of our middleware, we will use Negroni (https://github.com/urfave/negroni). In addition to being a very mature tool, it has complete compatibility with gorilla/mux and the native Go API.

Tests

For unit testing, we will use testify (https://github.com/stretchr/testify). It is a simple library that provides both assertions and mocks. For functional testing, we will use the default Go library.

Package manager

If the Go ecosystem has a weak point, it is this. Go dependency management has always been something that requires a lot of attention.

If you do not know, the official repository model for Go dependencies is Git: any Git host, no matter whether it is GitHub, Bitbucket, or any other. The problem is that, when downloading a dependency using the Go command (go get ...), the version that comes into the application is always the one on the repository's master branch. So there is no strict version control of dependencies.

As a package manager, we will use godep (https://github.com/tools/godep). It is a simple tool that controls the versions used in the project, keeping a JSON file with the repository URL and the Git commit hash.

Golang ORMs

A habit adopted by Gophers (the name given to developers who use Go) is not using ORMs. Often the preference is to use only the communication driver for the database.

Often, Gophers dispense with an ORM and adopt just a tool that makes it more practical to load information from the database into a Go struct. A tool of this type for relational databases is SQLX (https://github.com/jmoiron/sqlx).

SQLX does not work like an ORM; it is only a library that creates a friendlier interface over the native Go database/sql package.

If the database chosen for the application is NoSQL, we will hardly adopt any data-mapping tool, as the most practical approach is to use only the available driver.

Binary communication – direct communication between services


Much is debated about microservice communication; topics such as protocols, layers, types, and packet sizes are widely discussed when it comes to the subject.

The point is that communication between microservices is the most critical topic for the project's success. It is very clear that the number of positive factors increases with the microservices architecture, but the key point is how to implement communication that does not encumber the performance perceived by the end user.

All the practicality of developing and deploying the product is of no use if the application does not scale or the end user experience is compromised.

There is a lot of literature and study material on the subject, but the challenge still remains. And oddly enough, even with all the available material, making mistakes in this part of the project is extremely easy.

There are only two forms of communication between microservices: synchronous and asynchronous. The most common is asynchronous communication, as it is easier to scale, but it is harder to understand the possible error points. With synchronous communication, it is easier to understand the possible error points, but it is more difficult to scale. In this segment, we will deal with synchronous communication.

Understanding the aspect

The first step is to understand the functioning of the microservice to know what kind of communication best applies. Take, for example, RecommendationService. It is a microservice that has no direct communication with the customer but traces a user's profile. No other point of the application expects an immediate response from this microservice. Thus, the communication model for this microservice is asynchronous.

Very well! We saw that RecommendationService is not a synchronous case; then what is?

The answer is UserService. When a user enters data through an API that communicates with UserService, this user expects to see the change immediately. When a microservice requests some information from UserService, we want the most current information possible, immediately. Yes, UserService is a service where synchronous communication can be applied.

But how can we create a good layer of synchronous communication between microservices? The answer is right in the next section.

Tools for synchronous communication

The most common form of direct communication between microservices is using the HTTP protocol with REST, passing JavaScript Object Notation, the famous JSON. This works great for APIs providing endpoints for external consumption. However, communication using HTTP with JSON has a high cost in relation to performance.

First, in the case of communication between microservices, it would be more appropriate to optimize away parts of the HTTP protocol, creating some sort of pipeline or keeping the connection alive. The problem is controlling the connection timeout, which cannot be very strict and which could, in addition, start to close ports, threads, or processes on a simple silent error. The second problem with this approach is the serialization time of the JSON sent; normally, this is not a cheap process. Finally, there is the packet size that the HTTP protocol imposes: in addition to the JSON payload, there are a number of HTTP headers that must be interpreted and that could be discarded. Looking closer, there is no need for elaborate protocols between microservices; the only concern should be maintaining a single layer for sending and receiving messages. The HTTP protocol with JSON for communication between microservices can therefore be a serious point of slowness within the project and, despite the practicality of implementing the protocol, the possible optimization is complex and not very significant.
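To get a feel for the header overhead described above, the following stdlib-only sketch compares the size of a small inter-service JSON message with an illustrative set of HTTP/1.1 request headers (the header values are hypothetical examples, not captured from a real client):

```python
import json

# A small inter-service message, as it might travel between microservices
payload = json.dumps({"user_id": 42, "action": "read", "article": 1001})

# An illustrative (hypothetical) header block a typical HTTP/1.1 request
# might carry; real values vary per client and server
headers = (
    "POST /events HTTP/1.1\r\n"
    "Host: userservice.internal\r\n"
    "User-Agent: python-requests/2.31.0\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Accept: */*\r\n"
    "Connection: keep-alive\r\n"
    "Content-Type: application/json\r\n"
    f"Content-Length: {len(payload)}\r\n"
    "\r\n"
)

print(len(payload))   # size of the actual message
print(len(headers))   # size of the protocol overhead around it
```

For tiny messages like this one, the headers alone outweigh the payload, which is exactly the waste a leaner binary protocol avoids.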

Many would propose implementing communication over sockets or WebSockets but, in the end, the customization process of these communication layers is very similar to that of classic HTTP.

Synchronous communication layers between microservices must complete three basic tasks:

  • Deliver the desired messages practically and directly
  • Send simple, lightweight packets with fast serialization
  • Be practical for maintaining the signatures of communication endpoints

A proposal that meets the aforementioned requirements is to communicate using binary or small-sized packets.

 

Something important to point out when working with this type of protocol is that, usually, the options are incompatible with each other. This means that the option chosen as the tool for serialization and transmission of these small-sized packets must be compatible with the stacks of all microservices.

Some of the most popular options on the market are:

  • MessagePack
  • gRPC
  • Apache Avro

Let us understand how each of these options works to see which best fits our news portal.

MessagePack

MessagePack, or MsgPack, is a type of serializer for binary information but, as the tool's own official website says, "It's like JSON, but fast and small."

The proposal of MsgPack is to serialize data quickly and with reduced size, thus offering a more efficient packet for communication between microservices. In its early days, MsgPack was not more efficient than other serializers, but this problem has since been overcome.

When it comes to compatibility between programming languages, MsgPack is very good; it has libraries for the best-known languages on the market, ranging from Python to Racket, for example.

MsgPack does not ship with a native transport tool; sending the packets is left to the developer. This can be a problem, because a communication layer between microservices that supports multilingual stacks still needs to be found.
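To make the size argument concrete, here is a rough, standard-library-only sketch of the difference between text and binary serialization (the real MsgPack format is more elaborate than this fixed struct layout, and the payload is purely illustrative):

```python
import json
import struct

# a small payload two microservices might exchange (illustrative)
payload = {"id": 42, "temp": 21.5}

# text serialization: JSON carries key names and punctuation as characters
as_json = json.dumps(payload).encode("utf-8")

# binary serialization: a fixed layout packs the same values into 12 bytes
# (a 32-bit int plus a 64-bit double; the field names live in the schema)
as_binary = struct.pack("!id", payload["id"], payload["temp"])

print(len(as_json), len(as_binary))  # the binary form is half the size or less
assert len(as_binary) < len(as_json)
```

The binary form wins precisely because the schema, not the packet, carries the field names, which is the same trade-off MsgPack and the other binary serializers below exploit.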

gRPC

gRPC has a more complete proposal than MsgPack because it combines the Protobuf data serializer with a communication layer between services that makes use of RPC.

For serialization, you create a .proto file with the information about what will be serialized; the RPC communication then follows a client/server model where needed.

The following code is an example of a .proto file, extracted from the tool's official site:

    // The greeting service definition.
    service Greeter { 
      // Sends a greeting
      rpc SayHello (HelloRequest) returns (HelloReply) {} 
    } 
 
    // The request message containing the user's name.
    message HelloRequest { 
      string name = 1; 
    } 
 
    // The response message containing the greetings.
    message HelloReply { 
      string message = 1; 
    } 

.proto files have a specific syntax. The positive aspect of such a file is that the signature of the communication layer is normalized because, at some level, the file used as the template for serialization and for generating clients/servers ends up serving as the documentation of the endpoints.

After the file is created, you need only run one command line to create the communication part. The following example generates the client/server in Python:

$ python -m grpc_tools.protoc -I../../protos --python_out=. --grpc_python_out=. ../../protos/file_name.proto

The command may seem a little intimidating at first, but it is enough to generate an RPC client and server for communication. gRPC has evolved a lot and received strong investment.

With regard to compatibility, gRPC does not cover as many languages as MsgPack does, but it is compatible with the languages most commonly used in the market.

Apache Avro

Avro is one of the most mature and experienced binary serialization systems. As with gRPC, Avro also has a communication layer using RPC.

Avro uses a .avsc file, defined in JSON format, for the serialization process. The file may be composed both of the types that JSON provides and of more complex types from Avro itself.
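As an illustration of the format (a hypothetical schema, not one from our news portal), a minimal .avsc file describing a user record might look like this:

```json
{
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "favorite_number", "type": ["int", "null"]}
  ]
}
```

Here, favorite_number uses a union with null, one of the constructs Avro adds on top of the plain JSON types.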

Even though it is very mature as a tool, Avro is the poorest in terms of native compatibility: it officially supports only Java, Ruby, C++, C#, and Python. As the project is open source, there is a whole range of community drivers that provide Avro compatibility for other languages.

Apache Thrift

Thrift is a project created by Facebook and maintained by the Apache Software Foundation. It has a good level of compatibility with the programming languages most commonly used in the market.

Thrift provides an RPC communication layer and handles serialization using a .thrift file as a template. The .thrift file has notation and types similar to C++, the language in which Thrift was developed.

An example of a .thrift file can be seen in the following:

    typedef i32 MyInteger 
 
    const i32 INT32CONSTANT = 9853 
    const map<string,string> MAPCONSTANT = {'hello':'world', 
          'goodnight':'moon'} 
 
    enum Operation { 
      ADD = 1, 
      SUBTRACT = 2, 
      MULTIPLY = 3, 
      DIVIDE = 4 
    } 
 
    struct Work { 
      1: i32 num1 = 0, 
      2: i32 num2, 
      3: Operation op, 
      4: optional string comment, 
    } 
 
    exception InvalidOperation { 
      1: i32 whatOp, 
      2: string why 
    } 
 
    service Calculator extends shared.SharedService { 
      void ping(), 
      i32 add(1:i32 num1, 2:i32 num2), 
      i32 calculate(1:i32 logid, 2:Work w) throws
           (1:InvalidOperation ouch), 
      oneway void zip() 
    } 
 

Do not worry about the file contents. The important thing is to notice the flexibility offered by Thrift's RPC composition. An interesting point to note is the following line of code:

    service Calculator extends shared.SharedService { ...  

Thrift allows the use of inheritance among the template files, which will be used by code generators.

To create the client/server using Thrift, simply use the following command line:

   $ thrift -r --gen py file_name.thrift

The preceding line will create a client and server in the Python programming language.

Among the options presented, the most commonly used at the moment are Thrift and gRPC, and either of these tools is a good option for implementing direct communication between microservices.

Direct communication alerts

Direct communication between microservices may result in a problem known as a Death Star. The Death Star is an anti-pattern in which microservices call each other recursively, and making a product progress becomes extremely complicated or expensive.

With the communication tools we saw previously, it is very easy to establish low-latency conversations between microservices. The common anti-pattern is to allow microservices to exchange messages with each other freely whenever they lack the information needed to process a specific task.

This is where we must be alert. If a microservice always needs to communicate with another to complete a task, that is a sign of high coupling and means we have failed in our DDD process. This coupling results in a Death Star. For clarity, consider the following scenario.

Imagine that we have four microservices: A, B, C, and D. A request arrives asking A for information, but A does not have all the content; the content is in B and C. B is not able to complete the task assigned to it and asks C for data. C does not have all of the information either, so it asks D. However, D needs data from A. The following is a diagrammatic representation of this process:

In the end, a simple request generates a very complex flow in which any failure is difficult to monitor. It may seem natural at first but, over time and with the creation of new microservices, it makes the ecosystem unsustainable.
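The scenario just described is, in graph terms, a cycle in the call graph. A small illustrative sketch (not code from the book's application) shows how quickly the loop appears:

```python
# call dependencies from the scenario: A asks B and C, B asks C, C asks D, D asks A
calls = {"A": ["B", "C"], "B": ["C"], "C": ["D"], "D": ["A"]}

def find_cycle(graph, start, path=()):
    """Depth-first walk that returns the first cyclic call chain found, or None."""
    if start in path:
        return path + (start,)
    for callee in graph.get(start, []):
        cycle = find_cycle(graph, callee, path + (start,))
        if cycle:
            return cycle
    return None

print(find_cycle(calls, "A"))  # ('A', 'B', 'C', 'D', 'A'): the Death Star loop
```

Any non-None result means a request can loop back to its origin, which is exactly the coupling signal the DDD process should have eliminated.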

The microservices must be sufficiently well defined in their respective responsibilities for this type of messaging to be minimized.

No matter how fast communication and serialization are, if the message flow is not humanly intelligible and understandable, it will be very difficult to maintain the microservices ecosystem, especially with regard to error control.

Message broker – Async communication between services


In the previous topic, we talked about synchronous communication between microservices using binary protocols as alternatives to REST. This topic deals with communication between microservices using a message broker, that is, a messaging system with a physical element, a communication layer, and a message bus.

With messaging systems, it is practically impossible to reproduce the Death Star. The Death Star design, redrawn as a more robust application, would be something like the following:

The diagram of a messaging system is totally different, similar to the one shown in the following:

The message bus can be used for both synchronous and asynchronous communication but, certainly, its major point of emphasis is asynchronous communication.

You may wonder: if the messaging diagram is simpler and this type of tool can be used for synchronous communication, why not use messaging for all communication between microservices?

The answer to this question is quite simple. A message bus is a physical component within the microservices stack. It needs to be scaled just like any other physical component, such as data storage or a cache. This means that with a high message volume, the synchronous mode of communication could suffer unwanted delays in process responses.

It is critical for the engineering team to understand where to apply each tool correctly, without compromising the stack because of an apparent ease of use.

Within the various message brokers, there are some that stand out more, such as:

  • ActiveMQ
  • RabbitMQ
  • Kafka

Let us understand the functioning of each of them a little better.

ActiveMQ

ActiveMQ is extremely traditional and very mature. For years it was the de facto standard message bus in the Java world. There is no doubting the maturity and robustness of ActiveMQ.

It supports the programming languages most used in the market. ActiveMQ's problem is related to its most common communication protocol, STOMP: most mature ActiveMQ libraries use STOMP, which is not one of the most performant message-delivery protocols. ActiveMQ has been working on OpenWire as a replacement for STOMP, but so far it is only available for Java, C, C++, and C#.

ActiveMQ is very easy to implement, has been undergoing constant evolution, and has good documentation. If our application, the news portal, were developed on the Java platform, or any other that supports OpenWire, ActiveMQ would be an option to consider carefully.

RabbitMQ

RabbitMQ uses AMQP by default as its communication protocol, which makes it a very performant tool for the delivery of messages. RabbitMQ's documentation is incredible, and the broker has native support for the programming languages most used in the market.

Among the many message brokers, RabbitMQ stands out for its practicality of implementation, its flexibility for coupling with other tools, and its simple and intuitive API.

The following code shows how simple it is to create a system of Hello World in Python with RabbitMQ:

    # import the library used to communicate with RabbitMQ 
    import pika 
    # create the connection 
    connection = pika.BlockingConnection( 
      pika.ConnectionParameters(host='localhost')) 
    # get a channel from RabbitMQ 
    channel = connection.channel() 
    # declare the queue 
    channel.queue_declare(queue='hello') 
    # publish the message 
    channel.basic_publish(exchange='', 
                      routing_key='hello', 
                      body='Hello World!') 
    print(" [x] Sent 'Hello World!'") 
    # close the connection 
    connection.close() 

The preceding example, taken from the official RabbitMQ site, is responsible for sending the message Hello World to the hello queue. The following is the code that receives the message:

    # import the library used to communicate with RabbitMQ 
    import pika 
  
    # create the connection 
    connection = pika.BlockingConnection( 
    pika.ConnectionParameters(host='localhost')) 
 
    # get a channel from RabbitMQ 
    channel = connection.channel() 
 
    # declare the queue 
    channel.queue_declare(queue='hello') 
  
    # create a method where we'll work with the message received 
    def callback(ch, method, properties, body): 
      print(" [x] Received %r" % body) 
 
    # consume the message from the queue, passing it to the method 
    # created above 
    channel.basic_consume(callback, 
                          queue='hello', 
                          no_ack=True) 
 
    print(' [*] Waiting for messages. To exit press CTRL+C') 
 
    # keep alive the consumer until interrupt the process 
    channel.start_consuming() 
 

The code is very simple and readable. Another RabbitMQ feature is its practical approach to scalability: there are performance tests indicating that RabbitMQ can support 20,000 messages per second per node.

Kafka

Kafka is not the simplest message broker on the market, especially with regard to understanding its inner workings. However, it is by far the most scalable and performant message broker for the delivery of messages.

In the past, unlike ActiveMQ and RabbitMQ, Kafka did not offer transactional message delivery, which could be a problem in applications where losing any kind of information had a high cost. In the most recent versions of Kafka, this problem has been solved, and there are now transactional delivery options.

With Kafka, the numbers are really impressive. Some benchmarks indicate that Kafka supports, without difficulty, over 100,000 messages per second per node.

Another strong point of Kafka is its ability to integrate with various other tools, such as Apache Spark. Kafka has good documentation. However, Kafka does not have as wide a range of supported programming languages.

For cases where performance needs to reach high levels, Kafka is more than a suitable option.

Note

For our news portal, we adopt RabbitMQ due to its compatibility with our stack and the good performance it offers, which suits the current situation of our application.

Caching tools


For microservices and modern web applications, the cache is not just a tool that relieves pressure on the database; it is a matter of strategy, something that can be used widely to make the application far more performant than it would be without it. Choosing and configuring the cache layer well is crucial to success.

There are cache strategies consisting of using the cache as a loading point for the database. Observe the following diagram:

In the preceding diagram, we see that requests arrive at our API but are not processed and sent directly to the database. All valid requests are cached and simultaneously put into a queue.

Consumers read the queue and process the information; only after that is the data stored in the database. Eventually, the cache is rewritten with the data updates consolidated in the database. With this strategy, any information requested from the API is placed in the cache before it reaches the database, giving the database the time it needs for processing.

For the end user, the HTTP 200 response is sent as soon as the data is stored in the cache, not only after the information is registered in the database, since that part of the process occurs asynchronously.
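The flow described above can be sketched with in-memory stand-ins: a dict for the cache, a queue.Queue for the message broker, and another dict for the database. All names are illustrative, not part of the book's stack:

```python
import queue

cache = {}                    # stands in for Redis or Memcached
database = {}                 # stands in for the real datastore
write_queue = queue.Queue()   # stands in for the message broker

def handle_write(key, value):
    """API handler: write to the cache, enqueue for persistence, answer 200."""
    cache[key] = value             # the data is readable immediately
    write_queue.put((key, value))  # persistence happens asynchronously
    return 200                     # the user is answered before the DB write

def consume_one():
    """Worker: drain one queued write into the database."""
    key, value = write_queue.get()
    database[key] = value

status = handle_write("news:1", {"title": "Hello"})
print(status, "news:1" in database)  # 200 False: answered before persisting
consume_one()
print("news:1" in database)          # True: now consolidated in the database
```

In a real deployment the worker runs as a separate consumer process, and the queue is the broker chosen earlier in the chapter.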

To have the possibility of this kind of strategy, we have to analyze the tools we have available. The best known on the market are:

  • Memcached
  • Redis

Let's look at the features of each.

Memcached

Memcached is one of the best-known and most mature caching tools on the market. It has a key/value storage scheme that is very memory efficient.

For classic cache usage, Memcached is simple and practical to use. Its performance is fully tied to the use of memory: if Memcached uses the disk to register any data, performance is seriously compromised. Moreover, Memcached has no native capacity to persist to disk and always depends on third-party tools for this.

Redis

Redis can practically be considered the new market standard when it comes to caching. Redis is effectively a key/value database but, because of its stupendous performance, it has ended up being adopted as a caching tool.

The Redis documentation is very good and easy to understand; even though the concept is simple, the tool is equipped with many features, such as pub/sub and queues.

Because of its convenience, flexibility, and internal working model, Redis has practically relegated all other caching systems to legacy status.

Redis's control of memory usage is very powerful. Most cache systems are very efficient at writing and reading data from memory, but not at purging data and returning memory for reuse. Redis stands out in this respect as well, with good performance at returning memory for reuse after purging data.

Unlike Memcached, Redis has native and extremely configurable persistence, with two storage models: RDB and AOF.

The RDB model makes data persistent by using snapshots. This means that, within a configurable period of time, the information in memory is persisted on disk. The following is an example of a Redis configuration file using the RDB model of persistence:

    save 60 1000
    stop-writes-on-bgsave-error no
    rdbcompression yes
    dbfilename dump.rdb

The settings are simple and intuitive. First, we have the save directive itself:

    save 60 1000

The preceding line indicates that Redis should take the snapshot that persists the data every 60 seconds if at least 1,000 keys have changed. Changing the line to something like:

   save 900 1

This is the same as telling Redis to persist a snapshot every 15 minutes (900 seconds) if at least one key has been modified.
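The save lines can be read as "snapshot when a rule's time window has elapsed and its change threshold was met". A small sketch of that decision logic (illustrative only, not Redis source code):

```python
def should_snapshot(elapsed_seconds, changed_keys, rules):
    """rules is a list of (seconds, min_changes) pairs, one per save line."""
    return any(elapsed_seconds >= seconds and changed_keys >= min_changes
               for seconds, min_changes in rules)

# the default Redis save rules: save 900 1, save 300 10, save 60 10000
rules = [(900, 1), (300, 10), (60, 10000)]

print(should_snapshot(70, 20000, rules))  # True: the 60s/10,000-changes rule fires
print(should_snapshot(30, 5, rules))      # False: no rule is satisfied yet
```

Multiple save lines therefore act as alternative triggers, so a burst of writes forces an early snapshot while a trickle of writes is still persisted eventually.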

The second line of our sample configuration is as follows:

    stop-writes-on-bgsave-error no

This tells Redis to move on with its persistence attempts even in case of error. The default value of this setting is yes but, if the development team monitors Redis persistence, the best option is no.

Usually, Redis compresses the data to be persisted in order to save disk usage; this is controlled by the following setting:

     rdbcompression yes 

But if performance is critical with respect to the cache, this value can be changed to no; the amount of disk consumed by Redis will, however, be much higher.

Finally, we have the name of the file to which the data will be persisted by Redis:

    dbfilename dump.rdb

This name is the default name in the configuration file but can be modified without major concerns.

The other model is AOF persistence. This model is safer with respect to keeping the recorded data; however, it carries a higher performance cost for Redis. The following is a configuration template for AOF:

    appendonly no
    appendfsync everysec

The first line of this example presents the appendonly command, which indicates whether the AOF persistence mode is active or not.

In the second line of the sample configuration we have:

     appendfsync everysec

The appendfsync policy controls when fsync tells the operating system to actually persist data to disk rather than buffer it. The appendfsync directive has three configuration modes, no, everysec, and always, as shown in the following:

  • no: Disables fsync, letting the operating system decide when to flush data (fastest, but least safe)
  • everysec: Performs fsync once per second, meaning at most one second of data can be lost
  • always: Performs fsync on every write, which is the safest but slowest mode

You may be wondering why we looked at this part of Redis persistence. The motivation is simple: we must know exactly what power we gain from a persistent cache and how we can apply it.

Some development teams also use Redis as a message broker. The tool is very fast in this role, but it is definitely not the most appropriate for the task, because there are no transactions in message delivery; under heavy load, messages between microservices could be lost. The situation where Redis expertly performs its function is as a cache.

Fail alert tools


Just as we prepare our product for success, we must also prepare for failures. There is nothing worse in microservices than silent errors, so receiving failure alerts as soon as possible is critical to a healthy microservices ecosystem.

There are at least four major points of failure when it comes to microservices. If these points are covered, we can say that about 70% of the application is safe. These points are as follows:

  • Performance
  • Build
  • Components
  • Implementation failures

Let's understand what each of these risk points are and how we can receive failure alerts as soon as possible.

Performance

Further on, we will look at some very interesting tools for proving the performance of our endpoints. Testing endpoints locally helps to anticipate performance issues that we would otherwise only see in production.

After sending the microservices to the production environment, some tools can be used to monitor the performance of the deployment as a whole. There are free as well as paid tools, including some very effective ones like New Relic and Datadog. Both are very simple to implement and have dashboards rich in information:

The following screenshot represents the interface of Datadog:

Obviously, there are performance monitoring options that are totally free, such as the traditional Graphite with Grafana, and Prometheus. The free options require more configuration than those mentioned previously to provide similar results.

Among the free options, Prometheus deserves a special mention because of its wealth of information and practical implementation. Like Graphite, Prometheus also integrates with Grafana for displaying performance graphs. The following screenshot represents the use of Prometheus:

Build

This is a very important point because the build is the last step at which a failure can be caught without affecting the end user. One of the pillars of microservices architecture is process automation, and building and deploying are no different.

The build is usually the last stage before moving the application to a particular environment, whether a quality environment, staging, or production.

In microservices, everything must have high coverage from unit, functional, and integration tests. It seems obvious to say, but many development teams do not pay enough attention to automated testing and suffer for it later.

To automate the application build process, and consequently the deployment, a good continuous integration (CI) tool is fundamental. In this respect, one of the most mature, complete, and efficient tools is Jenkins, a free and open source project that is extremely configurable and able to fully automate processes.

There are other options like Travis, which works as an online CI and is completely free for open source projects. Something interesting about Travis is its great compatibility with GitHub.

The most important factor in working with a CI is properly setting up the application testing process because, as has been said before, this is the last stage at which to capture failures before they affect the end user of our product. The CI is also the best place for microservices integration tests.

Components

A strong characteristic of microservices architecture is the large number of components that can fail: containers, databases, caches, and message brokers all serve as examples of failure points.

Imagine a scenario where the application begins to fail simply because the hard drive of a database, a physical component, is faulty. The time to act in applications without monitoring for this type of problem is usually high, because the development and support teams always start by investigating failures on the software side. Only after confirming that the fault is not in the software do teams look for problems in physical components.

There are tools like Redis Sentinel that provide more resilience to Redis, but not all physical components have that kind of support.

A simple solution is to create a health check endpoint within each microservice. This endpoint is responsible for validating not only that the microservice instance is running, but also all the components the microservice is connected to. Tools like Nagios and Zabbix are also very useful for automating calls to health check endpoints.
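A minimal sketch of the logic behind such an endpoint (the component names and check functions here are hypothetical; real checks would ping the database, cache, or broker):

```python
def run_health_checks(checks):
    """Run each named check; a check passes if it returns without raising."""
    results = {}
    for name, check in checks.items():
        try:
            check()
            results[name] = "ok"
        except Exception as exc:
            results[name] = "fail: %s" % exc

    healthy = all(status == "ok" for status in results.values())
    return {"status": "ok" if healthy else "unhealthy", "components": results}

def check_database():
    pass  # hypothetical: a real check would run something like SELECT 1

def check_broker():
    raise ConnectionError("broker unreachable")  # hypothetical failing component

report = run_health_checks({"database": check_database, "broker": check_broker})
print(report["status"])  # unhealthy: one dependency failed
```

A web framework route would serialize this report as JSON with HTTP 200 or 503, giving Nagios or Zabbix a single URL to poll per microservice.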

Implementation gaps

In some cases, automated tests may not have been well written and do not cover all cases, or some external component, such as a vendor API, starts throwing errors in the application.

Often these errors are silent, and we only find out after a user reports them. But questions remain: how many users experienced the error and did not report it? What loss of value did the mistake cause the product?

These questions have no answers and are almost impossible to quantify. To capture this kind of problem as quickly as possible, we need to monitor the internal failures of the application at all times.

For this type of monitoring, there are a number of tools, but the most prominent is Sentry. Sentry has very interesting features:

  • See the impact of new deployments in real time
  • Provide support to specific users interrupted by an error
  • Detect and thwart fraud as it's attempted: unusual amounts of failures on purchases, authentication, and other critical areas
  • External integrations

Sentry has a cost and, unfortunately, there is no free option that is as effective.

With the four fault points covered by warning systems, we are safe to continue development and put our microservices into production with an automated and continuous process.

The databases


Despite the familiarity that programming languages and development frameworks have with relational databases, this type of database is not good enough to cover all application usage scenarios.

To choose a database, it is necessary to assess how the microservice in question works and operates. There are situations where a relational database makes sense, others where a NoSQL database can be better and, of course, situations where neither of these databases is sufficient and we need to use a graph database. This is exactly the situation in our news portal.

It would be very simple to say that SQL is sufficient for everything, but it definitely is not. Let's think about our microservices to see what kind of database is best for each. The application has these areas:

  • SportNewsService
  • PoliticsNewsService
  • FamousNewsService
  • RecommendationService
  • UsersService

For the microservices that directly show news, if you think about it, there is no deep relational structure. The purpose of these microservices is simply and quickly to bring out news by theme, in batches. This behavior is very close to NoSQL, where the relational structure is weak and basic operations such as searches are faster.

For the services with the features previously mentioned, we will use a NoSQL database, which in this case will be MongoDB. This choice is based on its good documentation, performance benchmarks, implementation simplicity, and cost.

UsersService is totally different. A user's credentials for an application are only a login and a password, but there is other data related to this login and password, such as registration data or default preferences. So we have relevant relationships in this area, and the best option is to use a conventional SQL database.

UsersService can use a database such as MariaDB, MySQL, Oracle, SQL Server, or PostgreSQL, for example. In the case of our application, we will use PostgreSQL due to its features, maturity, and compatibility with the rest of the stack already selected.

There is another microservice with an entirely different function from the previous ones. RecommendationService must establish the relationship between preferences and users, but not only that: it must also be able to tell which, and how many, users are interested in the same topic. You could create this type of relationship with a conventional SQL database; however, over time, increasingly complex queries will emerge, and the microservice's speed, as well as its maintainability, could be compromised by a bad choice made at the stack's conception.

For RecommendationService, a good option is to adopt the Neo4j database. The quality of its work with graphs and the simplicity of the tool are exactly what we are looking for in this microservice.

The big goal when it comes to databases is, again, understanding how each domain behaves and not settling into a comfort zone at the time of choice. The most important thing is always to choose the best tool for each case.

Locale proof performance


One of the worst situations that can occur when working with microservices architecture is to put code into production and find that its performance is poor. Bringing the code back to the development environment, knowing that the project in production is compromised and users are having a bad experience, is extremely frustrating when the problem could have been analyzed beforehand on the technical side.

The problem now in production could have been predicted, and even solved, in the development environment. To register this type of metric, there are many tools that can prove performance in the local environment.

Obviously, local behavior will not perfectly reflect the production environment. There are many factors to consider, such as network latency, the machines used for deployment and production, and communication with external tools. However, we can take local metrics to spot a new algorithm or piece of functionality that has compromised the overall performance of the application.

For local application metrics, there are some tools:

  • Apache Benchmark
  • WRK
  • Locust

Each tool has specific features, but all serve the same purpose which is to get metrics about the endpoints.

Apache Benchmark

Apache Benchmark is better known as AB, and that's what we'll call it.

AB runs from the command line and is very useful to prove the speed and response of endpoints.

Running a local performance test is very simple, as can be seen in the following example:

$ ab -c 100 -n 10000 http://localhost:5000/

The preceding command line invokes AB to run 10,000 requests (-n 10000), simulating 100 concurrent users (-c 100), against the local route on port 5000 (http://localhost:5000/). The displayed result will be something like the following screenshot:

The result shows the server that performed the processing (Werkzeug/0.12.2), the hostname (localhost), the port (5000), and a set of other information.

The most important data generated by AB is:

  • Request per second: 444.99
  • Time per request: 224.722 ms (mean)
  • Time per request: 2.247 ms (mean, across all concurrent requests)

These three pieces of information at the end of the test process indicate the local application's performance. As we can see in this case, the application under test returns 444.99 requests per second with 100 concurrent users and a load of 10,000 requests.
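These figures are internally consistent, as a quick check against the numbers above shows: the mean time per request across all concurrent requests is the inverse of the throughput, and the per-user mean is that value multiplied by the concurrency.

```python
rps = 444.99        # requests per second reported by AB
concurrency = 100   # the -c value used in the test

# mean time per request across all concurrent requests: 1000 ms / throughput
per_request_ms = 1000 / rps

# mean time per request seen by one user: concurrency times the value above
per_user_ms = concurrency * per_request_ms

# approximately 2.247 ms and 224.72 ms, matching AB's report up to rounding
print(round(per_request_ms, 3), round(per_user_ms, 2))
```

Keeping this relationship in mind helps spot copy errors when comparing AB runs: if the two "time per request" lines do not differ by exactly the concurrency factor, the numbers came from different runs.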

Obviously, this is the simplest test scenario that can be run with AB. The tool has a number of other features, such as exporting graphs of the performance tests performed, simulating all the verbs that a REST API can run, and working with HTTPS certificates; these are only a few of the attributes AB offers as resources.

WRK

Similar to AB, WRK is also a command-line tool, and it serves the same purpose as AB. The following screenshot represents the WRK tool:

To run WRK is also very simple. Just use the following command:

    $ wrk -c 100 -d 10 -t 4 http://localhost:5000/

However, WRK has some characteristics that differ from AB. The preceding command means that WRK will run a performance test for ten seconds (-d 10), with 100 concurrent users (-c 100), and will request four threads from the operating system for this task (-t 4).

Looking quickly at the command line, you can see that there is no limit or stated number of requests to be executed; WRK does not work that way. The test WRK proposes is to apply load stress for a period of time.

Ten seconds after executing the preceding command line, WRK will return the information shown in the following screenshot:

Clearly, the returned data is more concise, but it is enough to understand how our application behaves under sustained load over a period of time.

Again, it is good to point out that this is a local test, and the WRK results do not necessarily reflect the reality of an application in production. However, WRK offers good numbers for establishing application metrics.

From the data generated by WRK, we can see that after the 10-second test with 100 concurrent users and four threads, our application in the local environment shows the following numbers:

  • Requests/sec: 365.55
  • 268.68ms latency (mean)

The WRK figures are somewhat lower than those provided by AB; this is clearly a result of the type of test performed by each tool.

WRK is very flexible for running tests, including accepting the use of scripts in the Lua programming language to perform some specific tasks.

WRK is one of my favorite tools for local performance tests. The type of test WRK performs is very close to a real-world load, and it produces numbers close to actual results.

Locust

Of the tools listed as examples for gathering local API metrics, Locust is the only one with a visual interface. Another interesting feature is the ability to test multiple endpoints simultaneously.

Locust's interface is very simple and easy to understand. The number of concurrent users to be simulated is set directly in the interface's data input. After the process starts, the Locust GUI shows the HTTP verb used, the path each request was directed to, the number of requests made during the test, and a series of numbers for the collected metrics.

The GUI can be seen in detail in the following screenshot:

Using Locust is very simple. The first step is installation. Unlike AB and WRK, Locust is installed from PyPI through pip, the Python package installer. Use the following command:

$ pip install locustio

After installation, you must create a configuration file called locustfile.py with the following contents:

# import all the modules necessary to run Locust
from locust import HttpLocust, TaskSet, task

# create a class with TaskSet as its parent
class WebsiteTasks(TaskSet):

    # mark every task to be measured with the @task decorator
    @task
    def index(self):
        # the function name will be the endpoint name in Locust;
        # issue a GET request against the application root
        self.client.get("/")

    @task
    def about(self):
        self.client.get("/about/")

# create a class setting the main task set and the wait time
# (in milliseconds) between each simulated user's requests
class WebsiteUser(HttpLocust):
    task_set = WebsiteTasks
    min_wait = 5000
    max_wait = 15000

After the configuration file is created, it is time to run Locust. To do this, it is necessary to use the following command line:

$ locust -f locustfile.py

Locust will provide a URL to access the visual interface, where the metrics can be followed.

Initially, Locust's configuration may seem more complex than that of the other tools shown in this section. However, after the initial setup, running the tests is very simple. As with AB and WRK, Locust has many more features for deeper testing.
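For example, Locust can also run without the GUI. The exact flags depend on the release, so treat this as a sketch based on the locustio 0.x era:

```shell
# headless run: 100 simulated users, hatching 10 per second
# (newer Locust versions use --headless, --users, and --spawn-rate instead)
locust -f locustfile.py --no-web -c 100 -r 10
```

This mode is handy when the metrics need to be captured from a CI job rather than read off the web interface.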

Summary


In this chapter, we learned the importance of making choices about the microservices stack. At first, this kind of decision may seem very complex, but if we keep in mind the purpose of each area we want to develop, the process loses much of its complexity.

We have seen that programming languages, frameworks, and databases have defined purposes, and we should never disregard this fact. A simple illustration: trains are not made to fly. This does not mean a given tool is bad; it is simply not appropriate for a task it was not designed for.

We have also seen the importance of caches, how quick and agile communication between microservices is established, and the importance of fault alerts in the several layers of our microservices.

Finally, we got to know some tools that help us verify the performance of microservices while still in the local environment.

Armed with the knowledge acquired in this chapter, we are able to move on to the next and create our microservices effectively.

In Chapter 3, Internal Patterns, we'll start to code our first microservice.

 


Key benefits

  • Get to grips with the microservice architecture and build enterprise-ready microservice applications
  • Learn design patterns and best practices while building a microservice application
  • Obtain hands-on techniques and tools to create high-performing microservices resilient to possible failures

Description

Microservices are a hot trend in the development world right now. Many enterprises have adopted this approach to achieve agility and the continuous delivery of applications to gain a competitive advantage. This book will take you through different design patterns at different stages of microservice application development, along with their best practices. Microservice Patterns and Best Practices starts with the key concepts of microservices and shows how to make the right choices while designing them. You will then move on to internal microservice application patterns, such as caching strategies, asynchronism, CQRS and event sourcing, circuit breaker, and bulkheads. As you progress, you'll learn the design patterns of microservices. The book will guide you on where to use each design pattern during application development and how to break a monolithic application into microservices. You will also be taken through the best practices and patterns involved in testing, securing, and deploying your microservice application. By the end of the book, you will easily be able to create interoperable microservices that are testable and prepared for optimum performance.

Who is this book for?

This book is for architects and senior developers who would like to implement microservice design patterns in their enterprise application development. The book assumes some prior programming knowledge.

What you will learn

  • How to break a monolithic application into microservices
  • Implement caching strategies, CQRS and event sourcing, and circuit breaker patterns
  • Incorporate different microservice design patterns, such as shared data, aggregator, proxy, and chained
  • Utilize consolidated testing patterns, such as integration, signature, and monkey tests
  • Secure microservices with JWT, API gateway, and single sign-on
  • Deploy microservices with continuous integration or delivery and Blue-Green deployment
Product Details

Publication date: Jan 31, 2018
Length: 366 pages
Edition: 1st
Language: English
ISBN-13: 9781788474030




Table of Contents

13 Chapters
Understanding the Microservices Concepts
The Microservice Tools
Internal Patterns
Microservice Ecosystem
Shared Data Microservice Design Pattern
Aggregator Microservice Design Pattern
Proxy Microservice Design Pattern
Chained Microservice Design Pattern
Branch Microservice Design Pattern
Asynchronous Messaging Microservice
Microservices Working Together
Testing Microservices
Monitoring Security and Deployment
