How-To Tutorials - DevOps

47 Articles

GitLab's new DevOps solution

Erik Kappelman
17 Jan 2018
5 min read
Can it be real? The complete DevOps toolchain integrated into one tool, one UI and one process? GitLab seems to think so. GitLab has already made huge strides in terms of centralizing the DevOps process into a single tool. Up until now, most of the focus has been on creating a seamless development system, and operations have not been as important. What's new is the extension of the tool to include the operations side of DevOps as well as the development side.

Let's talk a little about what DevOps is in order to fully appreciate the advances offered by GitLab. DevOps is essentially a holistic approach to software development, quality assurance, and operations. While each of these elements of software creation is distinct, they are all heavily reliant on the other elements to be effective. The DevOps approach is to acknowledge this interdependence and then try to leverage it to increase productivity and to enhance the final user experience. Two of the most talked about elements of DevOps are continuous integration and continuous deployment.

Continuous integration and deployment

Continuous integration and deployment are aimed at continuously integrating changes to a codebase, potentially from multiple sources, and then continuously deploying these changes into production. These tools require a fairly sophisticated automation and testing framework in order to be really effective. There are plenty of tools for one or the other, but the notion behind GitLab is essentially that if you can drive both of these processes from the same UI, they become that much more efficient. GitLab has shown this to be true.

There is also the human side to consider: coming up with what tasks need to be performed, assigning these tasks to developers, and monitoring their progress. GitLab offers tools that help streamline this process as well. You can track issues and create issue boards to organize workflow, and these issue boards can be sliced a number of different ways so that most imaginable organizational needs can be met.

Monitoring and delivery

So far, we've seen that DevOps is about bringing everything together into a smooth process, and GitLab wants that process to occur in one place. GitLab can help you from planning to deployment and everywhere in between. But GitLab isn't satisfied with stopping at deployment, and they shouldn't be. When we think about the three legs of DevOps - development, operations, and quality assurance and testing - what I've said about GitLab really only applies to the development leg. This is an unfortunately common problem with DevOps tools and organizational strategies. They seem to cater to developers and basically no one else. Maybe devs complain the most, I don't know. GitLab has basically solved the DevOps problems between planning and deployment and, naturally, wants to move on to the monitoring and delivery of applications. This is a really exciting direction.

After all, software is ultimately about making things happen. Sometimes it's easy to lose sight of this and only focus on the tools that make the software. It is sometimes tempting to view software development as being inherently important, but it's really not; it's a process of making stuff for people to use. If you get too far away from that truth, things can get sticky. I think this is part of the reason the Ops side of DevOps is often overlooked. Operations is concerned with managing the software out there in the wild. This includes dealing with network and hardware considerations and end users. GitLab wants operations to take place using the same UI as development. And why not? It's the same application, isn't it? And in addition to technical performance, what about how the users are interacting with the application? If the application is somehow monetized, why shouldn't that information also be available in the same UI as everything else having to do with this application? Again, it's still the same application.

One tool to rule them all

If you take a minute to step back and appreciate the vision of GitLab's current direction, I think you can see why this is so exciting. If GitLab succeeds in the long term at extending its reach into every element of an application's lifecycle, including user interactions, productivity would skyrocket.

This idea isn't really new. The 'one tool to rule them all' isn't even that imaginative a concept. It's just that no one has ever really created this 'one tool.' I believe we are about to enter, or have already entered, a DevOps space race. I believe GitLab is comfortably leading the pack, but they will need to keep working hard if they want it to stay that way. I believe we will be getting the one tool to rule them all, and I believe it is going to be soon. The way things are looking, GitLab is going to be the one to bring it to us, but only time will tell.

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.
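To make the continuous integration and deployment discussion above a little more concrete, here is a minimal sketch of a GitLab CI/CD pipeline definition (.gitlab-ci.yml). The stage names, make targets, and deploy script are illustrative assumptions for this example, not something prescribed by GitLab or taken from the article:

# Hypothetical .gitlab-ci.yml with build, test, and deploy stages
stages:
  - build
  - test
  - deploy

build_job:
  stage: build
  script:
    - make build          # assumes a Makefile with a 'build' target

test_job:
  stage: test
  script:
    - make test           # assumes a 'test' target

deploy_job:
  stage: deploy
  script:
    - ./deploy.sh         # placeholder deployment script
  environment: production
  only:
    - master

Each push to the repository would run the build and test stages automatically, with the deploy stage restricted to the master branch - the same pipeline, visible in the same UI as the issues and merge requests.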


6 signs you need containers

Richard Gall
05 Feb 2019
9 min read
I'm not about to tell you containers are a hot new trend - clearly, they aren't. Today, they are an important part of the mainstream software development industry that probably won't be disappearing any time soon. But while containers certainly can't be described as a niche or marginal way of deploying applications, they aren't necessarily ubiquitous. There are still developers or development teams yet to fully appreciate the usefulness of containers. You might know them - you might even be one of them.

Joking aside, there are often many reasons why people aren't using containers. Sometimes these are good reasons: maybe you just don't need them. Often, however, you do need them, but the mere thought of changing your systems and workflow can feel like more trouble than it's worth. If everything seems to be (just about) working, why shake things up? Well, I'm here to tell you that more often than not it is worthwhile. But to know that you're not wasting your time and energy, there are a few important signs that can tell you if you should be using containers.

Download Containerize Your Apps with Docker and Kubernetes for free, courtesy of Microsoft.

Your codebase is too complex

There are few developers in the world who would tell you that their codebase couldn't do with a little pruning and simplification. But if your code has grown into a beast that everyone fears and doesn't really understand, containers could probably help you a lot.

Why do containers help simplify your codebase?

Let's think about how spaghetti code actually happens. Yes, it always happens by accident, but usually it's something that evolves out of years of solving intractable problems with knock-on effects and consequences that only need to be solved later. By using containers you can begin to think differently about your code. Instead of everything being tied up together, like a complex concrete network of road junctions, containers allow you to isolate specific parts of it. When you can better isolate your code, you can also isolate different problems and domains. This is one of the reasons that containers are so closely aligned with microservices.

Software testing is nightmarish

The efficiency benefits of containers are well documented, but the way containers can help the software testing process is often underplayed - this probably says more about a general inability to treat testing with the respect and time it deserves than anything else.

How do containers make testing easier?

There are a number of reasons containers make software testing easier. On the one hand, by using containers you're reducing the gap between the development environment and production, which means you shouldn't be faced with as many surprises once your code hits production as you sometimes might. Containers also make the testing process faster - you only need to test against a container image, you don't need a fully-fledged testing environment for every application you test. What this all boils down to is that testing becomes much quicker and easier. In theory, then, this means the testing process fits much more neatly within the development workflow. Code quality should never be seen as a bottleneck; with containers it becomes much easier to embed the principle in your workflow.

Read next: How to build 12 factor microservices on Docker

Your software isn't secure - you've had breaches that could have been prevented

Spaghetti code and a lack of effective testing can lead to major security risks. If no one really knows what's going on inside your applications and inside your code, it's inevitable that you'll have vulnerabilities. And, in turn, it's highly likely these vulnerabilities will be exploited.

How can containers make software security easier?

Because containers allow you to make changes to parts of your software infrastructure (rather than requiring wholesale changes), security patches become much easier to apply. Essentially, you can isolate the problem and tackle it. Without containers, it becomes harder to isolate specific pieces of your infrastructure, which means any changes could have a knock-on effect on other parts of your code that you can't predict.

That being said, it is worth mentioning that containers do still pose a significant set of security challenges. While simplicity in your codebase can make testing easier, you are replacing simplicity at that level with increased architectural complexity. To really feel the benefits of container security, you need a strong sense of how your container deployments are working together and how they might interact.

Your software infrastructure is expensive (you feel the despair of vendor lock-in)

Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you're going to be running through resources at a rapid rate. While you might end up spending big on more traditional compute resources, the tools around container management and automation are getting cheaper. One of the costs of many organizations' software infrastructure is lock-in. This isn't just about price, it's about the restrictions that come with sticking with a certain software vendor - you're spending money on software systems that are almost literally restricting your capacity for growth and change.

How do containers solve the software infrastructure problem and reduce vendor lock-in?

Traditional software infrastructure - whether that's on-premise servers or virtual ones - is a fixed cost: you invest in the resources you need, and then either use them or you don't. With containers running on, say, cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money for AWS or Azure, but because containers are much more portable, moving your applications between providers is much less hassle and risk.

Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

DevOps is a war, not a way of working

Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn't mean it's now part of the norm. There are plenty of organizations that simply don't get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don't get on), but DevOps often fails where the code that's being written and deployed is too complicated for anyone to properly take accountability for. This takes us back to the issue of the complex codebase. Think of it this way - if code is a gigantic behemoth that can't be easily broken up, the unintended effects and consequences of every new release and update can cause some big problems, both personally and technically.

How do containers solve DevOps challenges?

Containers can help solve the problems that DevOps aims to tackle by breaking up software into different pieces. This means that developers and operations teams have much more clarity on what code is being written and why, as well as what it should do. Indeed, containers arguably facilitate DevOps practices more effectively than DevOps proponents were able to in pre-container years.

Adding new product features is a pain

The issue of adding features or improving applications is a complaint that reaches far beyond the development team. Product management, marketing - these departments will all bemoan the inability to make necessary changes or add new features that they argue are business critical. Often, developers will take the heat. But traditional monolithic applications make life difficult for developers - you simply can't make changes or updates easily. It's like wanting to replace a radiator and having to redo your house's plumbing. This actually returns us to the earlier point about DevOps - containers make DevOps easier because they enable faster delivery cycles. You can make changes to an application at the level of a container or set of containers. Indeed, you might even simply kill one container and replace it with a new one. In turn, this means you can change and build things much more quickly.

How do containers make it easier to update or build new features?

To continue with the radiator analogy: containers would allow you to replace or change an individual radiator without having to gut your home. Essentially, if you want to add a new feature or change an element, you wouldn't need to go into your application and make wholesale changes - that may have unintended consequences - instead, you can simply make the change by running the resources you need inside a new container (or set of containers).

Watch for the warning signs

As with any technology decision, it's well worth paying careful attention to your own needs and demands. So, before fully committing to containers, or containerizing an application, keep a close eye on the signs that they could be a valuable option. Containers may well force you to come face to face with the reality of technical debt - and if they do, so be it. There's no time like the present, after all. Of course, all of the problems listed above are ultimately symptoms of broader issues or challenges you face as a development team or wider organization. Containers shouldn't be seen as a sure-fire corrective, but they can be an important element in changing your culture and processes.

Learn how to containerize your apps with a new eBook, free courtesy of Microsoft. Download it here.
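As a rough illustration of the testing point above (the image name, Dockerfile, and test script are assumptions made for this example, not taken from the article), testing against a container image can be as simple as building once and running the suite inside the same image that will later be promoted:

$ docker build -t myapp:candidate .
$ docker run --rm myapp:candidate ./run_tests.sh
$ docker tag myapp:candidate registry.example.com/myapp:candidate
$ docker push registry.example.com/myapp:candidate

Because the image that passed the tests is the exact artifact that gets pushed, the gap between the test environment and production stays small.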


Building Docker images using Dockerfiles

Aarthi Kumaraswamy
12 Apr 2018
8 min read
Docker images are read-only templates. They give us containers during runtime. Central to this is the concept of a 'base image'. Layers then sit on top of this base image. For example, you might have a base image of Fedora or Ubuntu, but you can then install packages or make modifications over the base image to create a new layer. The base image and new layer can then be treated as a completely new image. In the image below, Debian is the base image and emacs and Apache are the two layers added on top of it. They are highly portable and can be shared easily:

Source: Docker

Image layers

Layers are transparently laid on top of the base image to create a single coherent filesystem. There are a couple of ways to create images: one is by manually committing layers, and the other is through Dockerfiles. In this recipe, we'll create images with Dockerfiles. Dockerfiles help us automate image creation and get precisely the same image every time we want it. The Docker builder reads instructions from a text file (a Dockerfile) and executes them one after the other, in order. It can be compared to Vagrant files, which allow you to configure VMs in a predictable manner.

Getting ready

A Dockerfile with build instructions.

Create an empty directory:

$ mkdir sample_image
$ cd sample_image

Create a file named Dockerfile with the following content:

$ cat Dockerfile
# Pick up the base image
FROM fedora
# Add author name
MAINTAINER Neependra Khare
# Add the command to run at the start of container
CMD date

How to do it…

Run the following command inside the directory where we created the Dockerfile to build the image:

$ docker build .

We did not specify any repository or tag name while building the image. We can give those with the -t option as follows:

$ docker build -t fedora/test .

The preceding output is different from what we did earlier. However, here we are using a cache after each instruction. Docker tries to save the intermediate images as we saw earlier and tries to use them in subsequent builds to accelerate the build process. If you don't want to cache the intermediate images, then add the --no-cache option with the build. Let's take a look at the available images now:

How it works…

A context defines the files used to build the Docker image. In the preceding command, we define the context to the build. The build is done by the Docker daemon and the entire context is transferred to the daemon. This is why we see the Sending build context to Docker daemon 2.048 kB message. If there is a file named .dockerignore in the current working directory with a list of files and directories (newline separated), then those files and directories will be ignored by the build context. More details about .dockerignore can be found at https://docs.docker.com/reference/builder/#the-dockerignore-file.

After executing each instruction, Docker commits the intermediate image and runs a container with it for the next instruction. After the next instruction has run, Docker will again commit the container to create the intermediate image and remove the intermediate container created in the previous step. For example, in the preceding screenshot, eb9f10384509 is an intermediate image and c5d4dd2b3db9 and ffb9303ab124 are the intermediate containers. After the last instruction is executed, the final image will be created. In this case, the final image is 4778dd1f1a7a.

The -a option can be specified with the docker images command to look for intermediate layers:

$ docker images -a

There's more…

The format of the Dockerfile is:

INSTRUCTION arguments

Generally, instructions are given in uppercase, but they are not case sensitive. They are evaluated in order. A # at the beginning of a line is treated as a comment. Let's take a look at the different types of instructions (a short example Dockerfile combining several of them follows at the end of this section):

FROM: This must be the first instruction of any Dockerfile and sets the base image for subsequent instructions. By default, the latest tag is assumed:

FROM <image>

Alternatively, a tag can be given:

FROM <image>:<tag>

There can be more than one FROM instruction in one Dockerfile to create multiple images. If only image names, such as fedora and ubuntu, are given, then the images will be downloaded from the default Docker registry (Docker Hub). If you want to use private or third-party images, then you have to mention them as follows:

[registry_hostname[:port]/][user_name/](repository_name:version_tag)

Here is an example using the preceding syntax:

FROM registry-host:5000/nkhare/f20:httpd

MAINTAINER: This sets the author for the generated image: MAINTAINER <name>.

RUN: We can execute the RUN instruction in two ways. First, run in the shell (sh -c):

RUN <command> <param1> ... <paramN>

Second, directly run an executable:

RUN ["executable", "param1",...,"paramN"]

As we know, with Docker we create an overlay - a layer on top of another layer - to make the resulting image. Through each RUN instruction, we create and commit a layer on top of the earlier committed layer. A container can be started from any of the committed layers. By default, Docker tries to cache the layers committed by different RUN instructions, so that they can be used in subsequent builds. However, this behavior can be turned off using the --no-cache flag while building the image.

LABEL: Docker 1.6 added a new feature to attach arbitrary key-value pairs to Docker images and containers. We covered part of this in the Labeling and filtering containers recipe in Chapter 2, Working with Docker Containers. To give a label to an image, we use the LABEL instruction in the Dockerfile, for example LABEL distro=fedora21.

CMD: The CMD instruction provides a default executable while starting a container. If the CMD instruction does not have an executable (parameter 2), then it will provide arguments to ENTRYPOINT:

CMD ["executable", "param1",...,"paramN"]
CMD ["param1", ... , "paramN"]
CMD <command> <param1> ... <paramN>

Only one CMD instruction is allowed in a Dockerfile. If more than one is specified, then only the last one will be honored.

ENTRYPOINT: This helps us configure the container as an executable. Similar to CMD, there can be at most one ENTRYPOINT instruction; if more than one is specified, then only the last one will be honored:

ENTRYPOINT ["executable", "param1",...,"paramN"]
ENTRYPOINT <command> <param1> ... <paramN>

Once the parameters are defined with the ENTRYPOINT instruction, they cannot be overwritten at runtime. However, ENTRYPOINT can be used together with CMD if we want to pass different default parameters to ENTRYPOINT.

EXPOSE: This exposes the network ports on the container on which it will listen at runtime:

EXPOSE <port> [<port> ... ]

We can also expose a port while starting the container. We covered this in the Exposing a port while starting a container recipe in Chapter 2, Working with Docker Containers.

ENV: This will set the environment variable <key> to <value>. It will be passed to all future instructions and will persist when a container is run from the resulting image:

ENV <key> <value>

ADD: This copies files from the source to the destination:

ADD <src> <dest>

The following form is for paths containing whitespace:

ADD ["<src>"... "<dest>"]

<src>: This must be a file or directory inside the build directory from which we are building an image, which is also called the context of the build. A source can be a remote URL as well.

<dest>: This must be the absolute path inside the container to which the files/directories from the source will be copied.

COPY: This is similar to ADD:

COPY <src> <dest>
COPY ["<src>"... "<dest>"]

VOLUME: This instruction will create a mount point with the given name and flag it as mounting an external volume, using the following syntax:

VOLUME ["/data"]

Alternatively, you can use the following code:

VOLUME /data

USER: This sets the username for any of the following run instructions, using the following syntax:

USER <username>/<UID>

WORKDIR: This sets the working directory for the RUN, CMD, and ENTRYPOINT instructions that follow it. It can have multiple entries in the same Dockerfile. A relative path can be given, which will be relative to the earlier WORKDIR instruction, using the following syntax:

WORKDIR <PATH>

ONBUILD: This adds trigger instructions to the image that will be executed later, when this image is used as the base image of another image. The trigger will run as part of the FROM instruction in the downstream Dockerfile, using the following syntax:

ONBUILD [INSTRUCTION]

See also

Look at the help option of docker build:

$ docker build --help

The documentation on the Docker website: https://docs.docker.com/reference/builder/

You just enjoyed an excerpt from the book DevOps: Puppet, Docker, and Kubernetes by Thomas Uphill, John Arundel, Neependra Khare, Hideto Saito, Hui-Chuan Chloe Lee, and Ke-Jou Carol Hsu. To master working with Docker containers, images and much more, check out this book today!

Read other posts:
How to publish Docker and integrate with Maven
Building Scalable Microservices
How to deploy RethinkDB using Docker
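As promised above, here is a short, hypothetical Dockerfile that ties several of the instructions together. The package, file paths, and image tag are illustrative assumptions rather than part of the original recipe:

$ cat Dockerfile
# Base image and metadata
FROM fedora
MAINTAINER Neependra Khare
LABEL distro=fedora

# Install a web server; this creates and commits a new layer
RUN dnf install -y httpd && dnf clean all

# Environment, working directory, and content
ENV DOC_ROOT /var/www/html
WORKDIR /var/www/html
COPY index.html .

# Expose the listening port and keep logs on an external volume
EXPOSE 80
VOLUME ["/var/log/httpd"]

# Default executable and its default arguments
ENTRYPOINT ["/usr/sbin/httpd"]
CMD ["-D", "FOREGROUND"]

$ docker build -t nkhare/fedora:httpd .

With ENTRYPOINT fixed to the httpd binary, CMD only supplies default arguments, so running the container with different arguments (for example, -X for debug mode) overrides CMD without changing the entrypoint.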


It's Black Friday: But what's the business (and developer) cost of downtime?

Richard Gall
23 Nov 2018
4 min read
Black Friday is back and, as you've probably already noticed, with a considerable vengeance. According to Adobe Analytics data, online spending is predicted to hit $3.7 billion over this holiday season in the U.S., up from $2.9 billion in 2017. But while consumers clamour for deals and businesses reap the rewards, it's important to remember there's a largely hidden plane of software engineering labour. Without this army of developers, consumers would most likely be hitting their devices in frustration, while business leaders would be missing tough revenue targets - so, as we enter Black Friday, let's pour one out for all those engineers on call and trying their best to keep eCommerce sites on their feet.

Here's to the software engineers keeping things running on Black Friday

Of course, the pain that hits on days like Black Friday and Cyber Monday can be minimised with smart planning and effective decision making long before those sales begin. However, for engineering teams under-resourced and lacking the right tools, that is simply impossible. This means that software engineers are left in a position where they're treading water, knowing that they're going to be sinking once those big days come around. It doesn't have to be like this. With smarter leadership and, indeed, more respect for the intensive work engineers put in to make websites and apps actually work, revenue-driving platforms can become more secure, resilient and stable.

Chaos engineering platform Gremlin publishes the 'true cost of downtime'

This is the central argument of chaos engineering platform Gremlin, who we've covered a number of times this year. To coincide with Black Friday, the team has put together what they believe is the 'true cost of downtime'. On the one hand this is a good marketing hook for their chaos engineering platform, but, cynicism aside, it's also a good explanation of why the principles of chaos engineering can be so valuable from both a business and developer perspective. Estimating the annual revenue of some of the biggest companies in the world, Gremlin has then created an interactive table to demonstrate what the cost of downtime for each of those businesses would be, for the length of time you are on the page. For 20 minutes of downtime, Amazon.com would have lost a staggering $4.4 million. For Walgreens it's more than $80,000.

Gremlin provide some context to all this, saying: "Enterprise commerce businesses typically rely on a complex microservices architecture, from fulfillment, to website security, ability to scale with holiday traffic, and payment processing - there is a lot that can go wrong and impact revenue, damage customer trust, and consume engineering time. If an ecommerce site isn’t 100% online and performant, it’s losing revenue."

"The holiday season is especially demanding for SREs working in ecommerce. Even the most skilled engineering teams can struggle to keep up with the demands of peak holiday traffic (i.e. Black Friday and Cyber Monday). Just going down for a few seconds can mean thousands in lost revenue, but for some sites, downtime can be exponentially more expensive."

For Gremlin, chaos engineering is clearly the answer to many of the problems days like Black Friday pose. While it might not work for every single organization, it's nevertheless true that failing to pay attention to the value of your applications and websites at an hour-by-hour level could be incredibly damaging. With outages on Facebook, WhatsApp, and Instagram happening earlier this week, these problems aren't hidden away - they're in full view of the public. What does remain hidden, however, is the work and stress that goes into tackling these issues and ensuring things are working as they should be. Perhaps it's time to start learning the lessons of Black Friday - business revenues will be that little bit healthier, but engineers will also be that little bit happier.
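The arithmetic behind figures like these is simple enough to sketch: take an estimated annual revenue, divide it by the number of minutes in a year, and multiply by the length of the outage. The revenue figure below is an assumption chosen purely for illustration, not Gremlin's input data:

# Hypothetical example: $100 billion annual revenue, 20-minute outage
$ awk 'BEGIN { printf "%.0f\n", 100e9 / (365*24*60) * 20 }'
3805175

In other words, a business of that (assumed) size would lose roughly $3.8 million for 20 minutes offline, which is in the same ballpark as the figures Gremlin quotes for the largest retailers.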


New for 2020 in operations and infrastructure engineering

Richard Gall
19 Dec 2019
5 min read
It's an exciting time if you work in operations and software infrastructure. Indeed, you could even say that as the pace of change and innovation increases, your role only becomes more important. Operations and systems engineers, solution architects, everyone - your jobs are all about bringing stability, order and control into what can sometimes feel like chaos. As anyone that's been working in the industry knows, managing change, from a personal perspective, requires a lot of effort. To keep on top of what's happening in the industry - what tools are being released and updated, what approaches are gaining traction - you need to have one eye on the future and the wider industry. To help you with that challenge and get you ready for 2020, we've put together a list of what's new for 2020 - and what you should start learning.

Learn how to make Kubernetes work for you

It goes without saying that Kubernetes was huge in 2019. But there are plenty of murmurs and grumblings that it's too complicated and adds an additional burden for engineering and operations teams. To a certain extent there's some truth in this - and arguably now would be a good time to accept that just because it seems like everyone is using Kubernetes, it doesn't mean it's the right solution for you. However, having said that, 2020 will be all about understanding how to make Kubernetes relevant to you. This doesn't mean you should just drop the way you work and start using Kubernetes, but it does mean that spending some time with the platform and getting a better sense of how it could be used in the future is a useful way to spend your learning time in 2020 (a minimal manifest sketch appears at the end of this piece). Explore Packt's extensive range of Kubernetes eBooks and videos on the Packt store.

Learn how to architect

If software has eaten the world, then by the same token perhaps complexity has well and truly eaten software as we know it. Indeed, Kubernetes is arguably just one of the symptoms and causes of this complexity. Another is the growing demand for architects in engineering and IT teams. There are a number of different 'architecture' job roles circulating across the industry, from solutions architect to application architect. While they each have their own subtle differences, and will even vary from company to company, they're all roles that are about organizing and managing different pieces into something that is both stable and value-driving. Cloud has been particularly instrumental in making architect roles more prominent in the industry. As organizations look to resist the pitfalls of lock-in and better manage resources (financial and otherwise), it will be down to architects to balance business and technology concerns carefully. Learn how to architect cloud native applications. Read Architecting Cloud Computing Solutions. Get to grips with everything you need to know to be a software architect. Pick up Software Architect's Handbook.

Artificial intelligence

It's strange that the hype around AI doesn't seem to have reached the world of ops. Perhaps this is because the area is more resistant to the spin that comes with AI, preferring instead to focus more on the technical capabilities of tools and platforms. Whatever the case, it's nevertheless true that AI will play an important part in how we manage and secure infrastructure. From monitoring system health, to automating infrastructure deployments and configuration, and even identifying security threats, artificial intelligence is already an important component for operations engineers and others. Indeed, artificial intelligence is being embedded inside products and platforms that ops teams are using - this means the need to 'learn' artificial intelligence is somewhat reduced. But it would be wrong to think it's something that can just be managed from a dashboard. In 2020 it will be essential to better understand where and how artificial intelligence can fit into your operations and architectural toolchain. Find artificial intelligence eBooks and videos in Packt's collection of curated data science bundles.

Observability, monitoring, tracing, and logging

One of the challenges of software complexity is understanding exactly what's going on under the hood. Yes, the network might be unreliable, as the saying goes, but what makes things even worse is that we're not even sure why. This is where observability and the next generation of monitoring, logging and tracing all come into play. Having detailed insights into how applications and infrastructures are performing, how resources are being managed, and what things are actually causing problems is vitally important from a team perspective. Without the ability to understand these things, it can put pressure on teams as knowledge becomes siloed inside the brains of specific engineers. It makes you vulnerable to failure as you start to have points of failure at a personnel level. There are, of course, a wide range of tools and products available that can make monitoring and tracing easy (or easier, at least). But understanding which ones are right for your needs still requires some time learning and exploring the options out there. Make sure you do exactly that in 2020. Learn how to monitor distributed systems with Learn Centralized Logging and Monitoring with Kubernetes.

Making serverless a reality

We've talked about serverless a lot this year. But as a concept there's still considerable confusion about what role it should play in modern DevOps processes. Indeed, even the nomenclature is a little confusing. Platforms using their own terminology, such as 'lambdas' and 'functions', only add to the sense that serverless is something amorphous and hard to pin down. So, in 2020, we need to work out how to make serverless work for us. Just as we need to consider how Kubernetes might be relevant to our needs, we need to consider in what ways serverless represents both a technical and business opportunity. Search Packt's library for the latest serverless eBooks and videos.

Explore more technology eBooks and videos on the Packt store.
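To make the Kubernetes suggestion above a little more tangible, here is a minimal Deployment manifest of the kind you might experiment with; the name, labels, and container image are illustrative assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2                    # run two identical pods
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: nginx:1.17        # any stateless container image would do
        ports:
        - containerPort: 80

Saved as deployment.yaml, it can be applied with kubectl apply -f deployment.yaml and removed with kubectl delete -f deployment.yaml - a low-stakes way of getting a feel for how the platform schedules and replaces containers.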


Introduction to DevOps

Packt
12 Jan 2016
7 min read
In this article by Joakim Verona, the author of the book Practical DevOps, we will be introduced to DevOps. An important part of DevOps is being able to explain to coworkers in your organization what DevOps is, and what it isn't. The faster you can get everyone aboard the DevOps train, the faster you can get to the part where you do the actual technical implementation! (For more resources related to this topic, see here.)

Introducing DevOps

DevOps is, by definition, a field that spans several disciplines. It is a field that is very practical and hands-on, but at the same time, you must understand both the technical background and the non-technical cultural aspects. The word DevOps is a combination of the words development and operations. This wordplay already gives us a hint of the basic idea behind DevOps: it is a practice where collaboration between different disciplines of software development is encouraged.

The DevOps movement has its roots in Agile software development principles. The Agile Manifesto was written in 2001 by a number of individuals wanting to improve the then-current status quo of system development and find new ways of working in the software development industry. The following is an excerpt from the Agile Manifesto, the now-classic text, which is available on the web at http://agilemanifesto.org/:

"Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more."

In light of this, DevOps can be said to relate to the first principle, "Individuals and interactions over processes and tools". This might be seen as a fairly obviously beneficial way to work - why do we even have to state this obvious fact? Well, if you have ever worked in any large organization, you will know that the opposite principle seems to be in operation instead. Walls between different parts of an organization tend to form easily, even in smaller organizations where at first it would appear to be impossible for such walls to form. DevOps, then, tends to emphasize that interactions between individuals are very important, and that technology might possibly assist in making these interactions happen and tear down the walls inside organizations. This might seem counterintuitive given that the first principle favors interaction between people over tools, but the author's opinion is that any tool can have several effects when used. If we use the tools properly, they can facilitate all of the desired properties of an Agile workplace.

A very simple example might be the choice of systems used to report bugs. Quite often, development teams and quality assurance teams use different systems to handle tasks and bugs. This creates unnecessary friction between the teams and further separates them when they should really focus on working together instead. The operations team might in turn use a third system to handle requests for deployment to the organization's servers. An engineer with a DevOps mindset, on the other hand, will immediately recognize all three systems as being workflow systems with similar properties. It should be possible for everyone in the three different teams to use the same system, perhaps tweaked to generate different views for the different roles. A further benefit would be smaller maintenance costs, since three systems are replaced by one.

Another core goal of DevOps is automation and continuous delivery. Simply put, automating repetitive and tedious tasks leaves more time for human interaction, where true value can be created.

How fast is fast?

The turnaround for DevOps processes must be fast. We need to consider time to market in the larger perspective, and simply stay focused on our tasks in the smaller perspective. This line of thought is also held by the Continuous Delivery movement. As with many things Agile, many of the ideas in DevOps and Continuous Delivery are in fact different names for the same basic concepts. There really isn't any contention between the two; DevOps and Continuous Delivery are two sides of the same coin. DevOps engineers work on making enterprise processes faster, more efficient, and more reliable. Repetitive manual labor, which is error prone, is removed whenever possible.

It's easy, however, to lose track of the goal when working with DevOps implementations. Doing nothing faster is of no use to anyone. Instead, we must keep track of delivering increased business value. For instance, increased communication between roles in the organization has clear value. Your product owners might be wondering how the development process is going and are eager to have a look. In this situation, it is useful to be able to deliver incremental improvements of code to the test environments quickly and efficiently. In the test environment, involved stakeholders, such as product owners and, of course, the quality assurance teams, can follow the progress of the development process.

Another way to look at it is this: if you ever feel yourself losing focus because of needless waiting, something is wrong with your processes or your tooling. If you find yourself watching videos of robots shooting balloons during compile time, your compile times are too long! The same is true for teams idling while waiting for deploys and so on. This idling is, of course, even more expensive than that of a single individual. While robot shooting-practice videos are fun, software development is inspiring too! We should help focus creative potential by eliminating unnecessary overhead.

The Agile wheel of wheels

There are several different cycles in Agile development, from the portfolio level through to the Scrum and Kanban cycles and down to the continuous integration cycle. The emphasis on which cadence work happens in is a bit different depending on which Agile framework you are working with. Kanban emphasizes the 24-hour cycle and is popular in operations teams. Scrum cycles can be between two to four weeks and are often used by development teams using the Scrum Agile process. Longer cycles are also common and are called program increments, which span several Scrum Sprint cycles, in the Scaled Agile Framework.

The Agile wheel of wheels

DevOps must be able to support all these cycles. This is quite natural given the central theme of DevOps: cooperation between disciplines in an Agile organization. The most obvious and measurably concrete benefits of DevOps occur in the shorter cycles, which in turn make the longer cycles more efficient. Take care of the pennies, and the pounds will take care of themselves, as the old adage goes. Here are some examples of when DevOps can benefit Agile cycles:

Deployment systems, maintained by DevOps engineers, make the deliveries at the end of Scrum cycles faster and more efficient. These can happen with a periodicity of two to four weeks. In organizations where deployments are done mostly by hand, the time to deploy can be several days. Organizations that have these inefficient deployment processes will benefit greatly from a DevOps mindset.

The Kanban cycle is twenty-four hours, and it's therefore obvious that the deployment cycle needs to be much faster than that if we are to succeed with Kanban. A well-designed DevOps Continuous Delivery pipeline can deploy code from being committed to the code repository to production in the order of minutes, depending on the size of the change.

Summary

This article presented a brief overview of the background of the DevOps movement. We discussed the history of DevOps and its roots in development and operations as well as in the Agile movement.

Resources for Article:
Further resources on this subject:
Command Line Tools for DevOps [article]
What is continuous delivery and DevOps? [article]
Continuous Delivery and DevOps FAQs [article]

DevOps mistakes which developers should avoid!

Guest Contributor
16 Dec 2019
9 min read
DevOps is becoming recognized as a vital pillar of digital transformation. Because of this, CIOs are becoming enthusiastic about how DevOps and open source can completely transform the enterprise culture. All organizations want to succeed and reach their development goals across all projects. However, in reality, the journey is not as easy as it seems, and it often requires collective effort and time. Along the way, there are some common failures which teams are likely to come across. In this post, we'll discuss DevOps mistakes which everyone should know and must avoid. Before that, it is necessary to understand the importance of DevOps in today's world.

Importance of DevOps in today's world

DevOps describes a culture and a set of processes which bring development and operations together to complete software development. It enables organizations not just to create but also to improve products at a faster pace than they can with some other approaches to software development. DevOps adoption is increasing with each passing day. According to Statista, many business organizations are shifting towards the DevOps culture, with adoption up 17% in 2018 from the previous year. DevOps culture is instrumental in today's world. The following points briefly highlight the need for DevOps in this era:

It reduces costs and other IT overheads.
It results in greater competencies.
It provides better communication and cooperation opportunities.
The development cycle is fast and innovative.
Deployment failures are reduced to a great extent.

Eight DevOps mistakes

Many people still don't fully understand what DevOps means. Without prior knowledge and understanding, many DevOps initiatives fail to get off the ground successfully. Following is a brief description of DevOps mistakes, and how they can be avoided to start a successful DevOps journey.

1. Rigid DevOps process

Compliance with core DevOps tenets is vital for DevOps success, but organizations have to make adjustments in active response to meet organizational demands. Enterprises have to make sure that, while the main DevOps pillars remain stable during implementation, they make the internal adjustments needed in internal benchmarking of the expected consequences. Instrumenting codebases in a granular manner and making them more and more partitioned results in more flexibility and gives the DevOps team the power to backtrack and recognize the root cause of deviation in the event of failed outcomes. But all adjustments have to be made while staying within the boundaries defined by DevOps.

2. Oversimplification of the process

Indeed, DevOps is a complex process. To implement DevOps, enterprises often go on a DevOps engineer hiring spree or, at times, create a new and isolated DevOps department. That department is then responsible for managing the DevOps framework and strategy, and it needlessly adds new processes that are often lengthy and complicated. Instead of creating an isolated DevOps department, organizations should focus on optimizing their processes to make operational products that leverage the right set of resources. For successful implementation of DevOps, organizations must be capable of managing the DevOps framework, leveraging functional experts, and using other resources that can manage DevOps-related tasks like budgeting goals, resource management, and process tracking. DevOps requires a cultural overhaul. Organizations must consider a phased and measured transition to DevOps implementation by educating and training employees on these new processes. They should also have the right frameworks to enable careful collaboration.

3. Not preparing for a cultural change

When you have the right tools for DevOps practices, you will likely come across a new challenge: trying to get your teams to use the tools for fast development, continuous delivery, automated testing, and monitoring. Is your DevOps culture ready for all of this? For example, agile methodologies usually mandate that you ship new code once a week, or once a day; teams that aren't ready for that cadence see their agile methods fail, and you might face the same conceptual issues with DevOps. It can be like trying to drive down a smooth road in a car with no fuel in it. To prevent this situation, plan for a transition period. Leave enough time for the development and operations teams to get used to new practices, and make sure that they have a chance to gain experience with the new processes and tools. Ensure that before adopting DevOps, you have a mature Dev and Ops culture.

4. Creating a single DevOps team

The most common mistake which organizations and enterprises make is to create a brand-new team and task them with addressing all the burdens of a DevOps initiative. It is challenging and complicated for both development and operations to deal with a new group that coordinates with everyone. DevOps started with the idea of enhancing collaboration between teams involved in the development of software, including security, DBMS, and QA. However, it is not only about development and operations. If you create a new silo to address DevOps, you're making things more complicated. The secret ingredient here is simplicity. Focus on culture by encouraging a mindset of automation, quality, and stability. For instance, you might involve everyone in a conversation regarding your architecture, or about some problems found in production environments, in which all the relevant players need to be well aware of how their work influences others. "DevOps is not about a single dedicated team but about organizations that progress together as a DevOps team."

5. Not including the security team

DevOps is about more than merely putting the development and operations teams together. It is a continuous process of automation and software development that includes audit, compliance, and security. Many organizations make the mistake of not bringing in their security practices early. According to a CA Technologies survey, security concerns were the number-one obstacle to DevOps, as cited by 38% of the respondents. Similarly, the Puppet survey found that high-performing DevOps teams spend 50% less time remediating security issues than low performers. These high-performing teams found different ways to communicate their security objectives and to establish security in the early phases of their development process. All DevOps practitioners should evaluate the controls, recognize the risks, and understand the processes. In the end, security is always an integral part of DevOps practices, such as DevSecOps (a practice in which development and operations are integrated with security). For example, if you have some security issues in production, you can address them within your DevOps pipeline through the tools which the security team already uses. DevOps and security practices should be followed strictly, and there should be no compromises. Moreover, other measures should be adopted to keep cyber-criminals out of the DevOps culture. Investing in cybersecurity has become a necessity to avoid situations where attackers can carry out attacks such as spear phishing and phishing. It has been found that 95% of all attacks on organizations were the result of spear phishing.

6. Incorrect use of incident management

DevOps teams must have a robust incident management process in place. Incident management needs to be proactive and ongoing. This means that having a documented incident management process is imperative to define the incident responses. For example, a total downtime event will have a different response workflow in comparison to a minor latency problem. Failure to do so can often lead to missed timelines and preventable project delays.

7. Not utilizing purposeful automation

DevOps needs organizations to adopt and implement purposeful automation. For DevOps, it is essential to take automation across the complete development lifecycle, including continuous integration, continuous delivery, and deployment for velocity and quality outcomes. Purposeful end-to-end automation is crucial to a successful DevOps implementation. Therefore, organizations should look at complete automation of the CI and CD pipeline. At the same time, organizations need to identify various opportunities for automation across functions and processes. This helps to reduce the need for manual handoffs for complicated integrations that need new management in multiple format deployments.

Editor's Note: Do you use or plan to use Azure for DevOps? If you want to know all about Azure DevOps services, we recommend our latest cookbook, Azure DevOps Server 2019 Cookbook - Second Edition, written by Tarun Arora and Utkarsh Shigihalli. The recipes in this book will help you achieve the skills you need to break down the invisible silos between your software development teams and transform them into a modern cross-functional software development team.

8. Wrong way to measure project success

DevOps promises faster delivery. But if that acceleration comes at the cost of quality, then the DevOps program is a failure. Enterprises looking at deploying DevOps should use the right metrics to understand project growth and success. For this reason, it is imperative to consider metrics that align velocity with success. Focus on the right parameters, as this is essential to drive intelligent automation decisions.

Conclusion

Organizations are now rapidly moving towards DevOps to stay competitive and become successful, but they often make big mistakes along the way. All of these mistakes are avoidable, and hopefully the points mentioned above have cleared your vision to a great extent. After you overcome these mistakes and adopt DevOps practices, your organization will enjoy improved client satisfaction and employee morale, increased productivity, and agility - all of which help in growing your business. If you plan to accelerate deployment of high-quality software by automating build and releases using CI/CD pipelines in Azure, we suggest you check out Azure DevOps Server 2019 Cookbook - Second Edition, which will help you create and release extensions to the Azure DevOps marketplace and reach the million-strong developer ecosystem for feedback.

Author Bio

Rebecca James is an enthusiastic cybersecurity journalist. A creative team leader, editor of PrivacyCrypts.

Abel Wang explains the relationship between DevOps and Cloud-Native
Can DevOps promote empathy in software engineering?
7 crucial DevOps metrics that you need to track


Does it make sense to talk about DevOps engineers or DevOps tools?

Richard Gall
10 May 2019
6 min read
DevOps engineers are in high demand - the job represents an engineering unicorn, someone that understands both development and operations and can help to foster a culture where the relationship between the two is almost frictionless. But there's some debate as to whether it makes sense to talk about a DevOps engineer at all. If DevOps is a culture of a set of practices that improves agility and empowers engineers to take more ownership over their work, should we really be thinking about DevOps as a single job that someone can simply train for? The quotes in this piece are taken from DevOps Paradox by Viktor Farcic, which will be published in June 2019. The book features interviews with a diverse range of figures drawn from across the DevOps world. Is DevOps engineer a 'real' job or just recruitment spin? Nirmal Mehta (@normalfaults), Technology Consultant at Booz Allen Hamilton, says "There's no such thing as a DevOps engineer. There shouldn't even be a DevOps team, because to me DevOps is more of a cultural and philosophical methodology, a process, and a way of thinking about things and communicating within an IT organization..." Mehta is cyncical about organizations that put out job descriptions asking for DevOps engineers. It is, he argues, a way of cutting costs - a way of simply doing more with less. "A DevOps engineer is just a job posting that signals an organization wants to hire one less person to do twice as much work rather than hire both a developer and an operator." This view is echoed by other figures associated with the DevOps world. Mike Kail (@mdkail), CTO at Everest, says "I certainly don't view DevOps as a tool or a job title. In my view, at the core, it's a cultural approach to leveraging automation and orchestration to streamline both code development, infrastructure, application deployments and subsequently, the managing of those resources." Similarly, Damian Duportal (@DamienDuportal), Træfik's Developer Advocate, says "there is no such thing as a DevOps engineer or even a DevOps team.  The main purpose of DevOps is to focus on value, finding the optimal for the organization, and the value it will bring." For both Duportal and Kail, then, DevOps is primarily a cultural thing, something which needs to be embedded inside the practices of an organization. Is it useful to talk about a DevOps team? There are big question marks over the concept of a DevOps engineer. But what about a specific team? It's all well and good talking about organizational philosophy, but how do you actually affect change in a practical manner? Julian Simpson (@builddoctor), Neo4J's Global IT Manager is sceptical about the concept of a DevOps team: “Can we have something called a DevOps team? I don't believe so. You might spin up a team to solve a DevOps problem, but then I wouldn't even say we specifically have a DevOps problem. I'd say you just have a problem." DevOps consultant Chris Riley (@HoardingInfo) has a similar take, saying: “DevOps Engineer as a title makes sense to me, but I don't think you necessarily have DevOps departments, nor do you seek that out. Instead, I think DevOps is a principle that you spread throughout your entire development organization. Rather, you look to reform your organization in a way that supports those initiatives versus just saying that we need to build this DevOps unit, and there we go, we're done, we're DevOps. Because by doing that you really have to empower that unit and most organizations aren't willing to do that." 
However, Red Hat Solutions Architect Wian Vos (@wianvos) has a different take. For Vos, the idea of a DevOps team is actually crucial if you are to cultivate a DevOps mindset inside your organization: "Imagine... you and I were going to start a company. We're going to need a DevOps team because we have a burning desire to put out this awesome application. The questions we have when we're putting together a DevOps team is both 'Who are we hiring?' and 'What are we hiring for?' Are we going to hire DevOps engineers? No. In that team, we want the best application developers, the best tester, and maybe we want a great infrastructure guy and a frontend/backend developer. I want people with specific roles who fit together as a team to be that DevOps team."

For Vos, it's not so much about finding and hiring DevOps engineers - people with a specific set of skills and experience - but rather building a team that's constructed in such a way that it can put DevOps principles into practice.

Is there such a thing as a DevOps tool?

One of the interesting things about DevOps is that the debate seems to lead you into a bit of a bind. It's almost as if the more concrete we try to make it - turning it into a job, or a team - the less useful it becomes. This is particularly true when we consider tooling. Surely thinking about DevOps technologically, rather than speculatively, makes it more real?

In general, there appears to be a consensus against the idea of DevOps tools. On this point, Julian Simpson said "my original thinking about the movement from 2009 onwards, when the name was coined, was that it would be about collaboration and perhaps the tools would sort of come out of that collaboration."

James Turnbull (@kartar), CEO of Rethink Robotics, is critical of the notion of DevOps tools. He says "I don't think there are such things as DevOps tools. I believe there are tools that make the process of being a cross-functional team better... Any tool that facilitates building that cross-functionality is probably a DevOps tool to the point where the term is likely meaningless."

When it comes to DevOps, everyone's still learning

With even industry figures disagreeing on what terms mean, or which ones are relevant, it's pretty clear that DevOps will remain a field that's contested and debated. But perhaps this is important - if we expect it simply to be a solution to the engineering challenges we face, it has already failed as a concept. However, if we understand it as a framework or mindset for solving problems, then that is when it acquires greater potency.

Viktor Farcic is a Developer Advocate at CloudBees, a member of the Google Developer Experts and Docker Captains groups, and a published author. His big passions are DevOps, microservices, continuous integration, delivery and deployment (CI/CD), and test-driven development (TDD).

The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance &amp; productivity models

Vincy Davis
28 Aug 2019
8 min read
The State of DevOps report is a widely referenced body of DevOps research which helps organizations achieve high organizational performance and productivity. The DORA (DevOps Research and Assessment) team has published the 2019 Accelerate State of DevOps Report, an independent review of the practices and capabilities of DevOps. Teams across the globe can take advantage of the survey findings and identify specific capabilities to improve their software delivery performance.

Key findings in the DevOps 2019 report

Increase in elite performers

The DevOps 2019 report uses a cluster analysis method to identify the software delivery performance of the participants involved in the survey. All the respondents are categorized into four distinct groups called Elite, High, Medium and Low performers. The report states, "In this approach, those in one group are statistically similar to each other and dissimilar from those in other groups, based on our performance behaviors of throughput and stability: deployment frequency, lead time, time to restore service, and change fail rate."

According to the survey, the proportion of elite performers has jumped to 20% from 7% last year. The percentage of medium performers has also increased, leading to a drop in the percentage of low performers. The cluster analysis therefore depicts a continued shift in the industry, as organizations continue to transform their technology.

Image source: 2019 Accelerate State of DevOps Report

Read also: Does it make sense to talk about DevOps engineers or DevOps tools?

Strategies for scaling DevOps in organizations

The Accelerate State of DevOps 2019 report has framed several strategies for scaling DevOps in an organization. The strategies are based on commonly used approaches observed by the DORA team across the industry.

Training Center (Dojo): Employees are taken out of their usual work routines to learn new tools or technologies. They are then required to apply the newly learned methods in their work and to inspire others to do the same.
Center of Excellence: All the available expertise is concentrated in a central group that the rest of the organization consults.
Proof of Concept but Stall: A central team is given the freedom to build in any possible way (often by breaking organizational norms), but the effort stalls after the PoC.
Proof of Concept as a Template: This begins with a PoC but Stall project, which is then replicated in other groups using the same pattern.
Proof of Concept as a Seed: This approach ensures that the PoC members stay active in the new groups, either indefinitely or just long enough to ensure that the new practices are sustainable.
Communities of Practice: Groups sharing common interests in tooling, languages, or methodologies are encouraged to share knowledge and expertise with each other and across teams.
Big Bang: The entire organization is transformed to DevOps methodologies at once.
Bottom-up or Grassroots: Small teams are put together to transform resources and share their success throughout the organization in an informal way.
Mashup: The organization implements several of the above approaches with partial execution or insufficient resources.

The distribution of DevOps transformation strategies across performance profiles is shown below.
Image source: 2019 Accelerate State of DevOps Report

Cloud computing usage is the driving force behind elite performers

The DORA team uses the five essential characteristics of cloud computing, as defined by the National Institute of Standards and Technology (NIST), to understand cloud usage patterns among the categorized performers. The essential characteristics of cloud computing are given below.

On-demand self-service: Users can provision computing resources whenever needed, without any human intervention. This is the least used characteristic, with only 57% of the survey respondents agreeing that they use it.
Broad network access: Capabilities can be accessed through heterogeneous platforms such as mobile phones, tablets, laptops, and workstations. 60% of the survey respondents agreed that they use broad network access.
Resource pooling: Provider resources are pooled in a multi-tenant model, with physical and virtual resources dynamically assigned on demand. The customer can specify a location at a higher level of abstraction, such as country, state, or datacenter. 58% of the survey respondents agreed that they use resource pooling.
Rapid elasticity: Cloud capabilities are elastically provisioned and can be released rapidly to scale outward or inward on demand, so capacity appears unlimited and can be appropriated at any time in any quantity. Compared to 2018, this characteristic saw a growth of 15% this year, with 58% of the survey respondents utilizing it.
Measured service: This is the most used characteristic of cloud computing, with 62% of respondents agreeing that they use it. Cloud systems automatically control, optimize, and report resource usage based on the type of service, such as storage, processing, bandwidth, and active user accounts.

The DevOps 2019 survey found that only 29% of all the respondents agreed that they utilize all the essential cloud computing characteristics. The survey also revealed that the elite performers were 24 times more likely than the low performers to have met all of the essential cloud characteristics.

Importance of psychological safety to SDO performance

The DevOps survey found that a culture of psychological safety, in which team members feel safe to take risks and be vulnerable in front of each other, enables teams to better deliver software delivery performance, organizational performance, and high productivity. It allows an organization to come up with new products and features without impacting existing users, leading to desirable levels of software delivery and operational performance (SDO performance). The report concludes that, thanks to psychological safety, elite performers are twice as likely to achieve or exceed their organizational performance goals.

Size of an industry does not correspond to its success

This year, the retail industry saw significantly better SDO performance, in terms of speed and stability of software delivery.
However, the DevOps report states, "We found no evidence that industry has an impact with the exception of retail, suggesting that organizations of all types and sizes, including highly regulated industries such as financial services and government, can achieve high levels of performance." The report suggests that size is not a factor in the poorer performance of other industries, and that high levels of performance can be achieved by adopting DevOps practices.

Read also: Listen: Puppet's VP of Ecosystem Engineering Nigel Kersten talks about key DevOps challenges [Podcast]

Proposed steps to improve performance and productivity in DevOps

The Accelerate State of DevOps Report 2019 proposes two research models to improve DevOps performance and productivity.

Performance model

The DevOps report states, "A key goal in digital transformation is optimizing software delivery performance." To improve software delivery and operational (SDO) performance and organizational performance, teams can start with basic automation such as version control and automated testing, monitoring, clear change approval processes, and a healthy culture. The DevOps 2019 survey finds that low performers use more proprietary software than high and elite performers.

Image source: 2019 Accelerate State of DevOps Report

Productivity model

The report defines productivity as "the ability to get complex, time-consuming tasks completed with minimal distractions and interruptions." Teams can use useful, easy-to-use tools along with internal and external search to accelerate productivity. The DevOps survey reveals that the highest performing engineers are 1.5 times more likely to use easy-to-use tools. Improved productivity also helps employees achieve a better work/life balance and suffer less burnout.

Image source: 2019 Accelerate State of DevOps Report

DevOps teams can use the above two models to locate their goal, identify the dependent factors, and increase their overall performance and productivity.

Demographic makeup of the DevOps 2019 survey

This year, the DORA survey had almost 1,000 participants. Among all the responses, 26% came from employees working in very large companies (10,000+ employees). Compared to last year, there has been a drop in responses from employees working in companies with 500-1,999 employees, and an increase from people working in companies of 100-499 employees. The majority of participants worked in a Development or Engineering department, followed by DevOps or Site Reliability Engineering (SRE), then Managers, IT Operations or Infrastructure, and others. Half of the participants in the 2019 research are from North America, followed by the EU/UK at 29%. This year also saw a fall in responses from Asia, down to 9% from 18% last year. The percentage of women on teams has also fallen to 16% (median) from the 25% reported last year.

Image source: 2019 Accelerate State of DevOps Report

Developers across the world love the Accelerate State of DevOps 2019 report and are thanking the DORA team for major takeaways such as cloud being a key differentiator, how to be an elite performer in software delivery performance, and more.
https://twitter.com/kniklas/status/1165681512538398720
https://twitter.com/nickj69/status/1164707063907225600
https://twitter.com/DawieO/status/1164703456675819522
https://twitter.com/kylekyle/status/1164692941559877632
https://twitter.com/tottiLFC/status/1164800298885402624
https://twitter.com/mirko_novakovic/status/1164606178615341060

Here's a two minute summary video of the Accelerate State of DevOps 2019 report, published by Google Cloud.
https://www.youtube.com/watch?v=8M3WibXvC84

Interested readers can check out the report to see a detailed comparison of tool usage, according to low, medium, high, and elite profiles. Also, you can read the full 2019 Accelerate State of DevOps Report for more information.

5 reasons poor communication can sink DevSecOps
7 crucial DevOps metrics that you need to track
Introducing kdevops, a modern DevOps framework for Linux kernel development

Start Treating your Infrastructure as Code

Packt
26 Dec 2016
18 min read
In this article by Veselin Kantsev, the author of the book Implementing DevOps on AWS, we start treating infrastructure as code. Ladies and gentlemen, put your hands in the atmosphere, for Programmable Infrastructure is here!

Perhaps Infrastructure as Code (IaC) is not an entirely new concept, considering how long Configuration Management has been around. Codifying server, storage and networking infrastructure and their relationships, however, is a relatively recent tendency brought about by the rise of cloud computing. But let us leave Config Management for later and focus our attention on that second aspect of IaC.

You would recall from the previous chapter some of the benefits of storing all-the-things as code:

Code can be kept under version control
Code can be shared/collaborated on easily
Code doubles as documentation
Code is reproducible

(For more resources related to this topic, see here.)

That last point was a big win for me personally. Automated provisioning helped reduce the time it took to deploy a full-featured cloud environment from four hours down to one, and the occurrences of human error to almost zero (one shall not be trusted with an input field). Being able to rapidly provision resources becomes a significant advantage when a team starts using multiple environments in parallel and needs those brought up or down on demand.

In this article we examine in detail how to describe (in code) and deploy one such environment on AWS with minimal manual interaction. For implementing IaC in the cloud, we will look at two tools or services: Terraform and CloudFormation. We will go through examples on how to:

Configure the tool
Write an IaC template
Deploy a template
Deploy subsequent changes to the template
Delete a template and remove the provisioned infrastructure

For the purpose of these examples, let us assume our application requires a Virtual Private Cloud (VPC) which hosts a Relational Database Service (RDS) back end and a couple of Elastic Compute Cloud (EC2) instances behind an Elastic Load Balancing (ELB) load balancer. We will keep most components behind Network Address Translation (NAT), allowing only the load balancer to be accessed externally.

IaC using Terraform

One of the tools that can help deploy infrastructure on AWS is HashiCorp's Terraform (https://www.terraform.io). HashiCorp is that genius bunch which gave us Vagrant, Packer and Consul. I would recommend you look up their website if you have not already. Using Terraform (TF), we will be able to write a template describing an environment, perform a dry run to see what is about to happen and whether it is expected, deploy the template and make any late adjustments where necessary - all of this without leaving the shell prompt.

Configuration

Firstly, you will need to have a copy of TF (https://www.terraform.io/downloads.html) on your machine and available on the CLI. You should be able to query the currently installed version, which in my case is 0.6.15:

$ terraform --version
Terraform v0.6.15

Since TF makes use of the AWS APIs, it requires a set of authentication keys and some level of access to your AWS account.
In order to deploy the examples in this article you could create a new Identity and Access Management (IAM) user with the following permissions: "autoscaling:CreateAutoScalingGroup", "autoscaling:CreateLaunchConfiguration", "autoscaling:DeleteLaunchConfiguration", "autoscaling:Describe*", "autoscaling:UpdateAutoScalingGroup", "ec2:AllocateAddress", "ec2:AssociateAddress", "ec2:AssociateRouteTable", "ec2:AttachInternetGateway", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", "ec2:CreateInternetGateway", "ec2:CreateNatGateway", "ec2:CreateRoute", "ec2:CreateRouteTable", "ec2:CreateSecurityGroup", "ec2:CreateSubnet", "ec2:CreateTags", "ec2:CreateVpc", "ec2:Describe*", "ec2:ModifySubnetAttribute", "ec2:RevokeSecurityGroupEgress", "elasticloadbalancing:AddTags", "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", "elasticloadbalancing:AttachLoadBalancerToSubnets", "elasticloadbalancing:CreateLoadBalancer", "elasticloadbalancing:CreateLoadBalancerListeners", "elasticloadbalancing:Describe*", "elasticloadbalancing:ModifyLoadBalancerAttributes", "rds:CreateDBInstance", "rds:CreateDBSubnetGroup", "rds:Describe*"

Please refer to: ${GIT_URL}/Examples/Chapter-2/Terraform/iam_user_policy.json

One way to make the credentials of the IAM user available to TF is by exporting the following environment variables:

$ export AWS_ACCESS_KEY_ID='user_access_key'
$ export AWS_SECRET_ACCESS_KEY='user_secret_access_key'

This should be sufficient to get us started.

Template design

Before we get to coding, here are some of the rules:

You could choose to write a TF template as a single large file or a combination of smaller ones.
Templates can be written in pure JSON or TF's own format.
Terraform will look for files with extensions .tf or .tf.json in a given folder and load these in alphabetical order.
TF templates are declarative, hence the order in which resources appear in them does not affect the flow of execution.

A Terraform template generally consists of three sections: resources, variables and outputs. As mentioned in the preceding section, it is a matter of personal preference how you arrange these; however, for better readability I suggest we make use of the TF format and write each section to a separate file. Also, while the file extensions are of importance, the file names are up to you.

Resources

In a way, this file holds the main part of a template, as the resources represent the actual components that end up being provisioned. For example, we will be using a VPC resource, an RDS one, an ELB one and a few others. Since template elements can be written in any order, Terraform determines the flow of execution by examining any references that it finds (for example, a VPC should exist before an ELB which is said to belong to it is created). Alternatively, explicit flow control attributes such as depends_on are used, as we will observe shortly.

To find out more, let us go through the contents of the resources.tf file. Please refer to: ${GIT_URL}/Examples/Chapter-2/Terraform/resources.tf

First we tell Terraform what provider to use for our infrastructure:

# Set a Provider
provider "aws" {
  region = "${var.aws-region}"
}

You will notice that no credentials are specified, since we set those as environment variables earlier.
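If environment variables are inconvenient, recent releases of the AWS provider also accept a named profile from your local AWS credentials file. The snippet below is only a sketch; the profile name is hypothetical and, depending on your Terraform version, the profile argument may not be available, so check the provider documentation before relying on it:

# Alternative (sketch): point the provider at a named profile in ~/.aws/credentials
provider "aws" {
  region  = "${var.aws-region}"
  profile = "terraform-demo"   # hypothetical profile name
}

Avoid hard-coding access keys directly in a template that lives under version control; a profile or environment variables keep secrets out of the repository.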
Now we can add the VPC and its networking components:

# Create a VPC
resource "aws_vpc" "terraform-vpc" {
  cidr_block = "${var.vpc-cidr}"

  tags {
    Name = "${var.vpc-name}"
  }
}

# Create an Internet Gateway
resource "aws_internet_gateway" "terraform-igw" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"
}

# Create NAT
resource "aws_eip" "nat-eip" {
  vpc = true
}

So far we have declared the VPC, its Internet and NAT gateways, plus a set of public and private subnets with matching routing tables. It will help clarify the syntax if we examine some of those resource blocks, line by line:

resource "aws_subnet" "public-1" {

The first argument is the type of the resource, followed by an arbitrary name.

vpc_id = "${aws_vpc.terraform-vpc.id}"

The aws_subnet resource named public-1 has a property vpc_id which refers to the id attribute of a different resource of type aws_vpc named terraform-vpc. Such references to other resources implicitly define the execution flow, that is to say the VPC needs to exist before the subnet can be created.

cidr_block = "${cidrsubnet(var.vpc-cidr, 8, 1)}"

We will talk more about variables in a moment, but the format is var.var_name. Here we use the cidrsubnet function with the vpc-cidr variable, which returns a cidr_block to be assigned to the public-1 subnet. Please refer to the Terraform documentation for this and other useful functions.

Next we add an RDS instance to the VPC:

resource "aws_db_instance" "terraform" {
  identifier             = "${var.rds-identifier}"
  allocated_storage      = "${var.rds-storage-size}"
  storage_type           = "${var.rds-storage-type}"
  engine                 = "${var.rds-engine}"
  engine_version         = "${var.rds-engine-version}"
  instance_class         = "${var.rds-instance-class}"
  username               = "${var.rds-username}"
  password               = "${var.rds-password}"
  port                   = "${var.rds-port}"
  vpc_security_group_ids = ["${aws_security_group.terraform-rds.id}"]
  db_subnet_group_name   = "${aws_db_subnet_group.rds.id}"
}

Here we see mostly references to variables, with a few calls to other resources. Following the RDS is an ELB:

resource "aws_elb" "terraform-elb" {
  name            = "terraform-elb"
  security_groups = ["${aws_security_group.terraform-elb.id}"]
  subnets         = ["${aws_subnet.public-1.id}", "${aws_subnet.public-2.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  tags {
    Name = "terraform-elb"
  }
}

Lastly we define the EC2 Auto Scaling group and related resources:

resource "aws_launch_configuration" "terraform-lcfg" {
  image_id        = "${var.autoscaling-group-image-id}"
  instance_type   = "${var.autoscaling-group-instance-type}"
  key_name        = "${var.autoscaling-group-key-name}"
  security_groups = ["${aws_security_group.terraform-ec2.id}"]
  user_data       = "#!/bin/bash \n set -euf -o pipefail \n exec 1> >(logger -s -t $(basename $0)) 2>&1 \n yum -y install nginx; chkconfig nginx on; service nginx start"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "terraform-asg" {
  name                 = "terraform"
  launch_configuration = "${aws_launch_configuration.terraform-lcfg.id}"
  vpc_zone_identifier  = ["${aws_subnet.private-1.id}", "${aws_subnet.private-2.id}"]
  min_size             = "${var.autoscaling-group-minsize}"
  max_size             = "${var.autoscaling-group-maxsize}"
  load_balancers       = ["${aws_elb.terraform-elb.name}"]
  depends_on           = ["aws_db_instance.terraform"]

  tag {
    key                 = "Name"
    value               = "terraform"
    propagate_at_launch = true
  }
}

The user_data shell script above will install and start NGINX on the EC2 node(s).
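The inline user_data string works, but escaped newlines quickly become hard to read. As a hedged alternative (assuming you keep a userdata.sh file next to the template, a file name chosen purely for illustration), the script can be loaded with Terraform's file() interpolation function:

# Sketch: keep the bootstrap script in its own file for readability
user_data = "${file("userdata.sh")}"

# userdata.sh (same commands as the inline version above)
#!/bin/bash
set -euf -o pipefail
exec 1> >(logger -s -t $(basename $0)) 2>&1
yum -y install nginx
chkconfig nginx on
service nginx start

The rendered result is equivalent to the inline string; the gain is purely in maintainability.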
Variables

We have made great use of variables to define our resources, making the template as re-usable as possible. Let us now look inside variables.tf to study these further. Similarly to the resources list, we start with the VPC. Please refer to: ${GIT_URL}/Examples/Chapter-2/Terraform/variables.tf

variable "aws-region" { type = "string" description = "AWS region" } variable "aws-availability-zones" { type = "string" description = "AWS zones" } variable "vpc-cidr" { type = "string" description = "VPC CIDR" } variable "vpc-name" { type = "string" description = "VPC name" }

The syntax is:

variable "variable_name" { variable properties }

Where variable_name is arbitrary, but needs to match the relevant var.var_name references made in other parts of the template. For example, the variable aws-region will satisfy the ${var.aws-region} reference we made earlier when describing the region of the provider aws resource. We will mostly use string variables; however, there is another useful type called map which can hold lookup tables. Maps are queried in a similar way to looking up values in a hash/dict (please see: https://www.terraform.io/docs/configuration/variables.html). A short sketch of a map variable follows at the end of this section.

Next comes RDS:

variable "rds-identifier" { type = "string" description = "RDS instance identifier" } variable "rds-storage-size" { type = "string" description = "Storage size in GB" } variable "rds-storage-type" { type = "string" description = "Storage type" } variable "rds-engine" { type = "string" description = "RDS type" } variable "rds-engine-version" { type = "string" description = "RDS version" } variable "rds-instance-class" { type = "string" description = "RDS instance class" } variable "rds-username" { type = "string" description = "RDS username" } variable "rds-password" { type = "string" description = "RDS password" } variable "rds-port" { type = "string" description = "RDS port number" }

Finally, EC2:

variable "autoscaling-group-minsize" { type = "string" description = "Min size of the ASG" } variable "autoscaling-group-maxsize" { type = "string" description = "Max size of the ASG" } variable "autoscaling-group-image-id" { type = "string" description = "EC2 AMI identifier" } variable "autoscaling-group-instance-type" { type = "string" description = "EC2 instance type" } variable "autoscaling-group-key-name" { type = "string" description = "EC2 ssh key name" }

We now have the type and description of all our variables defined in variables.tf; however, no values have been assigned to them yet. Terraform is quite flexible with how this can be done. We could:

Assign default values directly in variables.tf, for example: variable "aws-region" { type = "string" description = "AWS region" default = "us-east-1" }
Not assign a value to a variable, in which case Terraform will prompt for it at run time
Pass a -var 'key=value' argument(s) directly to the Terraform command, like so: -var 'aws-region=us-east-1'
Store key=value pairs in a file
Use environment variables prefixed with TF_VAR, as in TF_VAR_aws-region

Using a key=value pairs file proves to be quite convenient within teams, as each engineer can have a private copy (excluded from revision control). If the file is named terraform.tfvars it will be read automatically by Terraform; alternatively, -var-file can be used on the command line to specify a different source.
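As promised, here is a minimal sketch of a map variable. The variable name, the region keys and the second AMI ID are hypothetical, added purely to show how a lookup table works in this 0.6-era syntax:

variable "ami-per-region" {
  type        = "map"
  description = "AMI ID to use in each region (illustrative values)"
  default = {
    "us-east-1" = "ami-08111162"
    "us-west-2" = "ami-00000000"   # placeholder, not a real image
  }
}

# Elsewhere in the template, a value is read with the lookup() function:
# image_id = "${lookup(var.ami-per-region, var.aws-region)}"

This keeps region-specific values in one place instead of multiplying string variables.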
Below is the content of our sample terraform.tfvars file. Please refer to: ${GIT_URL}/Examples/Chapter-2/Terraform/terraform.tfvars

autoscaling-group-image-id = "ami-08111162"
autoscaling-group-instance-type = "t2.nano"
autoscaling-group-key-name = "terraform"
autoscaling-group-maxsize = "1"
autoscaling-group-minsize = "1"
aws-availability-zones = "us-east-1b,us-east-1c"
aws-region = "us-east-1"
rds-engine = "postgres"
rds-engine-version = "9.5.2"
rds-identifier = "terraform-rds"
rds-instance-class = "db.t2.micro"
rds-port = "5432"
rds-storage-size = "5"
rds-storage-type = "gp2"
rds-username = "dbroot"
rds-password = "donotusethispassword"
vpc-cidr = "10.0.0.0/16"
vpc-name = "Terraform"

A point of interest is aws-availability-zones: it holds multiple values, which we interact with using the element and split functions, as seen in resources.tf.

Outputs

The third, mostly informational part of our template contains the Terraform outputs. These allow selected values to be returned to the user when testing, deploying or after a template has been deployed. The concept is similar to how echo statements are commonly used in shell scripts to display useful information during execution. Let us add outputs to our template by creating an outputs.tf file. Please refer to: ${GIT_URL}/Examples/Chapter-2/Terraform/outputs.tf

output "VPC ID" {
  value = "${aws_vpc.terraform-vpc.id}"
}

output "NAT EIP" {
  value = "${aws_nat_gateway.terraform-nat.public_ip}"
}

output "ELB URI" {
  value = "${aws_elb.terraform-elb.dns_name}"
}

output "RDS Endpoint" {
  value = "${aws_db_instance.terraform.endpoint}"
}

To configure an output you simply reference a given resource and its attribute. As shown in the preceding code, we have chosen the ID of the VPC, the Elastic IP address of the NAT gateway, the DNS name of the ELB and the endpoint address of the RDS instance. The Outputs section completes the template in this example. You should now have four files in your template folder: resources.tf, variables.tf, terraform.tfvars and outputs.tf.

Operations

We shall examine five main Terraform operations:

Validating a template
Testing (dry-run)
Initial deployment
Updating a deployment
Removal of a deployment

In the following command line examples, terraform is run within the folder which contains the template files.

Validation

Before going any further, a basic syntax check should be done with the terraform validate command. After renaming one of the variables in resources.tf, validate returns an unknown variable error:

$ terraform validate
Error validating: 1 error(s) occurred:
* provider config 'aws': unknown variable referenced: 'aws-region-1'. define it with 'variable' blocks

Once the variable name has been corrected, re-running validate returns no output, meaning OK.

Dry-run

The next step is to perform a test/dry-run execution with terraform plan, which displays what would happen during an actual deployment. The command returns a colour-coded list of resources and their properties, or more precisely:

$ terraform plan

Resources are shown in alphabetical order for quick scanning. Green resources will be created (or destroyed and then created if an existing resource exists), yellow resources are being changed in-place, and red resources will be destroyed.
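A small habit worth adopting here, offered as an optional aside rather than part of the chapter's workflow, is saving the plan to a file so that the exact changes you reviewed are the ones that get applied (the file name is arbitrary):

# Write the execution plan to a file, review it, then apply that exact plan
$ terraform plan -out=environment.tfplan
$ terraform apply environment.tfplan

If the template changes between planning and applying, the saved plan will no longer match and Terraform will refuse to use it, which is exactly the safety net you want.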
To literally get the picture of what the to-be-deployed infrastructure looks like, you could use terraform graph:

$ terraform graph > my_graph.dot

DOT files can be manipulated with the Graphviz open source software (please see: http://www.graphviz.org) or many online readers/converters.

[Figure: a portion of the larger dependency graph representing the template we designed earlier]

Deployment

If you are happy with the plan and graph, the template can now be deployed using terraform apply:

$ terraform apply
aws_eip.nat-eip: Creating...
  allocation_id: "" => "<computed>"
  association_id: "" => "<computed>"
  domain: "" => "<computed>"
  instance: "" => "<computed>"
  network_interface: "" => "<computed>"
  private_ip: "" => "<computed>"
  public_ip: "" => "<computed>"
  vpc: "" => "1"
aws_vpc.terraform-vpc: Creating...
  cidr_block: "" => "10.0.0.0/16"
  default_network_acl_id: "" => "<computed>"
  default_security_group_id: "" => "<computed>"
  dhcp_options_id: "" => "<computed>"
  enable_classiclink: "" => "<computed>"
  enable_dns_hostnames: "" => "<computed>"

Apply complete! Resources: 22 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the following path. This state is required to modify and destroy your infrastructure, so keep it safe. To inspect the complete state use the terraform show command.

State path: terraform.tfstate

Outputs:

ELB URI = terraform-elb-xxxxxx.us-east-1.elb.amazonaws.com
NAT EIP = x.x.x.x
RDS Endpoint = terraform-rds.xxxxxx.us-east-1.rds.amazonaws.com:5432
VPC ID = vpc-xxxxxx

At the end of a successful deployment, you will notice the outputs we configured earlier and a message about another important part of Terraform - the state file (please refer to: https://www.terraform.io/docs/state/):

Terraform stores the state of your managed infrastructure from the last time Terraform was run. By default this state is stored in a local file named terraform.tfstate, but it can also be stored remotely, which works better in a team environment. Terraform uses this local state to create plans and make changes to your infrastructure. Prior to any operation, Terraform does a refresh to update the state with the real infrastructure.

In a sense, the state file contains a snapshot of your infrastructure and is used to calculate any changes when a template has been modified. Normally you would keep the terraform.tfstate file under version control alongside your templates. In a team environment, however, if you encounter too many merge conflicts you can switch to storing the state file(s) in an alternative location such as S3 (please see: https://www.terraform.io/docs/state/remote/index.html); a brief sketch of remote state configuration follows at the end of this section.

Allow a few minutes for the EC2 node to fully initialize, then try loading the ELB URI from the preceding outputs in your browser. You should be greeted by the default NGINX welcome page.
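As promised, here is a hedged sketch of pointing the state at S3. The bucket name and key are hypothetical, and the exact mechanism depends on your Terraform version: releases from the 0.9 line onwards use a backend block plus terraform init, while earlier releases (such as the 0.6.x version used here) configured remote state with the terraform remote config command instead.

# Backend block (Terraform 0.9+), placed in any .tf file and activated with `terraform init`
terraform {
  backend "s3" {
    bucket = "my-terraform-state"       # hypothetical bucket name
    key    = "demo/terraform.tfstate"
    region = "us-east-1"
  }
}

On the older releases the equivalent is roughly: terraform remote config -backend=s3 -backend-config="bucket=my-terraform-state" -backend-config="key=demo/terraform.tfstate" -backend-config="region=us-east-1". Either way, the bucket needs to exist and the IAM user needs permission to read and write the object.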
Update the resource "aws_security_group""terraform-elb" block in resources.tf: resource "aws_security_group""terraform-elb" { name = "terraform-elb" description = "ELB security group" vpc_id = "${aws_vpc.terraform-vpc.id}" ingress { from_port = "80" to_port = "80" protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { from_port = "443" to_port = "443" protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } } Verify what is about to change $ terraform plan ... ~ aws_security_group.terraform-elb ingress.#: "1" =>"2" ingress.2214680975.cidr_blocks.#: "1" =>"1" ingress.2214680975.cidr_blocks.0: "0.0.0.0/0" =>"0.0.0.0/0" ingress.2214680975.from_port: "80" =>"80" ingress.2214680975.protocol: "tcp" =>"tcp" ingress.2214680975.security_groups.#: "0" =>"0" ingress.2214680975.self: "0" =>"0" ingress.2214680975.to_port: "80" =>"80" ingress.2617001939.cidr_blocks.#: "0" =>"1" ingress.2617001939.cidr_blocks.0: "" =>"0.0.0.0/0" ingress.2617001939.from_port: "" =>"443" ingress.2617001939.protocol: "" =>"tcp" ingress.2617001939.security_groups.#: "0" =>"0" ingress.2617001939.self: "" =>"0" ingress.2617001939.to_port: "" =>"443" Plan: 0 to add, 1 to change, 0 to destroy. Deploy the change: $ terraform apply ... aws_security_group.terraform-elb: Modifying... ingress.#: "1" =>"2" ingress.2214680975.cidr_blocks.#: "1" =>"1" ingress.2214680975.cidr_blocks.0: "0.0.0.0/0" =>"0.0.0.0/0" ingress.2214680975.from_port: "80" =>"80" ingress.2214680975.protocol: "tcp" =>"tcp" ingress.2214680975.security_groups.#: "0" =>"0" ingress.2214680975.self: "0" =>"0" ingress.2214680975.to_port: "80" =>"80" ingress.2617001939.cidr_blocks.#: "0" =>"1" ingress.2617001939.cidr_blocks.0: "" =>"0.0.0.0/0" ingress.2617001939.from_port: "" =>"443" ingress.2617001939.protocol: "" =>"tcp" ingress.2617001939.security_groups.#: "0" =>"0" ingress.2617001939.self: "" =>"0" ingress.2617001939.to_port: "" =>"443" aws_security_group.terraform-elb: Modifications complete ... Apply complete! Resources: 0 added, 1 changed, 0 destroyed. Some update operations can be destructive (Please refer: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html.You should always check the CloudFormation documentation on the resource you are planning to modify to see whether a change is going to cause any interruption. Terraform provides some protection via the prevent_destroy lifecycle property (Please refer: https://www.terraform.io/docs/configuration/resources.html#prevent_destroy). Removal This is a friendly reminder to always remove AWS resources after you are done experimenting with them to avoid any unexpected charges. Before performing any delete operations, we will need to grant such privileges to the (terraform) IAM user we created in the beginning of this article. As a shortcut, you could temporarily attach the Aministrato rAccess managed policy to the user via the AWS Console as shown in the following figure: To remove the VPC and all associated resources that we created as part of this example, we will use terraform destroy: $ terraform destroy Do you really want to destroy? Terraform will delete all your managed infrastructure. There is no undo. Only 'yes' will be accepted to confirm. Enter a value: yes Terraform asks for a confirmation then proceeds to destroy resources, ending with: Apply complete! Resources: 0 added, 0 changed, 22 destroyed. 
Next, we remove the temporary admin access we granted to the IAM user by detaching the AdministratorAccess managed policy, then verify that the VPC is no longer visible in the AWS Console.

Summary

In this article we looked at the importance and usefulness of Infrastructure as Code and ways to implement it using Terraform or AWS CloudFormation. We examined the structure and individual components of both a Terraform and a CF template, then practiced deploying those onto AWS using the CLI. I trust that the examples we went through have demonstrated the benefits and immediate gains from the practice of deploying infrastructure as code. So far, however, we have only done half the job. With the provisioning stage completed, you would naturally want to start configuring your infrastructure.

Resources for Article:

Further resources on this subject:
Provision IaaS with Terraform [article]
Ansible – An Introduction [article]
Design with Spring AOP [article]

Can DevOps promote empathy in software engineering?

Richard Gall
14 Oct 2019
8 min read
If DevOps is, at a really basic level, about getting different teams to work together, then you could say that DevOps is a discipline that promotes empathy. It's an interesting idea, and one that's explored in Viktor Farcic's book The DevOps Paradox. What's particularly significant about the notion of empathy existing inside DevOps is that it could help us to focus on exactly what we're trying to achieve by employing it. In turn, this could help us develop or evolve the way we actually do DevOps - so, instead of worrying about a new toolchain, or new platform purchases, we could, perhaps, just explore ways of getting people to simply understand what their respective needs are.

However, in The DevOps Paradox there are a number of different insights on empathy in DevOps and what it means not just for the field itself, but also its place in a modern business. Let's take a look at some DevOps experts' thoughts on empathy and DevOps.

"Empathy helps developers put the user at the center of what they do"

Jeff Sussna (@jeffsussna) is an IT consultant and coach who helps organizations to design, build and deliver products quickly.

"There's a lot of confusion and anxiety about [empathy's] meaning, and a lot of people tend to misunderstand it. Sometimes people think empathy means wallowing in someone else's pain. In fact, there's actually a philosopher from Yale University who is now putting out the idea that empathy is actually bad, and that it's the cause of all of the world's problems and what we need instead is compassion.

"From my perspective, that represents a misunderstanding of both empathy and compassion, but my favorite is when people say things like, "Sociopaths are really good at empathizing". My answer to that is, if you have a sociopath in your organization, you have a much bigger problem, and DevOps isn't going to solve it. At that point, you have an HR problem. What you need to distinguish between is emotional empathy and cognitive empathy, and I use cognitive empathy in the context of DevOps in a very simple way, which is the ability to think about things as if from another's perspective.

"If you're a developer and you think, 'What is the experience of deploying and running my application going to be?' you're thinking about it from the perspective of the operations person.

"If you're an operations person and you're thinking in terms of, 'What is the experience going to be when you need to spin up a test server in a matter of hours in order to test a hotfix because all of your testing swim lanes are full of other things, and what does that mean for my process of provisioning servers?' then you're thinking about things from the tester's point of view.

"And so, to me, that's empathy, and that's empathizing, which is really at the heart of customer service. It's at the heart of design thinking, and it's at the heart of product development. What is it that our customers are trying to accomplish, what help do they need from us, and how can we help them?"

Listen: Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]

"As soon as you have empathy you can understand why you provide value"

Damien Duportal (@DamienDuportal) is a Developer Advocate at Traefik, Containous' cloud native edge router.

"If you have a tool that helps you to share empathy, then you have a great foundation for starting the conversation. Even if this seems boring to engineers, at least they'll start talking and listening to each other.
I mean, once they've stopped debating sterile tabs versus spaces or JavaScript versus Java—or whatever sterile debate it is—they'll have to focus on the value they're going to provide. So, this is really how I would sum up DevOps, which again is about how you bring empathy back and focus on the value creation and interaction side of IT.

"Empathy is one of the most advanced bricks you can have for building human interaction. If we are able to achieve so many different things—with different people, different opinions, and different cultures—it's because we, as humans, are capable of having high levels of empathy.

"As soon as you have empathy, you can understand why you provide value. If you don't, then what's the point of trying to create value? It will only be from your point of view, and there are over seven billion other people in the world. So, ultimately, we need empathy to understand what we are going to do with our tools."

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

"Let's not wait for culture to change: culture is in the rearview mirror"

As CSO of PraxisFlow, Kevin Behr spends his time working with clients who seek to develop their DevOps process. His 25 years of experience have been driven by a passion for engaging with the complex problems that large IT organizations face, and how we can use DevOps to solve them. You can follow Kevin on Twitter at @kevinbehr.

What do we mean when we talk about empathy in DevOps? We're saying that we understand what it feels like to do what you're doing and that I'll never do that to you again. So, let's build a system together that will allow us to never be there.

DevOps to me has evolved into a lot of tools because we're humans, and humans love tools of all kinds. As a species, we've defined ourselves by our tools and technologies. And, as a species, we also talk about culture a lot, but, to my mind, culture is a rearview mirror. Culture is just all the things that we've done: our organizational disposition. The way to change culture is to do things differently. Let's not wait for culture, because culture is in the rearview mirror: it's the past. If you're in a transition, then what are you transitioning toward and what does that mean about how you need to act?

The very interesting thing about DevOps is that while frequently, its mission is to create a change in the culture of an organization, this change requires far more than coordination: it also requires pure collaboration, and co-laboring. These can be particularly awkward to achieve given the likelihood that we haven't worked with the people in an organization before. And it can become intensely awkward, when those people may have already made villains out of each other because they couldn't get what they wanted. The goal of the DevOps process is to create a new culture, despite these challenges....

....When you manage to introduce empathy to a team, the development and the operations people seem finally to come together. You suddenly hear someone in operations say, 'Oh, can we do that differently? When you threw that thing at me last time, it gave me a black eye and I had to stay up for four days straight!' And the developer is like, 'It did? How did it do that? Next time, if something happens, please call me, I want to come help.' That empathy of figuring out what went wrong, and working together, is what builds trust."
“The CFO doesn’t give a shit about empathy”

Chris Riley (@HoardingInfo) is a self-proclaimed bad coder turned editor of Sweetcode.io at fixate.io, a content marketing firm for those who sell to technical audiences. Through this, he's involved with DevOps, SecOps, big data, machine learning, and blockchain. He's a member of the DevOps Institute Board of Regents, a position he's held for over four years.

“...The CFO doesn't give a shit about empathy, and the person with the money may not care about that at all. The HR department might, but that's the problem with selling anything. You have to speak their language, and the CFO is going to respond to money. Either you're saving us money, or you're making us more money, and I think DevOps is doing both, which is cool. I think what's nice about that explanation is the fact it doesn't seem insurmountable. It's kind of like how Pixar was structured.

“After Steve Jobs started at Pixar, he structured all of the work environments where the idea was to create chance encounters among the employees, so that the graphic designer of one movie would talk to the application developer of another, even when they don't even have any real reason to interact with each other. The way they did it at Pixar was that, as everybody has to go to the bathroom, they put the bathrooms in a large communal area where these people are going to run into each other—that's what created that empathy. They understand what each other's job is. They're excited about each other's movies. They're excited about what they're working on, and they're aware of that in everything they do. It's a really good explanation.”

What do you think? Can DevOps help promote empathy inside engineering teams and across wider businesses? Or is there anything else we should be doing?

Chaos engineering comes to Kubernetes thanks to Gremlin

Richard Gall
18 Nov 2019
2 min read
Kubernetes causes problems. Just last week Cindy Sridharan wrote on Twitter that while Docker "succeeded... because it was a great developer tool," Kubernetes "decided to be all things tech and not much by way of UX. It was and remains a hostile piece of software to learn, run, operate, maintain."

https://twitter.com/copyconstruct/status/1194701905248673792?s=20

That's just a drop in the ocean - you don't have to look hard to find more hot takes, jokes, and memes about how complicated working with Kubernetes can feel. Despite all this, it's certainly here to stay. That makes chaos engineering platform Gremlin's announcement that the platform will offer native support for Kubernetes particularly welcome.

Citing container orchestration research done by Datadog in the press release, which indicates the rapid rate of Kubernetes adoption, Gremlin is hoping that it can provide some additional support for users that might be concerned about the platform's complexity.

From last year: Gremlin makes chaos engineering with Docker easier with new container discovery feature

Gremlin CTO Matt Fornaciari said "our goal is to provide SRE and DevOps teams that are building and deploying modern applications with the tools and processes necessary to understand how their systems handle failure, before that failure has the chance to impact customers and business." The new feature is designed to help engineers do exactly that by allowing them "to automate the process of identifying Kubernetes primitives such as nodes and pods," and to select and attack traffic from different services within Kubernetes.

The other important element to all this is that Gremlin wants to make things as straightforward as possible for engineering teams. With a neat and easy-to-use UI, it would seem that, to return to Sridharan's words, the team is eager to make sure their product is "a great developer tool."

The tool has already been tried and tested in the wild. Simon Govier, Expedia's Director of Program Management, described how performing chaos experiments on Kubernetes with Gremlin "significantly reduces the amount of time it takes to do fault injection and increases our systems' resilience to failure."

Learn more on the Gremlin website.

Chaos engineering company Gremlin launches Scenarios, making it easier to tackle downtime issues

Richard Gall
26 Sep 2019
2 min read
At the second ChaosConf in San Francisco, Gremlin CEO Kolton Andrus revealed the company's latest step in its war against downtime: Scenarios. Scenarios makes it easy for engineering teams to simulate common issues that lead to downtime. It's a natural and necessary progression for Gremlin, which is seeing even the most forward-thinking teams struggle to figure out how to implement chaos engineering in a way that's meaningful to their specific use case.

"Since we released Gremlin Free back in February thousands of customers have signed up to get started with chaos engineering," said Andrus. "But many organisations are still struggling to decide which experiments to run in order to avoid downtime and outages."

Scenarios, then, is a useful way into chaos engineering for teams that are reticent about taking their first steps. As Andrus notes, it makes it possible to inject failure "with a couple of clicks."

What failure scenarios does Scenarios let engineering teams simulate?

Scenarios lets Gremlin users simulate common issues that can cause outages. These include:

Traffic spikes (think Black Friday site failures)
Network failures
Region evacuation

This provides a great starting point for anyone that wants to stress test their software. Indeed, it's inevitable that these issues will arise at some point, so taking advance steps to understand what the consequences could be will minimise their impact - and their likelihood.

Why chaos engineering?

Over the last couple of years plenty of people have been attempting to answer the question: why chaos engineering? But in truth the reasons are clear: software - indeed, the internet as we know it - is becoming increasingly complex, a mesh of interdependent services and platforms. At the same time, the software being developed today is more critical than ever. For eCommerce sites downtime means money, but for those in the IoT and embedded systems world (like self-driving cars, for example), it's sometimes a matter of life and death.

This makes Gremlin's Scenarios an incredibly exciting and important prospect - it should end the speculation and debate about whether we should be doing chaos engineering, and instead help the world to simply start doing it. At ChaosConf Andrus said that Gremlin's mission is to build a more reliable internet. We should all hope they can deliver.

Was 2019 the year the world caught the Kubernetes fever?

Guest Contributor
17 Dec 2019
8 min read
In the current IT landscape, phrases such as "containerized applications" and "container deployment" are thrown around so often that the meanings and connotations behind them get diluted, and ultimately forgotten. In the case of Kubernetes, however, the opposite seems to be coming true. Although it might seem hyperbolic to describe the modern approach to software management as the "Age of Kubernetes", the accelerating growth of Kubernetes as one of the most widely adopted open-source projects - with over 2,300 active contributors to Kubernetes's repository on GitHub - bears witness to the massive influence that the orchestration platform has had.

Originally developed by Google and launched in 2014, Kubernetes has come a really long way since its advent. Although there are other similar container orchestration platforms available on the market, the most notable ones being Docker Swarm and Apache Mesos, Kubernetes has established itself as the de facto orchestration platform in use today. Having said that, as a quick Google search might reveal - with a whopping 26,400,000 results - Kubernetes has risen to the top of the totem pole over the course of the year.

However, before we can get into rationalizing the reasons that drive the world's obsession with the container orchestration platform, we'd like to provide our readers with a quick snapshot of everything Kubernetes is, and everything that it is not.

Kubernetes: A Brief Overview

The transition from the traditional deployment era, where organizations relied on applications being run on physical servers, to the virtual deployment era, which introduced the highly popular concept of virtualization, and on to the container deployment era, which saw the employment of 'containers' that are significantly lighter in weight than virtual machines (VMs) - these changes ultimately led to the creation of a container orchestration market, which is a huge contributing factor to the growing popularity of Kubernetes and other similar platforms. Having said that, however, as we've already mentioned above, the features that Kubernetes offers give it a certain edge over its competition.

Originally developed by Google in 2014, having descended from an older container orchestration platform called 'Borg', Kubernetes is an open-source container orchestration platform that reduces the workload for both large and small companies by automating the deployment, scaling and management of containerized applications. Bearing witness to the effectiveness and reliability of the container orchestration platform is the fact that it is backed by gigantic digital entities such as Google, Microsoft, Cisco, Intel, and Red Hat. Furthermore, on its website, Kubernetes cites testimonials from colossal corporations such as Spotify, Nav, Capital One and Comcast, which further demonstrates the reliability of the benefits offered by the container orchestration platform.

What functions does Kubernetes perform?

Taking into consideration the fact that most organizations, regardless of how large or small they might be, are deploying hundreds and thousands of containerized instances daily, the complexity of the situation requires platforms such as Kubernetes to step in and help organizations manage and automate containerized processes while taking into account the context of the microservice architecture as well.
Kubernetes aids development teams by deploying applications and helping to manage containerized applications, performing the following functions:

Deployment: Perhaps the most significant function that Kubernetes performs is the deployment of a specified number of containers to a host, along with ensuring that the containers are functioning as they are supposed to, that is, without any malfunctions.
Rollouts: A rollout is a change to the original deployment of a container. Kubernetes allows development teams to take the management of their containerized tasks to the next level by automating the initiation of container deployments, along with offering them the option of pausing, resuming or rolling back any rollout.
Discovery of service: Kubernetes automates the exposure of a specified container to the internet, or to other containers, by allotting containers a DNS name or an IP address. Given the increasing threat of cyber-attacks, it has also become essential to protect your own IP address; a VPN not only hides the IP address but also provides protection against IP spoofing.
Managing storage: A monumental advantage that Kubernetes offers organizations is the liberty to allocate persistent local or cloud storage to specified containers as needed.
Load scaling and balancing: Kubernetes allows organizations to maintain stability across the network by automatically load balancing and scaling when traffic to a certain container increases.
Self-healing: A feature unique to Kubernetes, the widely popular container orchestration platform seeks to improve availability on the network by restarting or replacing a failed container. Moreover, Kubernetes can also automate the removal of containers that appear to be damaged, or that fail to meet health-check requirements.

Are there any limitations to Kubernetes's power?

Up till now, we've done nothing but present facts regarding Kubernetes. Often, however, organizations tend to overlook the limitations of an effective management tool. Despite the numerous advantages that organizations get to reap by integrating Kubernetes, it should always be kept in mind that Kubernetes is not traditional software: it functions at the container level rather than at the hardware level. In order to make the most effective use of the container orchestration platform, it is essential that companies take into account the limitations of Kubernetes, which include the following:

Kubernetes does not build applications, and neither does it deploy source code.
Kubernetes is not responsible for providing organizations with application-level services. Examples of these application-level services include middleware (message buses) and data-processing frameworks such as Spark, as well as caches, amongst many others.
Kubernetes does not offer logging, monitoring, and alerting solutions; instead it provides integrations and mechanisms which enable organizations to collect and export metrics.

In addition to these limitations, it should also be mentioned that despite Kubernetes constantly being referred to as an orchestration tool, it is not just that. Instead of simply orchestrating or managing containerized applications by propagating a defined workflow, Kubernetes eliminates the need for orchestration altogether and consists of components that constantly drive the current state of the cluster towards the desired state the organization has declared (the brief sketch below makes this concrete).
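As a minimal illustration of declaring and then changing that desired state from the command line, consider the following kubectl session. The deployment name, image tags and service type are hypothetical choices made for this sketch, not something prescribed by the article:

# Declare a desired state: a Deployment running three replicas of NGINX
kubectl create deployment web --image=nginx:1.17
kubectl scale deployment web --replicas=3

# Service discovery and load balancing: expose the Deployment behind a stable address
kubectl expose deployment web --port=80 --type=LoadBalancer

# Rollout: move the Deployment to a new image and watch Kubernetes converge on it
kubectl set image deployment/web nginx=nginx:1.18
kubectl rollout status deployment/web

# Roll back if the new revision misbehaves
kubectl rollout undo deployment/web

If any of the three replicas dies along the way, the self-healing loop described above replaces it without any further input.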
This approach also gives rise to a system without a single point of centralized control, which makes it much easier to use.

Explaining Kubernetes's popularity

Now that we have jogged readers' memories with a rundown of everything Kubernetes, let's get down to business. Given the ever-increasing growth of the container orchestration platform, and particularly its spike in 2019, readers might be left asking: why is Kubernetes so popular? The short answer is simple: it is highly effective. The longer answer can be broken down into the following main reasons:

1. Kubernetes saves time: In the digital age, time is more crucial than ever. As more and more organizations digitize, time plays a monumental role in routine operations, especially for development teams. Kubernetes's staggering popularity is rooted in how time-effective the platform is: it lets organizations handle every facet of container orchestration without filling out forms or sending emails to request new machines to run applications.

2. Kubernetes is highly cost-effective: For most enterprises, the driving force behind operations is knowing that the business goal is being met, and Kubernetes contributes to that by enabling better resource utilization. As mentioned above, Kubernetes improves on running full VMs because it focuses on containers, which are lightweight and therefore require less CPU and memory.

3. Kubernetes runs in the cloud as well as on-premise: A widely welcomed feature of Kubernetes is that it is cloud-agnostic, meaning it can run on cloud-based services as well as on-premise. This gives organizations the luxury of not having to redesign or alter their infrastructure or applications to accommodate it. In addition, a growing number of companies provide software that helps organizations run Kubernetes, whether on a cloud-based server or on-premise.

Final Words

We hope we have made clear what Kubernetes does and the reasons behind its rise in popularity. That said, it remains equally important that organizations take the limitations of the container orchestration system into consideration and integrate it thoughtfully, which is what ultimately lets them reap the benefits.

Author Bio

Rebecca James is an enthusiastic cybersecurity journalist, a creative team leader, and editor of PrivacyCrypts.

DevOps mistakes which developers should avoid!
Chaos engineering comes to Kubernetes thanks to Gremlin
Understanding the role AIOps plays in the present-day IT environment

Operations and infrastructure engineering in 2019: what really mattered

Richard Gall
18 Dec 2019
6 min read
Everything is unreliable, right? If we didn't realise it before, 2019 was the year when we fully had to accept the reality of the systems we're building and managing. That was scary, sure, but it was also liberating. We shouldn't get carried away, though: with highly distributed software systems now part and parcel of a range of different industries, reliability and resilience are not purely academic concerns; in many instances they are urgent and critical. That makes the work of managing and building software infrastructure an incredibly vital role. Back in 2015 I wrote that Docker had turned us all into SysAdmins, but on reflection it may be more accurate to say that we've now entered a world where cloud and the infrastructure-as-code revolution have turned everyone into a software developer.

Kubernetes is everywhere

Kubernetes is arguably the definitive technology of 2019. With the move to containers now fully mainstream, Kubernetes is integral to helping engineers deploy and manage containers at scale. The other important thing about Kubernetes is that it all but kills off dreaded infrastructure lock-in: it gives you the freedom to build across different environments and inside a more heterogeneous software infrastructure. From a tooling and skill set perspective that's a massive win. Conversations about flexibility and agility have been going on in the tech industry for years, but with Kubernetes we are finally getting to a place where they describe reality. This isn't to say it's all plain sailing: Kubernetes' complexity is a point of complaint for many, with plenty of people suggesting that, compared to, say, Docker, the developer experience leaves a lot to be desired. But insofar as DevOps and cloud-native have almost become the norm for many engineering teams, Kubernetes casts a huge shadow. Indeed, even if it's not the right option for you right now, it's hard to escape the fact that understanding it, and being open to using it in the future, is crucial. Find an extensive range of Kubernetes content in our new cloud bundles.

Serverless and NoOps

This year serverless has really come into its own. Although it was certainly gaining traction in 2018, the last 12 months have demonstrated its value as more and more teams have opted to forgo managing servers completely. There have been a few arguments about whether serverless is going to kill off containers. It's not hard to see where this comes from, but in reality there's no chance of that happening. The way to think of serverless is as an additional option to reach for when speed and agility are particularly important; for large-scale application development and deployment, containers running on 'traditional' cloud servers will remain the dominant architectural approach. The companion trend to serverless is NoOps. Given the level of automation and abstraction that serverless gives you, the need to configure environments to ensure code runs properly all but disappears: code runs through 'functions' that get fired when needed (a minimal handler sketch appears at the end of this section). So, the thinking goes, the need for operations becomes very small indeed. But before anyone starts worrying about their jobs, the death of operations is greatly exaggerated. As noted above, serverless is just one option; it isn't redefining the architectural landscape. It might mean that the way we understand 'ops' evolves, just as 'dev' has, but it certainly won't kill it off. Discover and search serverless eBooks and videos on the Packt store.
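To make the "functions that get fired when needed" idea concrete, here is a minimal sketch of a serverless handler, assuming an AWS Lambda-style Python runtime behind an HTTP trigger; the event fields follow the API Gateway proxy format, and the greeting logic is purely illustrative. The platform provisions, scales, and tears down the compute around this code; the team only writes and ships the function.

import json

def handler(event, context):
    # Entry point the platform invokes each time the function fires.
    # `event` carries the trigger payload (here, an HTTP request passed through
    # API Gateway); `context` carries runtime metadata. There is no server,
    # container, or environment for the team to configure or patch.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

if __name__ == "__main__":
    # Local smoke test with a fake event; in production the platform supplies it.
    print(handler({"queryStringParameters": {"name": "ops"}}, None))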
Chaos engineering

In the introduction I mentioned one of the strange quandaries of our contemporary distributed software world: we've essentially made things more unreliable at a time when software systems are being used in ever more critical applications. From healthcare to self-driving cars, we're entering a world where unreliability is both more common and potentially more damaging. This is where chaos engineering comes in. Although it first appeared on the ThoughtWorks Radar back in November 2017 and hasn't yet moved out of its 'Trial' quadrant, in reality chaos engineering has been manifesting itself in a whole host of ways in 2019. Indeed, it's possible that the term itself is misleading: while it suggests a wholesale methodology, in truth the core principles behind it, essentially stress-testing your software in order to manage unpredictability and improve resilience, are being applied in different ways for both testing and security purposes. Tools like Gremlin have done a lot to promote chaos engineering and make it more accessible to organizations that maybe wouldn't see themselves as having the resources to perform cutting-edge approaches. The groundwork has been done, which means it will be interesting to see how it evolves in 2020.

Observability: service meshes and tracing

One of the biggest challenges when dealing with complex software systems, and one of the reasons why they are necessarily unreliable, is that it can be difficult, sometimes impossible, to get an understanding of what's actually going on. This is why the debate around observability and monitoring has moved on. It's no longer enough to have a set of discrete logs and metrics: chances are they won't capture the subtleties of what's happening, or won't provide the context that helps you understand where errors are coming from. What's more, a lack of observability and the wrong monitoring setup can cause all sorts of issues inside a team. At a time when the role of the on-call developer has never been more discussed, or indeed more important, ensuring a level of transparency is the only way to guarantee that all developers are able to support each other and solve problems as they emerge. From this perspective, observability has a cultural impact as much as a technical one (a minimal tracing sketch appears at the end of this article). Learn distributed tracing with Yuri Shkuro from Uber's observability engineering team: find Mastering Distributed Tracing on the Packt store. Not sure what to learn for 2020? Start exploring thousands of tech eBooks and videos on the Packt store.
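As a companion to the observability discussion above, here is a minimal tracing sketch. It uses the OpenTelemetry Python SDK purely as an illustration (the article itself points to Jaeger and Mastering Distributed Tracing); the service name, span names, and attribute are made up, and the console exporter stands in for a real backend such as Jaeger.

# A minimal sketch, assuming a recent OpenTelemetry Python SDK
# (pip install opentelemetry-sdk). Spans print to the console; a real setup
# would export them to a tracing backend so a request can be followed across services.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer("checkout-service")          # hypothetical service name

def charge_card(order_id: str) -> None:
    # A child span: it carries the parent's context, which is what gives an
    # engineer the "where did this error come from?" picture plain logs lack.
    with tracer.start_as_current_span("charge-card") as span:
        span.set_attribute("order.id", order_id)

def checkout(order_id: str) -> None:
    with tracer.start_as_current_span("checkout"):
        charge_card(order_id)

if __name__ == "__main__":
    checkout("order-1234")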