Learn Docker - Fundamentals of Docker 18.x
Everything you need to know about containerizing your applications and running them in production

Product type: Paperback
Published: Apr 2018
Publisher: Packt
ISBN-13: 9781788997027
Length: 398 pages
Edition: 1st Edition
Author: Dr. Gabriel N. Schenker
Table of Contents (21 Chapters)

Title Page
Packt Upsell
Contributors
Preface
1. What Are Containers and Why Should I Use Them?
2. Setting up a Working Environment
3. Working with Containers
4. Creating and Managing Container Images
5. Data Volumes and System Management
6. Distributed Application Architecture
7. Single-Host Networking
8. Docker Compose
9. Orchestrators
10. Introduction to Docker Swarm
11. Zero Downtime Deployments and Secrets
12. Introduction to Kubernetes
13. Deploying, Updating, and Securing an Application with Kubernetes
14. Running a Containerized App in the Cloud
Assessment
Other Books You May Enjoy
Index

Chapter 12


  1. The Kubernetes master is responsible for managing the cluster. All requests to create objects, the scheduling of pods, the managing of ReplicaSets, and more happen on the master. In a production or production-like cluster, the master does not run application workloads.
  2. On each worker node, we have the kubelet, the proxy, and a container runtime.
  3. Yes. You cannot run standalone containers on a Kubernetes cluster; pods are the atomic unit of deployment in such a cluster.
  4. All containers running inside a pod share the same Linux kernel network namespace. Thus, all processes running inside those containers can communicate with each other through localhost, much like processes or applications running directly on the host can communicate with each other through localhost.
  5. The pause container's sole role is to reserve the namespaces of the pod for the containers that run in the pod.
  6. This is a bad idea, since all containers of a pod are co-located, which means they run on the same cluster node. But the different components of the application (that is, web, inventory, and db) usually have very different requirements with regard to scalability and resource consumption. The web component might need to be scaled up and down depending on traffic, while the db component has special storage requirements that the others don't. If we run every component in its own pod, we are much more flexible in this regard.
  7. We need a mechanism to run multiple instances of a pod in a cluster and to make sure that the actual number of running pods always corresponds to the desired number, even when individual pods crash or disappear due to a network partition or cluster node failure. The ReplicaSet is the mechanism that provides this scalability and self-healing to any application service.
  8. We need deployment objects whenever we want to update an application service in a Kubernetes cluster without causing downtime to the service. Deployment objects add rolling update and rollback capabilities to ReplicaSets.
  9. Kubernetes service objects are used to make application services participate in service discovery. They provide a stable endpoint to a set of pods (normally governed by a ReplicaSet or a deployment). Kubernetes services are abstractions which define a logical set of pods and a policy on how to access them. There are four types of Kubernetes services:
    • ClusterIP: Exposes the service on an IP address only accessible from inside the cluster; this is a virtual IP (VIP)
    • NodePort: Publishes a port in the range 30000–32767 on every cluster node
    • LoadBalancer: This type exposes the application service externally using a cloud provider’s load balancer such as ELB on AWS
    • ExternalName: Used when you need to define a proxy for a cluster external service such as a database
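The shared pod network namespace described in answers 4 and 5 can be illustrated with a minimal pod manifest. This is a sketch, not an example from the book; the pod name, images, and command are illustrative assumptions. Because both containers live in the same pod, the second container reaches the first via localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar    # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:alpine
    ports:
    - containerPort: 80
  - name: probe
    image: curlimages/curl
    # Both containers share the pod's network namespace (held open by
    # the pause container), so nginx is reachable at localhost:80.
    command: ["sh", "-c", "while true; do curl -s http://localhost:80/ >/dev/null; sleep 10; done"]
```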
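Answers 7 through 9 typically come together in a manifest pair: a Deployment that manages a ReplicaSet of pods, and a Service of type ClusterIP that gives those pods a stable virtual IP. The names, labels, and image below are illustrative assumptions, not taken from the book:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3               # desired count; the ReplicaSet reconciles the actual count to 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP           # virtual IP, reachable only from inside the cluster
  selector:
    app: web                # the Service routes to every pod carrying this label
  ports:
  - port: 80
    targetPort: 80
```

Changing `type: ClusterIP` to `NodePort` or `LoadBalancer` exposes the same set of pods externally, as described in the bullet list above.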