To maintain a stable, protected, containerized environment, it is fundamental to ensure that Kubernetes workloads are both resilient and secure:

  1. Resiliency: the application can withstand failures and remain highly available. Opt for scalable workloads by distributing your application across multiple Pods rather than relying on a single Pod.
  2. Security: grant each Pod precisely the permissions it needs to function. Note, however, that by default Kubernetes imposes no restrictions on the root user within a Pod.


Choosing between stateful and stateless pods is critical to designing reliable and secure containerized applications:

Stateful pods: user input or data must be preserved and maintained over time. Suited to applications like databases. Require careful handling to ensure data persistence and consistency.
Stateless pods: information does not need to be preserved or persisted. Suited to applications that function independently and do not rely on retained data. Easily replaceable and scalable.

Understanding your data and application’s objective helps determine the most suitable deployment strategy. Kubernetes offers various options including standard pods, ReplicaSets, Deployments, StatefulSets, and DaemonSets – each tailored to specific use cases.

Deployment: ideal for stateless applications that need scalability and smooth image updates.
StatefulSet: designed for applications requiring data persistence and stable network identities.
DaemonSet: best suited for applications that must run on every node in the cluster, such as agents providing cluster-level functionality.
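As a sketch of the StatefulSet case, a minimal manifest might look like the following (all names, the image, and the storage size are illustrative placeholders):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless        # headless Service provides stable network identities
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16      # example database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:           # per-pod PersistentVolumeClaims for data persistence
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Each replica gets its own PersistentVolumeClaim and a stable name (db-0, db-1, db-2), which is exactly the property a database workload needs.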

Deployment Strategy

The Deployment resource is commonly used to create and manage workloads. Because it is built on top of another resource, the ReplicaSet, it makes applications easier to manage and update.

An update strategy is a defined approach or set of rules for how changes or updates to a workload are managed and applied. Update strategies include:

Rolling update: gradually updates the application while maintaining a balance between old and new replicas. Readiness probes ensure new pods are ready before the old ones are scaled down.
Recreate: replaces all old pods with new ones at once, potentially causing downtime during updates.
Custom: used for more advanced scenarios, such as writing a custom controller that implements its own update strategy.
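A minimal Deployment sketch showing the rolling update strategy together with a readiness probe (names, image, and thresholds are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate          # the other built-in type is Recreate
    rollingUpdate:
      maxSurge: 1                # at most one extra pod during the update
      maxUnavailable: 1          # at most one pod down at a time
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          readinessProbe:        # gate traffic until the new pod is ready
            httpGet:
              path: /
              port: 80
```

The maxSurge/maxUnavailable pair controls how aggressively old replicas are swapped for new ones during the rollout.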

When an update or deployment goes awry or causes issues, Kubernetes rollback strategies let you revert to a previous stable state, minimizing downtime and maintaining the reliability of your application.

When automatic scaling is required, Kubernetes offers horizontal and vertical pod scaling:

Horizontal pod autoscaling (HPA): scales the number of pods horizontally, based on metrics such as CPU utilization.
Vertical pod autoscaling (VPA): adjusts the resources allocated to each pod vertically, based on observed resource usage.
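For example, an HPA targeting a hypothetical Deployment named web might be defined as follows (the target utilization and replica bounds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:                  # the workload this autoscaler manages
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```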

Blue-green deployment is a software release management strategy that involves maintaining two separate, identical production environments:

  1. Blue – active production environment, serving live user traffic
  2. Green – inactive, serving as staging or testing environment for upcoming release

Once the green environment is verified and deemed stable, traffic is switched from blue to green. Now:

  1. Blue – kept as a safety net. If any problem appears in the green production environment, you can quickly switch back.
  2. Green – active production environment with the new version of the application.

In contrast, a canary deployment is a software release strategy that gradually rolls out a new version of an application to a subset of users before making it available to the entire user base. Canary deployments allow you to test new changes in a controlled, low-risk environment before exposing all users to potential issues. This strategy relies on data and metrics to make informed decisions about the stability and performance of the new version.
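One common way to implement a canary in Kubernetes is through ingress-controller support. Assuming the NGINX Ingress Controller, a weighted canary might be sketched like this (the hostname and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # send ~10% of traffic to the canary
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-canary    # Service fronting the new version's pods
                port:
                  number: 80
```

Raising the weight step by step, while watching metrics, completes the gradual rollout; setting it back to 0 is the quick rollback.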



Services and Ingress

Services and ingress resources are the two essential components for managing network access to applications within a Kubernetes cluster.

Service: allows internal network communication and load balancing within the cluster. Exposes pods as network services inside the cluster. Common types of services: ClusterIP, NodePort, LoadBalancer, ExternalName.
Ingress: manages external access to services within the cluster. Routes external HTTP/HTTPS traffic to services based on rules and hostnames. Ingress controllers such as Nginx or Traefik interpret ingress resources and act as the gateway for external access.
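A minimal ClusterIP Service sketch (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP              # internal-only; NodePort/LoadBalancer expose externally
  selector:
    app: web                   # routes to pods carrying this label
  ports:
    - port: 80                 # port the Service listens on inside the cluster
      targetPort: 8080         # container port on the backing pods
```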

Path-based routing allows you to route traffic to different services within your cluster based on the URL path. To implement it, you can use ingress annotations to define rules that map specific paths to corresponding services.

Host-based routing routes traffic based on the host names specified in the request. Use ingress rules to associate specific host names with corresponding services, enabling multiple applications to share the same IP address and port.
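Combining the two, a single ingress resource can express both host- and path-based routing (hostnames and service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: routing-demo
spec:
  rules:
    - host: shop.example.com          # host-based routing
      http:
        paths:
          - path: /api                # path-based routing within the host
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 80
          - path: /                   # everything else goes to the frontend
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
```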

Ingress features and customizations play crucial roles:

  1. Annotations enable customization of various aspects, including SSL termination, rewrite rules and header manipulation.
  2. Ingress controllers offer unique features and behaviors to ensure efficient traffic routing and load balancing.
  3. Load balancing distributes traffic among pods using different load balancing algorithms.
  4. TLS termination terminates SSL/TLS encryption at the ingress, securing communication with services while reducing the computational load on them.
  5. Session affinity (sticky sessions) ensures requests from the same client are routed to the same pod.
  6. Rewrite rules customize URL paths, which is useful when serving content from different locations.
  7. Header manipulation controls request and response behavior, enhancing security and ensuring appropriate interaction between clients and services.
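Assuming the NGINX Ingress Controller, a couple of these customizations (a rewrite rule and sticky sessions) can be expressed through annotations like this (names and paths are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: annotated
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1     # rewrite rule: serve /app/x as /x
    nginx.ingress.kubernetes.io/affinity: "cookie"      # sticky sessions via a cookie
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
    - http:
        paths:
          - path: /app/(.*)                             # capture group used by the rewrite
            pathType: ImplementationSpecific
            backend:
              service:
                name: web
                port:
                  number: 80
```

Other controllers (Traefik, HAProxy, etc.) expose equivalent features through their own annotations or custom resources.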

There are also several advanced ingress features and customizations you may consider:

  1. Use custom ingress controllers.
  2. Rate limiting controls the number of requests to your services and protects them from abuse.
  3. Web Application Firewall for security and protection against threats.
  4. Authentication and authorization secure access to your applications.

Secure Communication

TLS configuration and certificate management are crucial components for ensuring the confidentiality and integrity of data exchanged between services and applications.

TLS configuration: transport layer security (TLS) configuration involves setting up secure communication channels between applications by encrypting data in transit. It ensures that data exchanged between pods, services, or external clients is encrypted and secure. Configuration includes:
1. Defining TLS secrets
2. Configuring ingress resources with TLS certificates
3. Specifying secure communication within application pods
Certificate management: certificate management focuses on the lifecycle and administration of the TLS certificates used in Kubernetes. It encompasses tasks like certificate issuance, renewal, rotation, and revocation, and can automate the management of certificates to ensure validity.
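A sketch of the first two pieces together, a TLS secret referenced by an ingress resource (the hostname, resource names, and base64 payloads are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: web-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder
  tls.key: <base64-encoded private key>   # placeholder
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: web-tls      # ingress controller terminates TLS with this certificate
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

Tools such as cert-manager can create and renew secrets like web-tls automatically, covering the certificate-management side.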


Kubernetes network policies are resource objects used to define and control the network traffic allowed between pods within a cluster. They enable you to specify the communication rules and restrictions for pods based on labels, namespaces, and other attributes. Network policies offer granular control over inbound and outbound traffic, helping to segment and secure communication within the cluster. These best practices will help you create a robust and protected network environment:

  1. Default deny. Only allow necessary traffic to reduce attack surface.
  2. Label-based policies. Group and manage pods with similar security requirements.
  3. Namespace isolation. Isolate sensitive workloads into dedicated namespaces with stricter policies.
  4. Minimal privileges. Restrict the services pods can communicate with.
  5. Continuous monitoring. Regularly review and update network policies.

Network policies for hosts help control communication between pods and the host machine, enhancing security and resource management. Network policies for nodes extend this control to inter-node communication, allowing you to manage and secure node-to-node traffic. Appropriate policies for hosts and nodes enforce least privilege, protect critical resources, and promote segmentation.

Security add-ons and plugins like Calico, Cilium, and Weave are tools and extensions that enhance the security of pod communication. They provide features like network policy enforcement, encryption, and threat detection.

tcpdump is a packet analyzer that captures and analyzes network traffic, aiding in troubleshooting, monitoring, and diagnosing network issues within pods or services. Basic policies like “Deny All” and “Allow by namespace” provide initial control over communication. Advanced setups use NetworkPolicies, ingress/egress rules, and service mesh policies to enable fine-grained access control and segmentation.

Container Network Interface

In Kubernetes, each pod has its own network namespace which isolates its network stack. These network namespaces allow multiple pods to run on the same node without conflicts. If you need to connect pods:

  • When pods are on the same node, they communicate directly via their pod IP addresses over the node’s local network; containers within the same pod share a network namespace and can use localhost.
  • When pods are on different nodes, Kubernetes relies on networking solutions provided by CNI plugins such as Calico, Flannel, Weave, and Cilium.

Kubernetes uses CNI as the mechanism to connect pods to the network. CNI stands for Container Network Interface, which is a standard for configuring network interfaces for containers. CNI configurations are typically found in

/etc/cni/net.d/

on each node in the cluster. Each configuration contains key details such as the network name, type, and specific network settings like IP ranges, gateways, and routes.

Common Deployment & Network Issues

Some common issues include:

  1. Pod scheduling failures caused by resource limitations, node problems, or affinity/anti-affinity rules.
  2. Inter-pod communication issues caused by network policy misconfiguration, service misalignment, or DNS resolution problems.
  3. Service discovery problems caused by incorrect DNS configuration or improper service registration.
  4. Network latency and performance degradation, potentially caused by network congestion, improper routing, or inefficient pod-to-pod communication.
  5. Resource contention when resource-intensive pods vie for CPU, memory or network bandwidth.
  6. Security vulnerabilities.



For more on Kubernetes Deployment and Networking, please refer to the wonderful course here https://www.coursera.org/learn/advanced-kubernetes-first-course-1


I am Kesler Zhu, thank you for visiting my website. Check out more course reviews at https://KZHU.ai
