📘 KUBERNETES: CONTAINER ORCHESTRATION SYSTEM
🎯 General Objectives
- Understand the operating principles and architecture of Kubernetes.
- Master Kubernetes installation and configuration.
- Know how to deploy and manage containerized applications on Kubernetes.
- Understand the basic components of Kubernetes.
- Deploy highly available and scalable applications.
🧑🏫 Lesson 1: Introduction to Kubernetes
What is Kubernetes?
- Kubernetes (K8s) is an open-source platform for automating the deployment, scaling, and management of containerized applications.
- Developed by Google, based on their experience with the Borg system.
- Currently maintained by the Cloud Native Computing Foundation (CNCF).
Development History
- 2014: Google announced Kubernetes as an open-source project.
- 2015: Kubernetes v1.0 was released, CNCF was founded.
- 2016-present: Kubernetes has become the de-facto standard for container orchestration.
Benefits of Kubernetes
- Automated Deployment: Deploy applications reliably and consistently.
- Self-healing: Automatically restarts containers that fail.
- Automatic Scaling: Automatically scales the number of containers up or down based on load.
- Load Balancing: Distributes network traffic to ensure stable deployments.
- Service Discovery: Containers can find each other via internal DNS.
Alternatives to Kubernetes
- Docker Swarm: Simpler, tightly integrated with Docker.
- Apache Mesos: Focuses on running diverse workloads (not just containers).
- Amazon ECS: AWS's container management service.
- Nomad: From HashiCorp, simpler and lighter.
Common Use Cases
- Microservices: Managing complex applications with many small components.
- CI/CD: Continuous deployment with zero-downtime.
- DevOps: Supporting automated DevOps workflows.
- Big Data: Processing large data with scalability.
- Hybrid Cloud: Running workloads across various cloud environments.
🧑🏫 Lesson 2: Kubernetes Architecture
Architecture Overview
text
+----------------------------------------------------------+
|                    Kubernetes Cluster                     |
|                                                            |
|  +----------------------+    +----------------------+     |
|  |    Control Plane     |    |     Worker Nodes     |     |
|  |                      |    |                      |     |
|  |  +----------------+  |    |  +----------------+  |     |
|  |  |   API Server   |  |    |  |    Kubelet     |  |     |
|  |  +----------------+  |    |  +----------------+  |     |
|  |                      |    |                      |     |
|  |  +----------------+  |    |  +----------------+  |     |
|  |  |   Scheduler    |  |    |  |   Kube-proxy   |  |     |
|  |  +----------------+  |    |  +----------------+  |     |
|  |                      |    |                      |     |
|  |  +----------------+  |    |  +----------------+  |     |
|  |  |   Controller   |  |    |  |   Container    |  |     |
|  |  |    Manager     |  |    |  |    Runtime     |  |     |
|  |  +----------------+  |    |  +----------------+  |     |
|  |                      |    |                      |     |
|  |  +----------------+  |    |                      |     |
|  |  |      etcd      |  |    |                      |     |
|  |  +----------------+  |    |                      |     |
|  +----------------------+    +----------------------+     |
+----------------------------------------------------------+
Control Plane Components
API Server (kube-apiserver):
- HTTP API endpoint for interacting with Kubernetes.
- The main gateway for controlling the cluster.
- Authenticates and authorizes all requests.
Scheduler (kube-scheduler):
- Watches for newly created pods with no assigned node.
- Selects a suitable node for them to run on.
- Considers resources, constraints, affinity, anti-affinity, etc.
Controller Manager (kube-controller-manager):
- Runs controller processes.
- Controls the state of the cluster.
- Contains various controllers: Node Controller, Replication Controller, Endpoint Controller, etc.
etcd:
- Distributed key-value store.
- Stores all cluster data.
- Ensures consistency and high availability.
Node Components
Kubelet:
- An agent that runs on each node.
- Ensures containers are running in a pod.
- Reports node status to the control plane.
Kube-proxy:
- Maintains network rules on nodes.
- Allows network communication to pods from inside or outside the cluster.
- Performs load balancing for services.
Container Runtime:
- Software responsible for running containers.
- Examples: Docker, containerd, CRI-O.
Important Add-ons
- CoreDNS: Provides DNS for the cluster.
- Dashboard: UI for Kubernetes management.
- Ingress Controller: Manages external traffic to services.
- CNI (Container Network Interface): Plugin for networking between pods.
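On a local development cluster, several of these add-ons can be switched on with one command. A minimal sketch, assuming a minikube cluster (installation is covered in Lesson 3):
bash
# List available add-ons
minikube addons list
# Enable the dashboard and an ingress controller
minikube addons enable dashboard
minikube addons enable ingress
# Open the dashboard in a browser
minikube dashboard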
Operation Model
- When a request is made (e.g., deploy an app), the client sends a request to the API Server.
- API Server authenticates and processes the request, saves the state to etcd.
- Controllers detect state changes and perform actions.
- Scheduler decides which node the pod will run on.
- Kubelet on the node receives the info and creates the pod.
- Kube-proxy configures networking for the pod.
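This flow can be observed from the client side with standard kubectl commands; the deployment name below is only an example:
bash
# 1. Send a request to the API Server (create a Deployment)
kubectl create deployment hello --image=nginx
# 2-4. Controllers and the Scheduler react; watch the resulting events
kubectl get events --sort-by=.metadata.creationTimestamp
# 5. The Kubelet creates the pod; check which node it was scheduled on
kubectl get pods -l app=hello -o wide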
🧑🏫 Lesson 3: Installing and Configuring Kubernetes
Installation Methods
- Minikube: For development environments, runs Kubernetes locally.
- kubeadm: Official tool for bootstrapping Kubernetes clusters.
- kind (Kubernetes IN Docker): Runs Kubernetes clusters using Docker containers as nodes.
- Managed Services: EKS (AWS), GKE (Google), AKS (Azure).
Installing Minikube for Development
bash
# Install Minikube on Linux
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/
# Start the cluster
minikube start
# Check status
minikube status
Installing kubectl - CLI tool for Kubernetes
bash
# Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
# Check version
kubectl version --client
Installing a cluster with kubeadm
bash
# 1. Install container runtime (e.g., Docker)
# 2. Install kubeadm, kubelet, and kubectl
apt-get update
apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
add-apt-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
# Note: this legacy Google-hosted apt repository has been deprecated;
# current releases are published at pkgs.k8s.io (see the official install docs)
apt-get update
apt-get install -y kubelet kubeadm kubectl
# 3. Initialize control plane
kubeadm init --pod-network-cidr=10.244.0.0/16
# 4. Configure kubectl for user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# 5. Install network plugin (e.g., Calico)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# 6. Join worker nodes
# Use the command output from kubeadm init
kubeadm join <control-plane-ip>:<port> --token <token> --discovery-token-ca-cert-hash <hash>
Verifying Installation
bash
# Check node status
kubectl get nodes
# Check pods in kube-system namespace
kubectl get pods -n kube-system
# Check server and client version
kubectl version
Configuring Kubernetes
Contexts and Clusters:
bash
# List contexts
kubectl config get-contexts
# Switch context
kubectl config use-context my-cluster
# View current config
kubectl config view
Important Config Files:
- /etc/kubernetes/: Contains cluster configuration.
- ~/.kube/config: kubectl configuration.
- /etc/systemd/system/kubelet.service.d/: Kubelet configuration.
Roles and RBAC (Role-Based Access Control):
bash
# Create Role
kubectl create role pod-reader --verb=get,list,watch --resource=pods
# Create RoleBinding
kubectl create rolebinding read-pods --role=pod-reader --user=jane
# Check permissions
kubectl auth can-i list pods --as jane
Namespaces:
bash
# Create namespace
kubectl create namespace my-namespace
# List namespaces
kubectl get namespaces
# Execute command in specific namespace
kubectl get pods -n my-namespace
🧑🏫 Lesson 4: Kubernetes Objects and Workloads
What are Kubernetes Objects?
- Persistent entities in the Kubernetes system.
- Represent the state of the cluster.
- Described using YAML or JSON files.
Common Objects
- Pods: The smallest deployable units in Kubernetes.
- ReplicaSets: Maintain a stable set of replica Pods.
- Deployments: Manage ReplicaSets, support updates and rollbacks.
- Services: Define how to access Pods.
- ConfigMaps and Secrets: Configuration and sensitive data.
- Volumes: Storage for Pods.
- Namespaces: Divide cluster resources between multiple users.
Pod
- A group of one or more containers sharing storage and network.
- Common "sidecar" pattern: main container + helper container.
yaml
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.19
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs-volume        # shared with the sidecar so it can read the logs
      mountPath: /var/log/nginx
  - name: log-sidecar
    image: busybox
    command: ["sh", "-c", "tail -f /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs-volume
      mountPath: /var/log/nginx
  volumes:
  - name: logs-volume
    emptyDir: {}
Deployment
- Higher level than Pod and ReplicaSet.
- Manages deployment and updates of applications.
- Supports rollouts and rollbacks.
yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
        ports:
        - containerPort: 80
Service
- Provides a stable endpoint to access Pods.
- Load balances between multiple Pods.
yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
DaemonSet
- Ensures that all (or some) Nodes run a copy of a Pod.
- Typically used for logging, monitoring, storage daemons.
yaml
# daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prometheus-node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter
        ports:
        - containerPort: 9100
StatefulSet
- Manages stateful applications.
- Maintains a sticky identity for each of its Pods.
- Suitable for databases and stateful applications.
yaml
# statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: "mongodb"
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:4.4
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
Job and CronJob
- Job: Runs a Pod until completion.
- CronJob: Creates Jobs on a repeating schedule.
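A Job can also be created imperatively straight from the CLI; a small sketch (image and command are illustrative only):
bash
# Run a one-off Job to completion
kubectl create job pi --image=perl:5.34 -- perl -Mbignum=bpi -wle 'print bpi(100)'
# Wait for it to finish and read its output
kubectl wait --for=condition=complete job/pi --timeout=120s
kubectl logs job/pi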
yaml
# cronjob.yaml
apiVersion: batch/v1 # batch/v1beta1 is deprecated and was removed in Kubernetes 1.25
kind: CronJob
metadata:
  name: backup-database
spec:
  schedule: "0 1 * * *" # Every day at 1 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: mybackup:1.0
            command: ["/bin/sh", "-c", "backup.sh"]
          restartPolicy: OnFailure
Managing Kubernetes Objects
bash
# Create from YAML file
kubectl apply -f deployment.yaml
# Update image
kubectl set image deployment/nginx-deployment nginx=nginx:1.20
# Rollback
kubectl rollout undo deployment/nginx-deployment
# Scale
kubectl scale deployment/nginx-deployment --replicas=5
# Delete
kubectl delete deployment nginx-deployment
🧑🏫 Lesson 5: Networking in Kubernetes
Kubernetes Networking Model
- Flat network: Pods communicate with each other without NAT.
- Each Pod has a unique IP.
- Containers in a Pod share the same IP.
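A quick way to see this flat model in practice (pod names and IPs below are placeholders, and the image must contain ping or a similar tool):
bash
# Each pod gets its own cluster-wide IP
kubectl get pods -o wide
# Pods can reach each other's IPs directly, without NAT
kubectl exec -it <pod-a> -- ping <pod-b-ip>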
Networking Components
- Pod Network: Communication between pods.
- Service Network: Access to pods via services.
- Cluster DNS: Service discovery.
- Ingress: Routing HTTP/HTTPS traffic from outside to the cluster.
Network Plugins (CNI)
- Calico: High performance, supports network policy.
- Flannel: Simple, easy to setup.
- Weave Net: Easy to use, encrypted.
- Cilium: Based on eBPF, high performance.
Service Types
ClusterIP: (default)
- Internal IP within the cluster.
- Only accessible from within the cluster.
yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
NodePort:
- Opens a specific port on all nodes.
- Accessible externally via <NodeIP>:<NodePort>.
yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080 # Port 30000-32767
  type: NodePort
LoadBalancer:
- Uses the cloud provider's load balancer.
- Provides a public IP.
yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
ExternalName:
- Maps the service to a DNS name.
yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: database.example.com
Ingress
- Layer 7 (HTTP) load balancer.
- Routes traffic based on URL path or hostname.
- Requires an Ingress Controller (nginx, traefik, ...).
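One common way to get an Ingress Controller is the community ingress-nginx chart (Helm is covered in Lesson 8); on minikube, `minikube addons enable ingress` achieves the same thing. A minimal sketch:
bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace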
yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
Network Policies
- Control traffic flow between Pods.
- Similar to firewall rules.
yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: backend
    ports:
    - protocol: TCP
      port: 5432
Debugging Network Issues
bash
# Check service
kubectl get svc my-service
# Debug DNS
kubectl run -i --tty --rm debug --image=busybox -- sh
# Inside debug container
nslookup my-service
# Check endpoints
kubectl get endpoints my-service
# View network policies
kubectl get networkpolicies
🧑🏫 Lesson 6: Storage and Persistence
Persistent Storage in Kubernetes
- Data persists independently of the Pod lifecycle.
- Kubernetes abstraction for managing storage.
Volumes
- Storage attached to a Pod.
- Exists as long as the Pod exists.
Common Volume Types
- emptyDir: Temporary directory, deleted when Pod is removed.
- hostPath: Uses a path on the Node.
- PersistentVolume (PV): Storage independent of the Pod.
- ConfigMap/Secret as Volume: Mount configuration/secrets.
emptyDir
yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir: {}
hostPath
yaml
apiVersion: v1
kind: Pod
metadata:
  name: log-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/nginx
  volumes:
  - name: log-volume
    hostPath:
      path: /var/log/pods
      type: DirectoryOrCreate
Persistent Storage Architecture
- PersistentVolume (PV): Actual storage resource.
- PersistentVolumeClaim (PVC): Request for storage.
- StorageClass: Defines storage type and provisioner.
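Once the objects shown below are applied, the binding between PVs and PVCs can be checked from the CLI; a brief sketch (the claim name is the one defined later in this lesson):
bash
# List PersistentVolumes and their binding status
kubectl get pv
# List PersistentVolumeClaims in the current namespace
kubectl get pvc
# Inspect why a claim is Pending (events are shown at the bottom)
kubectl describe pvc db-storage-claim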
PersistentVolume (PV)
yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-storage
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /data/pv0001
PersistentVolumeClaim (PVC)
yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-storage-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard
Using PVC in a Pod
yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-pod
spec:
  containers:
  - name: db
    image: postgres:13
    volumeMounts:
    - name: db-data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: db-data
    persistentVolumeClaim:
      claimName: db-storage-claim
StorageClass
- Provides dynamic storage provisioning.
- Integrates with cloud providers.
yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
Volume Snapshots
- Back up data from PVC.
- Restore from snapshot.
yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: db-storage-claim
StatefulSet with Storage
yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: "postgres"
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "standard"
      resources:
        requests:
          storage: 10Gi
Best Practices
- Use PVs and PVCs instead of direct volumes.
- Define appropriate StorageClass for each workload type.
- Configure backup and disaster recovery.
- Use StatefulSets with volumeClaimTemplates for stateful apps.
- Monitor storage capacity and performance.
🧑🏫 Lesson 7: ConfigMaps and Secrets
ConfigMaps
- Stores configuration data as key-value pairs.
- Decouples configuration artifacts from image content.
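ConfigMaps can also be created imperatively from literals or files; a short sketch using the same keys as the manifest below:
bash
# From literal key-value pairs
kubectl create configmap app-config \
  --from-literal=database.host=mysql \
  --from-literal=database.port=3306
# From a file (the file name becomes the key)
kubectl create configmap app-json-config --from-file=config.json
# Inspect the result
kubectl get configmap app-config -o yaml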
Create ConfigMap
yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database.host: "mysql"
  database.port: "3306"
  ui.theme: "dark"
  config.json: |
    {
      "log_level": "info",
      "debug": false,
      "features": {
        "billing": true,
        "notifications": false
      }
    }
Using ConfigMap
Environment variables:
yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    env:
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: database.host
    - name: DB_PORT
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: database.port
envFrom - all keys as environment variables:
yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config
Volume mount:
yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
Secrets
- Stores sensitive information (passwords, tokens, keys).
- Similar to ConfigMap but safer.
- Base64 encoded (not strong encryption).
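The base64 values used in the manifest below can be produced and checked with plain shell commands:
bash
# Encode (-n keeps a trailing newline out of the value)
echo -n 'admin' | base64   # YWRtaW4=
echo -n 'secret' | base64  # c2VjcmV0
# Decode to verify
echo 'YWRtaW4=' | base64 --decode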
Create Secret
yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  # Values must be base64 encoded
  username: YWRtaW4= # admin
  password: c2VjcmV0 # secret
Create Secret from command line
bash
# From file
kubectl create secret generic ssl-cert --from-file=cert.pem --from-file=key.pem
# From literal
kubectl create secret generic api-keys --from-literal=api_key=123456 --from-literal=secret_key=abcdef
Using Secret
Environment variables:
yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: app
    image: myapp
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
Volume mount:
yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: app
    image: myapp
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: db-credentials
Secret Types
- Opaque: Default, arbitrary data.
- kubernetes.io/service-account-token: Service account token.
- kubernetes.io/dockerconfigjson: Docker registry auth.
- kubernetes.io/tls: TLS certificates.
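kubectl has dedicated helpers for the non-Opaque types; a hedged sketch (file names, registry, and credentials are placeholders):
bash
# TLS secret from a certificate/key pair
kubectl create secret tls my-tls --cert=tls.crt --key=tls.key
# Docker registry secret (equivalent to the YAML below)
kubectl create secret docker-registry docker-registry-cred \
  --docker-server=registry.example.com \
  --docker-username=<user> --docker-password=<password>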
Docker Registry Secret
yaml
apiVersion: v1
kind: Secret
metadata:
  name: docker-registry-cred
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded-docker-config>
Using Docker Registry Secret
yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  containers:
  - name: app
    image: myprivate/repo:tag
  imagePullSecrets:
  - name: docker-registry-cred
Best Practices (Secrets Security)
- Do not store Secrets in git repositories.
- Restrict access to Secrets using RBAC.
- Use external solutions like Vault for secret management.
- Set network policies for Pods using Secrets.
- Encrypt etcd to protect stored Secrets.
🧑🏫 Lesson 8: Helm - Package Manager for Kubernetes
What is Helm?
- Package manager for Kubernetes.
- Helps define, install, and upgrade complex Kubernetes applications.
- Similar to npm, pip, or apt but for Kubernetes.
Basic Helm Concepts
- Chart: A Helm package, contains all resource definitions.
- Repository: Place where charts are collected and shared.
- Release: An instance of a chart running in a Kubernetes cluster.
Installing Helm
bash
# Linux
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# macOS
brew install helm
# Windows
choco install kubernetes-helm
Structure of a Helm Chart
text
mychart/
  Chart.yaml            # Information about the chart
  values.yaml           # Default configuration values
  charts/               # Chart dependencies
  templates/            # Directory of templates
    deployment.yaml
    service.yaml
    ingress.yaml
    _helpers.tpl        # Partial templates
    NOTES.txt           # Notes displayed after installation
Chart.yaml
yaml
apiVersion: v2
name: myapp
version: 1.0.0
description: My Application Helm Chart
type: application
appVersion: "1.0.0"
dependencies:
- name: mysql
  version: 8.8.5
  repository: https://charts.bitnami.com/bitnami
values.yaml
yaml
# Default values
replicaCount: 2
image:
  repository: nginx
  tag: 1.19.0
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: false
  hosts:
  - host: chart-example.local
    paths: ["/"]
Template file (deployment.yaml)
yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "myapp.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "myapp.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
Helm Commands
bash
# Search charts
helm search hub wordpress
# Add repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# Install chart
helm install my-release bitnami/wordpress
# List releases
helm list
# Check status
helm status my-release
# Upgrade release
helm upgrade my-release bitnami/wordpress --values=custom-values.yaml
# Rollback
helm rollback my-release 1
# Uninstall
helm uninstall my-release
Create new Helm Chart
bash
# Create new chart
helm create mychart
# Lint chart
helm lint mychart
# Package chart
helm package mychart
# Install local chart
helm install my-app ./mychart
# Install with custom values
helm install my-app ./mychart -f my-values.yaml
Helm Template Functions
yaml
# Quote
app: {{ .Values.appName | quote }}
# Default
replicas: {{ .Values.replicas | default 1 }}
# Indent
data:
{{- .Values.configuration | nindent 2 }}
# toYaml
labels:
{{- toYaml .Values.labels | nindent 4 }}
# if/else
{{- if .Values.ingress.enabled }}
# ingress configuration
{{- end }}
Chart Hooks
- pre-install, post-install
- pre-delete, post-delete
- pre-upgrade, post-upgrade
- pre-rollback, post-rollback
- test
yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "mychart.fullname" . }}-db-init
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      containers:
      - name: db-init
        image: postgres
        command: ["psql", "--command", "CREATE DATABASE app"]
      restartPolicy: Never
Best Practices (Helm Tips)
- Use Helm repo to manage charts.
- Split values.yaml into logical components.
- Parameterize templates through values.yaml instead of hardcoding values.
- Use helpers to reuse code.
- Add NOTES.txt to guide users.
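Two commands that support the points above by rendering and validating templates locally before installing; a minimal sketch:
bash
# Render templates locally with a given values file
helm template my-app ./mychart -f my-values.yaml
# Render and validate against the cluster without installing
helm install my-app ./mychart --dry-run --debug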
🧪 FINAL PROJECT: Building and Deploying a Microservices App on Kubernetes
Project Description
Build a complete microservices system and deploy it on a Kubernetes cluster, with the following components:
- Frontend SPA (Single Page Application).
- API Gateway.
- 2-3 Backend Microservices.
- Database (SQL or NoSQL).
- Authentication/Authorization system.
Requirements
- Build Docker images for each microservice.
- Create Kubernetes manifests for all components.
- Configure Services, Ingress to manage traffic.
- Setup PersistentVolumes for database.
- Configure ConfigMaps and Secrets.
- Deploy Prometheus and Grafana for monitoring.
- Configure Horizontal Pod Autoscaler (see the sketch after this list).
- Create Helm chart for the entire application.
- Write scripts for CI/CD pipeline.
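For the autoscaling requirement, a hedged sketch of a Horizontal Pod Autoscaler (the deployment name and thresholds are only examples; the metrics-server add-on must be running):
bash
# Scale the API deployment between 2 and 10 replicas at ~70% CPU
kubectl autoscale deployment api-service --cpu-percent=70 --min=2 --max=10
# Watch the autoscaler's decisions
kubectl get hpa -w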
Expected Outcome
- Application runs stably on Kubernetes.
- Detailed deployment documentation and system architecture.
- Automatic scaling capability based on load.
- Complete monitoring and alerting.
- CI/CD pipeline for application updates.
