Docker is an open platform designed to automate the processes of building, sharing, and running applications. Its key value is to provide a consistent environment where applications work reliably regardless of the OS or platform.
Where it's used
Primarily in software development, and often in DevOps practices and systems administration.
Kubernetes is also an open-source platform, but it's designed specifically for automating containerized workloads. It provides mechanisms for deploying, scaling, and maintaining applications.
Kubernetes cluster architecture
A Kubernetes cluster includes two types of nodes:
- Control Plane Nodes — previously known as Master nodes.
- Worker Nodes — the nodes that run your application workloads.
The Control Plane coordinates the cluster. It manages the state of the system, scheduling, and health checks. Technically, Control Plane nodes can run workloads, but it's not recommended: if malicious code ends up on such a node, the consequences can be more serious than on a Worker Node.
Did you know?
Kubernetes originated at Google and was inspired by their internal cluster manager, Borg — whose own name was a nod to Star Trek. The name "Kubernetes" comes from the Greek word κυβερνήτης (kybernḗtēs), meaning "helmsman" or "pilot."
Containers: what are they?
Containerization means packaging an app in a lightweight, isolated environment that's easy to move across systems.
Key differences from virtual machines:
- VMs emulate a complete system including kernel and OS; containers use the host OS kernel and isolate processes and resources.
- VMs require hypervisors and consume more resources.
- VMs are more flexible: each VM can run a different guest OS.
- Containers are faster, lighter, and task-specific.
For security, container isolation is weaker than that of VMs. However, tools like:
- seccomp profiles
- AppArmor/SELinux
- Pod Security Standards
- Network Policies
...can strengthen isolation enough for most use cases.
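As a concrete illustration, a Pod spec can opt into the runtime's default seccomp profile and drop Linux capabilities via securityContext. This is a minimal sketch; the names and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-hardened        # placeholder name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault    # apply the container runtime's default seccomp profile
  containers:
  - name: myapp
    image: username/myapp:1.0
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]         # drop all Linux capabilities the app doesn't need
```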
Why use Kubernetes for container management
- Abstraction from hardware: container builds work the same across environments.
- OS/cloud agnostic: supports Ubuntu, RHEL, GKE, EKS, AKS, etc.
- Declarative config: define desired state and let Kubernetes handle it.
- Efficient resource allocation: fine-tune CPU and memory per container.
- Auto-healing: restarts crashed containers and reschedules pods on node failure.
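Auto-healing works best when Kubernetes can tell whether a container is actually healthy. A hedged sketch of probes inside a Deployment's container spec — the /healthz path is an assumption, your app must serve such an endpoint:

```yaml
containers:
- name: myapp
  image: username/myapp:1.0
  livenessProbe:              # restart the container if this check keeps failing
    httpGet:
      path: /healthz          # assumed health endpoint
      port: 3000
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:             # keep the pod out of Service endpoints until ready
    httpGet:
      path: /healthz
      port: 3000
    periodSeconds: 5
```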
Preparing a Docker image
Install Docker on your machine first. Instructions by OS are available at https://docs.docker.com/get-docker/
Key Concepts:
- Images — immutable templates with code, dependencies, and configs.
- Containers — runtime instances of images.
Where to Host Docker Containers
After building and pushing your image, you’ll need infrastructure to run it. Look for reliability, performance, and flexibility.
One great option is PSB.Hosting — a provider offering:
- Powerful VPS with AMD Ryzen CPUs and NVMe storage
- Full support for Linux distros and Docker environments
- Quick setup and scalable plans
Perfect for both small Kubernetes clusters and CI/CD pipelines when you need full control without cloud vendor lock-in.
Writing a Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
This does the following:
- Uses Node.js with Alpine Linux
- Sets /app as the working directory
- Installs dependencies
- Copies the app files
- Declares port 3000 (EXPOSE documents the port; it doesn't publish it by itself)
- Runs the app
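Since COPY . . copies the entire build context, a .dockerignore file keeps local artifacts out of the image and speeds up builds. A minimal example for a Node.js project (entries are typical, adjust for your repo):

```
node_modules
npm-debug.log
.git
.env
```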
Building & Publishing Docker Image
# Build
docker build -t myapp:1.0 .

# Tag
docker tag myapp:1.0 username/myapp:1.0

# Push
docker push username/myapp:1.0
Core Kubernetes Objects
- Pod — smallest deployable unit, holds one or more containers
- Deployment — manages identical pods and updates
- Service — exposes pods via stable IP/DNS
- ConfigMap/Secret — config data
- Ingress — external HTTP/S access
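ConfigMap and Secret are the only objects in this list not shown later in the guide, so here is a hedged sketch; the key names and values are purely illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  LOG_LEVEL: "info"           # illustrative non-sensitive setting
---
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret
type: Opaque
stringData:
  API_KEY: "changeme"         # illustrative; store real secrets securely
```

A container can then consume both via envFrom (configMapRef/secretRef) or mount them as files.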
Deploying the App in Kubernetes
1. Create Deployment
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: username/myapp:1.0
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"
2. Create Service
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 3000
  type: ClusterIP
3. Optional: Ingress
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
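If you terminate TLS at the Ingress, a tls section can be added to the same manifest. A sketch under two assumptions: an Ingress controller is installed in the cluster, and a TLS Secret named myapp-tls (a placeholder name) already exists:

```yaml
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls     # assumed kubernetes.io/tls Secret with cert and key
```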
4. Apply Configuration
# Check context
kubectl config current-context

# Optional: create namespace
kubectl create namespace myapp

# Apply
kubectl apply -f deployment.yaml -n myapp
kubectl apply -f service.yaml -n myapp
kubectl apply -f ingress.yaml -n myapp

# Check status
kubectl get pods -n myapp
kubectl get services -n myapp
kubectl get ingress -n myapp
Scaling & Updating the App
Scaling
kubectl scale deployment myapp -n myapp --replicas=5
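Manual scaling can also be replaced with a HorizontalPodAutoscaler. A sketch assuming the metrics-server add-on is installed in the cluster (the 70% target is an arbitrary example):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
  namespace: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70% of requests
```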
Updating
kubectl set image deployment/myapp myapp=username/myapp:2.0 -n myapp
kubectl rollout status deployment/myapp -n myapp
Rollback
kubectl rollout undo deployment/myapp -n myapp
Monitoring & Debugging
Logs
kubectl get pods -n myapp
kubectl logs pod/myapp-abc123 -n myapp
Shell Access
kubectl exec -it pod/myapp-abc123 -n myapp -- /bin/sh
Note: the node:18-alpine image ships /bin/sh, not bash.
Resource Usage
kubectl top pods -n myapp
This requires the metrics-server add-on to be installed in the cluster.
Conclusion
Deploying Docker containers with Kubernetes may seem complex at first, but the benefits — scalability, reliability, and fine-grained control — make it worth the learning curve. By following this guide, you’ll be able to build, run, and manage your app lifecycle efficiently.
For further reading, visit the official Kubernetes documentation.