Docker Deployment Guide: From Development to Production
Docker has revolutionized how we develop, ship, and run applications. By containerizing applications, Docker provides consistency across development, testing, and production environments while simplifying deployment processes. This comprehensive guide covers everything from Docker basics to production-ready deployment strategies.
Introduction to Docker
Docker is a containerization platform that packages applications and their dependencies into lightweight, portable containers. These containers can run consistently across different environments, solving the "it works on my machine" problem that has plagued developers for decades.
Key Benefits of Docker
Consistency Across Environments
- Applications run identically in development, staging, and production
- Eliminates environment-related bugs and deployment issues
- Standardizes development workflows across team members

Resource Efficiency
- Containers share the host OS kernel, using fewer resources than virtual machines
- Faster startup times compared to traditional virtualization
- Higher-density deployment on physical hardware

Simplified Deployment
- Applications are packaged with all dependencies included
- Easy rollback capabilities with versioned images
- Streamlined CI/CD pipeline integration

Scalability and Orchestration
- Easy horizontal scaling with container orchestration
- Built-in load balancing and service discovery
- Auto-healing and self-recovery capabilities
Docker Fundamentals
Core Concepts
Images
- Read-only templates used to create containers
- Built from Dockerfile instructions
- Stored in registries such as Docker Hub or private repositories
- Versioned using tags for different releases

Containers
- Running instances of Docker images
- Isolated processes with their own filesystem, network, and resources
- Stateless by design, with persistent data stored in volumes
- Can be started, stopped, and deleted without affecting the host

Dockerfile
- A text file containing the instructions used to build a Docker image
- Defines the base image, dependencies, and configuration
- Enables reproducible image builds
- Version controlled alongside application code

Volumes
- Persistent data storage for containers
- Survives the container lifecycle (creation, deletion, recreation)
- Can be shared between multiple containers
- Managed by Docker or mapped to host directories (see the short example below)
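As a quick illustration of volume persistence (the volume, container names, and password value here are arbitrary), a named volume outlives any container that mounts it:

# Create a named volume and mount it into a container
docker volume create app_data
docker run -d --name db1 -e POSTGRES_PASSWORD=example -v app_data:/var/lib/postgresql/data postgres:14-alpine

# Remove the container; the data in app_data persists and can be remounted elsewhere
docker rm -f db1
docker run -d --name db2 -e POSTGRES_PASSWORD=example -v app_data:/var/lib/postgresql/data postgres:14-alpine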
Basic Docker Commands
# Image management
docker build -t myapp:latest .
docker pull nginx:alpine
docker push myregistry/myapp:v1.0
docker images
docker rmi image_id
# Container operations
docker run -d --name mycontainer myapp:latest
docker ps
docker stop container_id
docker start container_id
docker logs container_id
docker exec -it container_id /bin/bash
# System maintenance
docker system prune
docker volume prune
docker network prune
Creating Docker Images
Writing Effective Dockerfiles
Best Practices for Dockerfile Creation:
# Use specific, minimal base images
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy package files first (better layer caching)
COPY package*.json ./
# Install production dependencies only
RUN npm ci --omit=dev
# Copy application code
COPY . .
# Create a non-root user and give it ownership of the app directory
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001 && \
    chown -R nextjs:nodejs /app
USER nextjs
# Expose port
EXPOSE 3000
# Health check (node:18-alpine does not include curl, so use the BusyBox wget built into Alpine)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -q --spider http://localhost:3000/health || exit 1
# Start application
CMD ["npm", "start"]
Multi-Stage Builds
Reduce image size and improve security with multi-stage builds:
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:18-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
USER node
CMD ["npm", "start"]
Image Optimization Strategies
Layer Optimization
- Order instructions from least to most likely to change
- Combine related RUN commands to reduce layers
- Use a .dockerignore file to exclude unnecessary files from the build context
- Clean up package caches in the same RUN command that creates them (see the snippet after this list)

Security Hardening
- Use minimal base images (e.g. Alpine Linux)
- Run containers as non-root users
- Scan images for vulnerabilities regularly
- Keep base images updated

Size Reduction Techniques
- Use multi-stage builds for smaller production images
- Remove development dependencies from production images
- Pin specific package versions to avoid bloat
- Compress and optimize assets during the build
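As a small illustrative sketch of the layer rules above (the installed packages are placeholders), note that the package cache is cleaned in the same RUN instruction that creates it, so it never persists in a layer:

# Debian-based image: install and clean the apt cache in a single layer
# (on Alpine, apk add --no-cache achieves the same effect)
FROM node:18-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*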
Docker Compose for Multi-Container Applications
Basic Docker Compose Setup
Docker Compose simplifies multi-container application deployment:
version: '3.8'

services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://user:password@db:5432/myapp
    depends_on:
      - db
      - redis
    volumes:
      - ./uploads:/app/uploads
    restart: unless-stopped

  db:
    image: postgres:14-alpine
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - web
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:

networks:
  default:
    driver: bridge
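With this file saved as docker-compose.yml, the whole stack can be started and inspected with a few commands:

# Start the stack in the background, check container status, and tail the web logs
docker compose up -d
docker compose ps
docker compose logs -f web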
Advanced Compose Features
Environment Configuration
services:
  web:
    build: .
    env_file:
      - .env.production
    environment:
      - DEBUG=${DEBUG:-false}
      - API_KEY=${API_KEY}
Health Checks
services:
  web:
    build: .
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
Resource Limits
services:
  web:
    build: .
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M
Production Deployment Strategies
Container Orchestration Options
Docker Swarm
- Native Docker clustering solution
- Simpler to set up than Kubernetes
- Good fit for smaller deployments
- Built-in load balancing and service discovery

Kubernetes
- Industry-standard container orchestration
- Advanced scheduling and scaling capabilities
- Extensive ecosystem and community support
- Complex, but powerful for large-scale deployments

Managed Services
- AWS ECS/EKS, Google GKE, Azure AKS
- Reduced operational overhead
- Integration with cloud provider services
- Automatic updates and maintenance
Deployment Patterns
Blue-Green Deployment
# Deploy new version alongside old
docker service create --name app-green myapp:v2.0
# Switch traffic to new version
docker service update --publish-rm 80:80 app-blue
docker service update --publish-add 80:80 app-green
# Remove old version after verification
docker service rm app-blue
Rolling Updates
# Gradual replacement of containers
docker service update --image myapp:v2.0 myapp-service
Canary Deployments
# Deploy small percentage to new version
docker service create --name app-canary --replicas 1 myapp:v2.0
# Monitor metrics and gradually increase traffic
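One straightforward way to widen the canary is to shift replica counts between the stable and canary services (the service names and replica counts here are illustrative), then promote and retire the canary once metrics look healthy:

# Shift more replicas to the canary as confidence grows
docker service scale app-stable=7 app-canary=3

# Promote: move the stable service to the new image, then remove the canary
docker service update --image myapp:v2.0 app-stable
docker service rm app-canary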
Production Configuration
Docker Daemon Configuration
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "userland-proxy": false,
  "experimental": false,
  "live-restore": true
}
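This configuration normally lives at /etc/docker/daemon.json. On a systemd-based host, restart the daemon after editing it; with live-restore enabled, running containers keep running across the restart:

# Apply daemon configuration changes on a systemd-based host
sudo systemctl restart docker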
Resource Management
# Set memory and CPU limits
docker run -d \
  --memory="1g" \
  --cpus="1.0" \
  --restart=unless-stopped \
  myapp:latest
Security Best Practices
Container Security
Image Security
- Use official, minimal base images
- Regularly update base images
- Scan images for vulnerabilities
- Sign images to verify authenticity

Runtime Security
- Run containers as non-root users
- Use read-only filesystems where possible
- Limit container capabilities
- Implement network segmentation (see the hardened docker run example below)
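As a hedged sketch of the runtime-hardening points above (the image name, user ID, and tmpfs mount are placeholders and assume the application can run under these constraints), a locked-down docker run invocation might look like this:

# Run as a non-root user with a read-only root filesystem and no Linux capabilities
docker run -d \
  --user 1001:1001 \
  --read-only \
  --tmpfs /tmp \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  myapp:latest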
Secrets Management
# Docker Swarm secrets
echo "mypassword" | docker secret create db_password -
# Use in service
docker service create \
  --secret db_password \
  --env DB_PASSWORD_FILE=/run/secrets/db_password \
  myapp:latest
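The same secret can also be consumed from a stack file deployed with docker stack deploy; a minimal sketch, assuming the db_password secret created above already exists in the swarm:

services:
  web:
    image: myapp:latest
    secrets:
      - db_password
    environment:
      - DB_PASSWORD_FILE=/run/secrets/db_password

secrets:
  db_password:
    external: true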
Network Security
Network Isolation
services:
  web:
    networks:
      - frontend
      - backend
  db:
    networks:
      - backend

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true
TLS Configuration
services:
  nginx:
    volumes:
      - ./ssl:/etc/nginx/ssl:ro
    environment:
      - SSL_CERT_PATH=/etc/nginx/ssl/cert.pem
      - SSL_KEY_PATH=/etc/nginx/ssl/key.pem
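Note that the stock nginx image does not read these environment variables by itself; they only matter if your image templates its configuration from them. With plain nginx, the certificate paths are referenced directly in nginx.conf. A minimal TLS server block (placed inside the http context, with the server name and upstream as placeholders) might look like:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    location / {
        proxy_pass http://web:3000;
    }
}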
Monitoring and Logging
Container Monitoring
Prometheus + Grafana Stack
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana_data:/var/lib/grafana

  node-exporter:
    image: prom/node-exporter
    ports:
      - "9100:9100"

volumes:
  grafana_data:
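The compose file above mounts a ./prometheus.yml that is not shown. A minimal scrape configuration targeting the node-exporter service (the job name and scrape interval are assumptions) could look like:

# prometheus.yml - minimal scrape configuration
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']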
Health Monitoring
# Container health status
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Image}}"
# Resource usage
docker stats --no-stream
# System events
docker events --filter container=myapp
Centralized Logging
ELK Stack Configuration
services:
  elasticsearch:
    image: elasticsearch:7.14.0
    environment:
      - discovery.type=single-node
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data

  logstash:
    image: logstash:7.14.0
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf

  kibana:
    image: kibana:7.14.0
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200

volumes:
  elasticsearch_data:
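Likewise, the ./logstash.conf mounted above is not shown. A minimal pipeline that accepts JSON log events over TCP and forwards them to Elasticsearch (the port and index pattern are assumptions, and the logstash service would also need that port reachable) might look like:

# logstash.conf - minimal pipeline: TCP input, Elasticsearch output
input {
  tcp {
    port => 5000
    codec => json
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}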
Application Logging
# In the Dockerfile, make sure application output goes straight to stdout/stderr
# (for example, Python applications should disable output buffering)
ENV PYTHONUNBUFFERED=1
# Configure logging driver in compose
services:
  app:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
Scaling with Docker Swarm
Swarm Initialization
# Initialize swarm on manager node
docker swarm init --advertise-addr <manager-ip>
# Join worker nodes
docker swarm join --token <worker-token> <manager-ip>:2377
# Deploy stack
docker stack deploy -c docker-compose.yml myapp
Service Scaling
# Scale service replicas
docker service scale myapp_web=5
# Update service image
docker service update --image myapp:v2.0 myapp_web
# Rolling update with constraints
docker service update \
  --update-parallelism 2 \
  --update-delay 10s \
  --image myapp:v2.0 \
  myapp_web
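If an update misbehaves, Swarm retains the previous service specification, so reverting is a single command:

# Roll the service back to its previous definition
docker service rollback myapp_web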
Load Balancing
services:
  web:
    image: myapp:latest
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == worker
    networks:
      - frontend

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    configs:
      - source: nginx_config
        target: /etc/nginx/nginx.conf
    networks:
      - frontend

networks:
  frontend:

configs:
  nginx_config:
    file: ./nginx.conf
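The nginx_config referenced above is not shown. A minimal nginx.conf that balances requests across the web replicas (relying on Docker's DNS-based service discovery; the upstream name and port are assumptions) might be:

# nginx.conf - minimal load-balancing configuration
events {}

http {
    upstream app {
        server web:3000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
        }
    }
}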
Troubleshooting Common Issues
Container Debugging
Inspect Container State
# Detailed container information
docker inspect container_id
# Container logs
docker logs --follow --timestamps container_id
# Execute commands in running container
docker exec -it container_id /bin/sh
# Copy files from container
docker cp container_id:/app/logs ./local_logs
Network Debugging
# List networks
docker network ls
# Inspect network
docker network inspect bridge
# Test connectivity from inside another container's network namespace
docker run --rm -it --network container:myapp nicolaka/netshoot
# Port mapping issues
docker port container_id
Performance Issues
Resource Monitoring
# Real-time resource usage
docker stats
# Historical usage (with Prometheus)
docker run -d \
--name node-exporter \
-p 9100:9100 \
prom/node-exporter
Storage Issues
# Check disk usage
docker system df
# Clean up unused resources
docker system prune -a
# Volume management
docker volume ls
docker volume prune
Build Problems
Layer Caching Issues
# Force rebuild without cache
docker build --no-cache -t myapp:latest .
# Check build context size
du -sh .
# Optimize with .dockerignore
echo "node_modules" >> .dockerignore
echo ".git" >> .dockerignore
echo "*.log" >> .dockerignore
Dependency Problems
# Pin specific versions
FROM node:18.16.0-alpine
# Use lock files
COPY package-lock.json ./
RUN npm ci
CI/CD Integration
GitLab CI Example
default:
  image: docker:latest
  services:
    - docker:dind

stages:
  - build
  - test
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

build:
  stage: build
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

test:
  stage: test
  script:
    - docker run --rm $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA npm test

deploy:
  stage: deploy
  script:
    - docker service update --image $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA myapp_web
  only:
    - main
GitHub Actions Example
name: Docker Build and Deploy

on:
  push:
    branches: [ main ]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build Docker image
        run: |
          docker build -t myapp:${{ github.sha }} .
      - name: Run tests
        run: |
          docker run --rm myapp:${{ github.sha }} npm test
      - name: Deploy to production
        run: |
          docker service update --image myapp:${{ github.sha }} myapp_web
Conclusion
Docker has fundamentally changed how we approach application deployment and infrastructure management. By containerizing applications, teams can achieve consistency across environments, improve resource utilization, and simplify deployment processes.
Key takeaways for successful Docker adoption:
- Start simple with basic containerization before moving to orchestration
- Focus on security from the beginning with proper image and runtime practices
- Implement monitoring early to understand application behavior in containers
- Use appropriate orchestration based on your scale and complexity needs
- Automate deployment processes to reduce errors and improve reliability
The containerization ecosystem continues to evolve rapidly, with new tools and best practices emerging regularly. Staying current with Docker developments and community best practices ensures you can leverage the full benefits of containerization for your applications.
Whether you're deploying a simple web application or a complex microservices architecture, Docker provides the foundation for reliable, scalable, and maintainable deployment practices.
Ready to containerize your applications? Contact me for personalized Docker training and implementation strategies tailored to your specific deployment needs and infrastructure requirements.