# App Deployment

Learn how to deploy your applications to various platforms and environments, with best practices for production deployment.

## Overview

Application deployment is the process of making your application available to users in a production environment. This tutorial covers common deployment strategies, platforms, and best practices.

## Deployment Strategies
### Blue-Green Deployment

Deploy to a parallel environment and switch traffic once the new version is verified:

```yaml
# docker-compose.yml
version: '3.8'

services:
  app-blue:
    image: myapp:v1.0
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production

  app-green:
    image: myapp:v1.1
    ports:
      - "3001:3000"
    environment:
      - NODE_ENV=production

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
```
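The mounted `nginx.conf` decides which color receives traffic. A minimal sketch of the switch, assuming a hypothetical `active` upstream in that file (written to `/tmp` here purely for illustration):

```shell
# Hypothetical nginx.conf fragment with the "active" upstream pointing at blue
cat > /tmp/nginx.conf <<'EOF'
upstream active {
    server app-blue:3000;
}
EOF

# After verifying green out-of-band (e.g. curl http://localhost:3001/health),
# repoint the upstream at green and confirm the change
sed -i 's/app-blue:3000/app-green:3000/' /tmp/nginx.conf
grep 'server' /tmp/nginx.conf

# Then reload nginx so the switch takes effect:
# docker compose exec nginx nginx -s reload
```

Rolling back is the same edit in reverse, which is the main appeal of blue-green: the old environment stays running until you are sure.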
### Rolling Deployment

Gradually replace instances so the service never goes fully offline:

```yaml
# kubernetes-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 3000
```
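With the strategy above, changing the image triggers a rollout that replaces pods one at a time. These standard `kubectl` commands drive and monitor it (deployment and container names match the manifest):

```shell
# Start a rolling update by changing the container image
kubectl set image deployment/myapp myapp=myapp:v1.1

# Watch pods being replaced one at a time (maxUnavailable: 1, maxSurge: 1)
kubectl rollout status deployment/myapp

# If the new version misbehaves, revert to the previous ReplicaSet
kubectl rollout undo deployment/myapp
```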
### Canary Deployment

Route a small percentage of traffic to the new version to test it in production:

```yaml
# canary-service.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp-canary
spec:
  selector:
    app: myapp
    version: canary
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - match:
        - headers:
            canary:
              exact: "true"
      route:
        - destination:
            host: myapp-canary
    - route:
        - destination:
            host: myapp-stable
          weight: 90
        - destination:
            host: myapp-canary
          weight: 10
```
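With this routing rule, roughly 10% of ordinary requests reach the canary, while the header match lets you target it deliberately. A quick smoke test, assuming the service is reachable at a hypothetical `myapp.example.com` and exposes a version endpoint:

```shell
# Force a request onto the canary via the header match
curl -H "canary: true" https://myapp.example.com/health

# Ordinary requests split ~90/10; repeat a request and tally
# which version answers (assumes the app reports it at /version)
for i in $(seq 1 20); do
  curl -s https://myapp.example.com/version
  echo
done | sort | uniq -c
```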
## Cloud Platforms

### AWS Deployment

#### Elastic Beanstalk

A `Dockerrun.aws.json` file describes the containers for a multi-container Elastic Beanstalk environment:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "myapp:latest",
      "essential": true,
      "memory": 512,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 3000
        }
      ]
    }
  ]
}
```
#### ECS with Fargate

```json
{
  "family": "myapp-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::account:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "myapp:latest",
      "portMappings": [
        {
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/myapp",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}
```
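Registering the task definition and rolling a service onto it can be done with the AWS CLI (cluster and service names below are placeholders):

```shell
# Register the task definition from the JSON above
aws ecs register-task-definition --cli-input-json file://task-definition.json

# Point the service at the new revision and force a fresh deployment
aws ecs update-service \
  --cluster my-cluster \
  --service myapp-service \
  --task-definition myapp-task \
  --force-new-deployment
```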
### Google Cloud Platform

#### App Engine

```yaml
# app.yaml
runtime: nodejs16
service: default

env_variables:
  NODE_ENV: production
  DATABASE_URL: ${DATABASE_URL}

automatic_scaling:
  min_instances: 1
  max_instances: 10
  target_cpu_utilization: 0.6

resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
```
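Deploying the configuration above is a single `gcloud` command (the project ID is a placeholder):

```shell
# Deploy app.yaml from the project directory
gcloud app deploy app.yaml --project my-project --quiet

# Tail the logs of the deployed service
gcloud app logs tail -s default
```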
#### Cloud Run

```yaml
# cloudrun.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
  annotations:
    run.googleapis.com/ingress: all
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: "10"
        run.googleapis.com/cpu-throttling: "false"
    spec:
      containerConcurrency: 80
      containers:
        - image: gcr.io/project-id/myapp:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
          resources:
            limits:
              cpu: 1000m
              memory: 512Mi
```
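The Knative-style manifest can be applied declaratively, or the image can be deployed imperatively; both are standard `gcloud` usage (region is a placeholder):

```shell
# Apply the declarative service definition
gcloud run services replace cloudrun.yaml --region us-central1

# Or deploy the image directly
gcloud run deploy myapp \
  --image gcr.io/project-id/myapp:latest \
  --region us-central1 \
  --allow-unauthenticated
```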
### Microsoft Azure

#### App Service

```yaml
# azure-pipelines.yml
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  azureSubscription: 'your-subscription'
  appName: 'your-app-name'
  resourceGroupName: 'your-resource-group'

stages:
  - stage: Build
    jobs:
      - job: Build
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '16.x'
          - script: |
              npm install
              npm run build
          - task: ArchiveFiles@2
            inputs:
              rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
              includeRootFolder: false
              archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
          - task: PublishBuildArtifacts@1

  - stage: Deploy
    jobs:
      - deployment: Deploy
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    appType: 'webAppLinux'
                    appName: '$(appName)'
                    package: '$(Pipeline.Workspace)/**/*.zip'
```
## Container Deployment

### Docker

```dockerfile
# Multi-stage Dockerfile
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

FROM node:16-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
USER node
CMD ["npm", "start"]
```
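Building and running the image locally before shipping it (assumes the app serves a `/health` endpoint, as in the monitoring section below):

```shell
# Build the image from the Dockerfile above
docker build -t myapp:latest .

# Run it with production settings and verify it answers
docker run -d --name myapp -p 3000:3000 -e NODE_ENV=production myapp:latest
curl -f http://localhost:3000/health

# Inspect startup output
docker logs myapp
```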
### Docker Compose

```yaml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
    depends_on:
      - db
      - redis
    restart: unless-stopped

  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:6-alpine
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - app
    restart: unless-stopped

volumes:
  postgres_data:
```
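Bringing the stack up and checking its state:

```shell
# Start the full stack in the background
docker compose up -d

# Confirm every service is running
docker compose ps

# Follow the app's logs; tear everything down when done
docker compose logs -f app
docker compose down
```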
### Kubernetes

```yaml
# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
```
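Applying the manifest and watching it converge:

```shell
# Apply the Deployment and Service
kubectl apply -f deployment.yml

# Wait until all 3 replicas are available
kubectl rollout status deployment/myapp

# Find the external address assigned to the LoadBalancer service
kubectl get service myapp-service
```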
## CI/CD Pipelines

### GitHub Actions

```yaml
# .github/workflows/deploy.yml
name: Deploy to Production

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '16'
          cache: 'npm'
      - run: npm ci
      - run: npm test
      - run: npm run lint

  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '16'
          cache: 'npm'
      - run: npm ci
      - run: npm run build
      - uses: actions/upload-artifact@v3
        with:
          name: build-files
          path: dist/

  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v3
      - uses: actions/download-artifact@v3
        with:
          name: build-files
          path: dist/
      - name: Deploy to AWS
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          aws s3 sync dist/ s3://my-app-bucket --delete
          aws cloudfront create-invalidation --distribution-id ${{ secrets.CLOUDFRONT_ID }} --paths "/*"
```
### GitLab CI/CD

```yaml
# .gitlab-ci.yml
stages:
  - test
  - build
  - deploy

variables:
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

test:
  stage: test
  image: node:16
  script:
    - npm ci
    - npm test
    - npm run lint
  coverage: '/Lines\s*:\s*(\d+\.\d+)%/'

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $DOCKER_IMAGE .
    - docker push $DOCKER_IMAGE
  only:
    - main

deploy:
  stage: deploy
  image: alpine/helm:latest
  script:
    - helm upgrade --install myapp ./helm-chart
      --set image.tag=$CI_COMMIT_SHA
      --set ingress.host=$PRODUCTION_HOST
  environment:
    name: production
    url: https://$PRODUCTION_HOST
  only:
    - main
```
## Environment Configuration

### Environment Variables

```bash
# .env.production (keep real secrets out of version control)
NODE_ENV=production
PORT=3000
DATABASE_URL=postgresql://user:pass@localhost:5432/myapp
REDIS_URL=redis://localhost:6379
JWT_SECRET=your-jwt-secret
API_BASE_URL=https://api.myapp.com
CDN_URL=https://cdn.myapp.com
```
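How the file is consumed depends on the platform; two common patterns are passing it to `docker run` or sourcing it into a shell. A small sketch of the latter, using a trimmed-down file written to `/tmp` purely for illustration:

```shell
# Write a minimal env file for the demo
cat > /tmp/.env.production <<'EOF'
NODE_ENV=production
PORT=3000
EOF

# Export every variable the file assigns, then start the app
set -a
. /tmp/.env.production
set +a
echo "$NODE_ENV on port $PORT"

# With Docker, the same file can be passed directly:
# docker run --env-file .env.production myapp:latest
```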
### Configuration Management

```javascript
// config/production.js
module.exports = {
  database: {
    host: process.env.DB_HOST,
    port: process.env.DB_PORT,
    name: process.env.DB_NAME,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    ssl: true,
    pool: {
      min: 2,
      max: 10
    }
  },
  redis: {
    url: process.env.REDIS_URL,
    retryDelayOnFailover: 100,
    maxRetriesPerRequest: 3
  },
  logging: {
    level: 'info',
    format: 'json'
  },
  security: {
    cors: {
      origin: process.env.ALLOWED_ORIGINS?.split(',') || [],
      credentials: true
    },
    rateLimit: {
      windowMs: 15 * 60 * 1000, // 15 minutes
      max: 100 // limit each IP to 100 requests per windowMs
    }
  }
}
```
## Monitoring and Health Checks

### Health Check Endpoints

```javascript
// routes/health.js
const express = require('express')
const db = require('../db')       // your database client (path is illustrative)
const redis = require('../redis') // your Redis client (path is illustrative)
const router = express.Router()

router.get('/health', async (req, res) => {
  try {
    // Check database connection
    await db.query('SELECT 1')
    // Check Redis connection
    await redis.ping()

    res.status(200).json({
      status: 'healthy',
      timestamp: new Date().toISOString(),
      uptime: process.uptime(),
      memory: process.memoryUsage(),
      version: process.env.npm_package_version
    })
  } catch (error) {
    res.status(503).json({
      status: 'unhealthy',
      error: error.message,
      timestamp: new Date().toISOString()
    })
  }
})

router.get('/ready', (req, res) => {
  // Check if the app is ready to receive traffic
  // (app.locals.isReady is set by your startup code once initialization finishes)
  if (req.app.locals.isReady) {
    res.status(200).json({ status: 'ready' })
  } else {
    res.status(503).json({ status: 'not ready' })
  }
})

module.exports = router
```
### Application Metrics

```javascript
// middleware/metrics.js
const prometheus = require('prom-client')

// Create metrics
const httpRequestDuration = new prometheus.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status_code']
})

const httpRequestTotal = new prometheus.Counter({
  name: 'http_requests_total',
  help: 'Total number of HTTP requests',
  labelNames: ['method', 'route', 'status_code']
})

module.exports = (req, res, next) => {
  const start = Date.now()

  res.on('finish', () => {
    const duration = (Date.now() - start) / 1000
    const labels = {
      method: req.method,
      route: req.route?.path || req.path,
      status_code: res.statusCode
    }
    httpRequestDuration.observe(labels, duration)
    httpRequestTotal.inc(labels)
  })

  next()
}
```
## Security Best Practices

### SSL/TLS Configuration

```nginx
# nginx.conf
server {
    listen 443 ssl http2;
    server_name myapp.com;

    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    add_header Strict-Transport-Security "max-age=63072000" always;
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    location / {
        proxy_pass http://app:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
### Secrets Management

```yaml
# kubernetes-secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  database-url: <base64-encoded-url>
  jwt-secret: <base64-encoded-secret>
  api-key: <base64-encoded-key>
```
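Rather than base64-encoding values by hand, `kubectl` can build the Secret manifest for you (the literal values here are placeholders):

```shell
# Generate the secret manifest from literal values
kubectl create secret generic app-secrets \
  --from-literal=database-url='postgresql://user:pass@db:5432/myapp' \
  --from-literal=jwt-secret='change-me' \
  --dry-run=client -o yaml > kubernetes-secrets.yml

# Apply it to the cluster
kubectl apply -f kubernetes-secrets.yml
```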
## Rollback Strategies

### Database Migrations

Every migration should ship with an inverse operation so schema changes can be undone:

```javascript
// migrations/rollback.js
// Inverse SQL for each migration type
const rollbackStrategies = {
  addColumn: (table, column) => `ALTER TABLE ${table} DROP COLUMN ${column}`,
  dropColumn: (table, column, definition) => `ALTER TABLE ${table} ADD COLUMN ${column} ${definition}`,
  createTable: (table) => `DROP TABLE ${table}`,
  dropTable: (table, schema) => `CREATE TABLE ${table} (${schema})`
}

async function rollback(migrationId) {
  const migration = await getMigration(migrationId)
  const rollbackSQL = generateRollbackSQL(migration)
  await db.query(rollbackSQL)
}
```
### Application Rollback

```bash
#!/bin/bash
# rollback.sh
PREVIOUS_VERSION=$1

if [ -z "$PREVIOUS_VERSION" ]; then
  echo "Usage: $0 <previous-version>"
  exit 1
fi

echo "Rolling back to version $PREVIOUS_VERSION"

# Update container image
kubectl set image deployment/myapp myapp=myapp:$PREVIOUS_VERSION

# Wait for rollout
kubectl rollout status deployment/myapp

# Verify deployment
kubectl get pods -l app=myapp

echo "Rollback completed"
```
## Performance Optimization

### Load Balancing

```yaml
# load-balancer.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
```
### Auto Scaling

```yaml
# hpa.yml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
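Applying the autoscaler and observing its decisions (the HPA needs the metrics server installed in the cluster to read CPU and memory usage):

```shell
# Apply the HPA and watch current vs. target utilization
kubectl apply -f hpa.yml
kubectl get hpa myapp-hpa --watch

# After generating load, confirm the replica count has grown
kubectl get deployment myapp
```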
## Troubleshooting

### Common Issues

- **Port conflicts**: Ensure the required ports are available
- **Environment variables**: Verify all required variables are set
- **Database connections**: Check connection strings and credentials
- **SSL certificates**: Ensure certificates are valid and properly configured
- **Resource limits**: Monitor CPU and memory usage
### Debugging Tools

```bash
# Check application logs
kubectl logs -f deployment/myapp

# Debug networking
kubectl exec -it pod/myapp-xxx -- netstat -tlnp

# Check resource usage
kubectl top pods

# Describe deployment
kubectl describe deployment myapp
```
This tutorial provides a comprehensive guide to application deployment. Choose the strategies and platforms that best fit your application's requirements and constraints.