
Preface #

In production settings we typically manage several deployment environments:

- Development (dev) - local debugging for developers
- Testing (test) - QA verification
- Staging (staging) - final checks before a release
- Production (prod) - the live, user-facing service

Each environment has its own configuration, resource limits, and deployment strategy. This article walks through building a complete multi-environment deployment setup, from single-host Docker Compose all the way to a Kubernetes cluster.
Environment planning #

Environment comparison #

| Environment | Purpose | Resources | Deployment method | Update frequency |
|---|---|---|---|---|
| dev | Local development and debugging | single host | Docker Compose | anytime |
| test | Automated testing | 2-3 nodes | Docker Swarm | several times a day |
| staging | Pre-release verification | same as prod | Kubernetes | before each release |
| prod | Live service | multi-node cluster | Kubernetes | on demand |
Configuration differences #

| Environment | Replicas | CPU / memory | Database | Log level |
|---|---|---|---|---|
| dev | 1 | 0.5C / 512M | SQLite | DEBUG |
| test | 2 | 1C / 1G | MySQL | INFO |
| staging | 2 | 2C / 4G | MySQL primary/replica | INFO |
| prod | 3+ | 4C / 8G+ | MySQL cluster | WARN |

1. Single-host deployment: the Docker Compose approach #
1.1 Project layout #

```text
myapp/
├── docker/
│   ├── Dockerfile              # application image
│   └── nginx/
│       └── nginx.conf          # Nginx configuration
├── docker-compose.yml          # base configuration
├── docker-compose.dev.yml      # development overrides
├── docker-compose.test.yml     # test overrides
├── docker-compose.prod.yml     # production overrides
├── .env.dev                    # development variables
├── .env.test                   # test variables
├── .env.prod                   # production variables
└── app/
    └── ...                     # application code
```

1.2 Writing the Dockerfile #
```dockerfile
# docker/Dockerfile
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install all dependencies here: the build step needs devDependencies too
RUN npm ci
COPY . .
RUN npm run build
# Drop devDependencies before node_modules is copied into the final image
RUN npm prune --omit=dev

# Production image
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./
ENV NODE_ENV=production
EXPOSE 3000
USER node
CMD ["node", "dist/index.js"]
```

1.3 The base docker-compose.yml #
```yaml
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: docker/Dockerfile
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=${NODE_ENV:-development}
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
    depends_on:
      - db
      - redis
    networks:
      - app-network
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  db:
    image: postgres:14-alpine
    environment:
      - POSTGRES_DB=${DB_NAME:-myapp}
      - POSTGRES_USER=${DB_USER:-myapp}
      - POSTGRES_PASSWORD=${DB_PASSWORD:-secret}
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    networks:
      - app-network

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - app
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  postgres-data:
```

1.4 Per-environment overrides #
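A quick note on how the override files in this section combine: `docker-compose -f a.yml -f b.yml` merges the files left to right. As a rule of thumb, single-value options (such as `command`, `image`, or an individual `environment` key) are replaced by the later file, while multi-value options (such as `ports`) are appended. A minimal illustration with a hypothetical service:

```yaml
# base.yml
services:
  app:
    command: node dist/index.js
    ports:
      - "3000:3000"

# override.yml -- when layered on top of base.yml:
#   command is replaced -> npm run dev
#   ports are appended  -> both 3000:3000 and 9229:9229
services:
  app:
    command: npm run dev
    ports:
      - "9229:9229"
```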
Development (docker-compose.dev.yml):

```yaml
version: '3.8'

services:
  app:
    build:
      target: builder        # reuse the builder stage as the dev image
    volumes:
      - ./app:/app/src       # mount source code for hot reload
      - /app/node_modules
    environment:
      - DEBUG=true
      - LOG_LEVEL=debug
    command: npm run dev     # start in development mode

  db:
    ports:
      - "5432:5432"          # expose the database port for local tools
```

Production (docker-compose.prod.yml):
```yaml
version: '3.8'

services:
  app:
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    environment:
      - NODE_ENV=production
      - LOG_LEVEL=warn
    logging:
      driver: json-file
      options:
        max-size: "100m"
        max-file: "3"

  nginx:
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./ssl:/etc/nginx/ssl:ro   # TLS certificates
```

Note that the `deploy:` block is fully honored under Swarm mode (`docker stack deploy`); plain Compose applies only parts of it, such as resource limits, depending on the version.

1.5 Managing environment variables #
.env.dev:

```shell
NODE_ENV=development
DATABASE_URL=postgresql://myapp:secret@db:5432/myapp
REDIS_URL=redis://redis:6379
LOG_LEVEL=debug
DEBUG=true
```

.env.prod:

```shell
NODE_ENV=production
DATABASE_URL=postgresql://myapp:${DB_PASSWORD}@prod-db:5432/myapp
REDIS_URL=redis://prod-redis:6379
LOG_LEVEL=warn
DB_PASSWORD=your-secure-password-here
```

1.6 Deployment commands #
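One guardrail before running anything: `.env.prod` holds real credentials and must never be committed. Keep the env files out of version control and commit only a template (`.env.example` is a suggested addition, not part of the layout above):

```text
# .gitignore
.env.*
!.env.example
```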
```shell
# Development
docker-compose -f docker-compose.yml -f docker-compose.dev.yml --env-file .env.dev up -d

# Test
docker-compose -f docker-compose.yml -f docker-compose.test.yml --env-file .env.test up -d

# Production
docker-compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.prod up -d

# Check status
docker-compose ps

# Tail logs
docker-compose logs -f app

# Restart a service
docker-compose restart app

# Tear everything down (including volumes)
docker-compose down -v
```

2. Kubernetes cluster deployment #
2.1 Cluster layout #

```text
┌─────────────────────────────────────────────────────┐
│                 Kubernetes Cluster                  │
├──────────────────────────┬──────────────────────────┤
│ Namespace: dev           │ Namespace: test          │
│  app: 1 replica          │  app: 2 replicas         │
│  db:  SQLite             │  db:  MySQL              │
├──────────────────────────┼──────────────────────────┤
│ Namespace: staging       │ Namespace: prod          │
│  app: 2 replicas         │  app: 3+ replicas        │
│  db:  MySQL primary/     │  db:  MySQL cluster      │
│       replica            │                          │
└──────────────────────────┴──────────────────────────┘
```

2.2 Kubernetes resource file layout #
```text
k8s/
├── base/                      # shared base configuration
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── configmap.yaml
│   ├── secret.yaml
│   └── pvc.yaml
├── overlays/                  # per-environment overlays
│   ├── dev/
│   │   └── kustomization.yaml
│   ├── test/
│   │   └── kustomization.yaml
│   ├── staging/
│   │   └── kustomization.yaml
│   └── prod/
│       └── kustomization.yaml
└── scripts/
    └── deploy.sh
```

2.3 Base Deployment #
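A note before the manifest: the base pins `image: myapp:latest`, which makes rollbacks ambiguous. Each overlay can pin a concrete tag with Kustomize's `images` transformer instead; a sketch (the tag value is hypothetical, and CI can rewrite it per commit):

```yaml
# In an overlay kustomization.yaml
images:
  - name: myapp
    newName: registry.example.com/myapp
    newTag: "1.4.2"   # hypothetical release tag
```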
```yaml
# k8s/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: myapp:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              valueFrom:
                configMapKeyRef:
                  name: myapp-config
                  key: NODE_ENV
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secret
                  key: DATABASE_URL
          resources:
            requests:
              cpu: "500m"
              memory: "512Mi"
            limits:
              cpu: "2000m"
              memory: "2Gi"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
```

2.4 Service configuration #
```yaml
# k8s/base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
      name: http
  selector:
    app: myapp
```

2.5 ConfigMap and Secret #
```yaml
# k8s/base/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  NODE_ENV: "production"
  LOG_LEVEL: "info"
  API_TIMEOUT: "30000"
```

```yaml
# k8s/base/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret
type: Opaque
stringData:
  DATABASE_URL: "postgresql://user:password@db:5432/myapp"
  REDIS_URL: "redis://redis:6379"
  JWT_SECRET: "your-jwt-secret-here"
```

2.6 Per-environment overlays (Kustomize) #
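Overlays can be rendered locally before anything touches a cluster, which is the quickest way to debug the files below:

```shell
# Render an overlay without applying it
kubectl kustomize k8s/overlays/dev/

# Preview what a prod apply would change on the live cluster
kubectl diff -k k8s/overlays/prod/
```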
Development (k8s/overlays/dev/kustomization.yaml):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: dev
namePrefix: dev-
replicas:
  - name: myapp
    count: 1
patchesStrategicMerge:   # deprecated in recent Kustomize; `patches:` is the successor
  - deployment-patch.yaml
configMapGenerator:
  - name: myapp-config
    # note: behavior "merge" expects myapp-config to come from a
    # configMapGenerator in the base, not from a plain resource file
    behavior: merge
    literals:
      - NODE_ENV=development
      - LOG_LEVEL=debug
```

Development patch (k8s/overlays/dev/deployment-patch.yaml):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              cpu: "100m"
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
```

Production (k8s/overlays/prod/kustomization.yaml):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - hpa.yaml        # a new resource, not a patch of an existing one
namespace: prod
namePrefix: prod-
replicas:
  - name: myapp
    count: 3        # initial count; the HPA takes over scaling from here
patchesStrategicMerge:
  - deployment-patch.yaml
configMapGenerator:
  - name: myapp-config
    behavior: merge
    literals:
      - NODE_ENV=production
      - LOG_LEVEL=warn
```

Production patch (k8s/overlays/prod/deployment-patch.yaml):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: myapp
                topologyKey: kubernetes.io/hostname
      containers:
        - name: app
          resources:
            requests:
              cpu: "2000m"
              memory: "4Gi"
            limits:
              cpu: "4000m"
              memory: "8Gi"
```

2.7 Horizontal Pod Autoscaling (HPA) #
```yaml
# k8s/overlays/prod/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```

2.8 Ingress configuration #
```yaml
# k8s/overlays/prod/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx   # replaces the deprecated kubernetes.io/ingress.class annotation
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```

3. An automated deployment pipeline #
3.1 GitLab CI/CD configuration #

```yaml
# .gitlab-ci.yml
stages:
  - build
  - test
  - deploy

variables:
  DOCKER_REGISTRY: registry.example.com
  IMAGE_NAME: myapp

# Build the image
build:
  stage: build
  script:
    - docker build -t ${DOCKER_REGISTRY}/${IMAGE_NAME}:${CI_COMMIT_SHA} .
    - docker push ${DOCKER_REGISTRY}/${IMAGE_NAME}:${CI_COMMIT_SHA}
    - docker tag ${DOCKER_REGISTRY}/${IMAGE_NAME}:${CI_COMMIT_SHA} ${DOCKER_REGISTRY}/${IMAGE_NAME}:latest
    - docker push ${DOCKER_REGISTRY}/${IMAGE_NAME}:latest
  only:
    - main
    - develop

# Deploy to test
deploy-test:
  stage: deploy
  script:
    - kubectl config use-context test-cluster
    - kubectl apply -k k8s/overlays/test/
    - kubectl rollout status deployment/test-myapp -n test
  environment:
    name: test
    url: https://test.myapp.example.com
  only:
    - develop

# Deploy to staging
deploy-staging:
  stage: deploy
  script:
    - kubectl config use-context staging-cluster
    - kubectl apply -k k8s/overlays/staging/
    - kubectl rollout status deployment/staging-myapp -n staging
  environment:
    name: staging
    url: https://staging.myapp.example.com
  only:
    - main
  when: manual

# Deploy to production
deploy-prod:
  stage: deploy
  script:
    - kubectl config use-context prod-cluster
    - kubectl apply -k k8s/overlays/prod/
    - kubectl rollout status deployment/prod-myapp -n prod
  environment:
    name: production
    url: https://myapp.example.com
  only:
    - main
  when: manual
  needs:
    - deploy-staging
```

3.2 Deployment script #
```shell
#!/bin/bash
# k8s/scripts/deploy.sh
set -e

ENV=${1:-dev}
NAMESPACE=$ENV

echo "🚀 Deploying to the ${ENV} environment..."

# Pick the cluster context
case $ENV in
  dev)     CONTEXT="dev-cluster" ;;
  test)    CONTEXT="test-cluster" ;;
  staging) CONTEXT="staging-cluster" ;;
  prod)    CONTEXT="prod-cluster" ;;
  *)
    echo "❌ Unknown environment: $ENV"
    exit 1
    ;;
esac

# Switch clusters
echo "📡 Switching to cluster: $CONTEXT"
kubectl config use-context "$CONTEXT"

# Create the namespace if it does not exist
echo "📦 Ensuring namespace: $NAMESPACE"
kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -

# Apply the configuration (the overlay already sets the namespace,
# so no -n flag here: it would conflict with the kustomization)
echo "⚙️ Applying Kubernetes manifests..."
kubectl apply -k "k8s/overlays/${ENV}/"

# Wait for the rollout to finish
echo "⏳ Waiting for the rollout..."
kubectl rollout status "deployment/${ENV}-myapp" -n "$NAMESPACE"

# Check service status
echo "🔍 Checking service status..."
kubectl get pods -n "$NAMESPACE" -l app=myapp
kubectl get svc -n "$NAMESPACE" -l app=myapp

echo "✅ Deployment complete!"
echo ""
echo "URL:"
case $ENV in
  dev)     echo "  http://localhost:3000" ;;
  test)    echo "  https://test.myapp.example.com" ;;
  staging) echo "  https://staging.myapp.example.com" ;;
  prod)    echo "  https://myapp.example.com" ;;
esac
```

4. Configuration management best practices #
4.1 Handling sensitive data #

❌ Don't do this:

```yaml
# Bad: plaintext password in the manifest
env:
  - name: DB_PASSWORD
    value: "super-secret-password"
```

✅ Do this instead:
```shell
# Use SealedSecrets (recommended)
kubeseal --format yaml < secret.yaml > sealed-secret.yaml
kubectl apply -f sealed-secret.yaml
```

```shell
# Or manage secrets out-of-band with kubectl
kubectl create secret generic myapp-secret \
  --from-literal=DATABASE_URL="postgresql://..." \
  --from-literal=JWT_SECRET="..." \
  -n prod
```

4.2 Versioning configuration #
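The point of versioned ConfigMap names is that the Deployment references one specific version, so a config change becomes an explicit rollout and rolling config back means pointing back at the previous name. A sketch of the consuming side (resource names follow this article's examples):

```yaml
# Deployment fragment: pin to one ConfigMap version
spec:
  template:
    spec:
      containers:
        - name: app
          envFrom:
            - configMapRef:
                name: myapp-config-v1   # bump to myapp-config-v2 on the next change
```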
```shell
# Create a versioned ConfigMap
kubectl create configmap myapp-config-v1 \
  --from-literal=VERSION=1.0.0 \
  --from-literal=FEATURE_FLAG_A=true
```

4.3 Layering environment variables #
```yaml
# Kustomize configMapGenerator in an overlay
configMapGenerator:
  - name: myapp-config
    behavior: merge        # merge with the base instead of replacing it
    literals:
      - NEW_FEATURE=enabled
```

5. Monitoring and logging #
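The ServiceMonitor in 5.1 only works if the app actually serves `/metrics` in the Prometheus text exposition format. A real Node service would likely use prom-client; a dependency-free sketch of the format itself (metric and handler names hypothetical):

```javascript
// Render one metric in the Prometheus text exposition format, e.g.
//   http_requests_total{app="myapp"} 42
function renderMetric(name, labels, value) {
  const labelStr = Object.entries(labels)
    .map(([k, v]) => `${k}="${v}"`)
    .join(',');
  return labelStr ? `${name}{${labelStr}} ${value}` : `${name} ${value}`;
}

let requestCount = 0;

// Wire into any Node http server as the /metrics handler
function metricsHandler(res) {
  requestCount++;
  res.writeHead(200, { 'Content-Type': 'text/plain; version=0.0.4' });
  res.end(renderMetric('http_requests_total', { app: 'myapp' }, requestCount) + '\n');
}
```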
5.1 Prometheus 监控 #
# k8s/base/servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: myapp
labels:
app: myapp
spec:
selector:
matchLabels:
app: myapp
endpoints:
- port: http
path: /metrics
interval: 30s5.2 日志收集 #
```yaml
# Collect container logs with Fluentd
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json
      </parse>
    </source>
```

6. FAQ #
Q1: How do I use a different database per environment? #

A: Store each environment's connection string in a Secret:

```shell
# Development - SQLite
kubectl create secret generic myapp-secret -n dev \
  --from-literal=DATABASE_URL="sqlite:///app.db"

# Production - MySQL cluster
kubectl create secret generic myapp-secret -n prod \
  --from-literal=DATABASE_URL="mysql://user:pass@prod-mysql:3306/myapp"
```

Q2: How do I roll back quickly? #
A: Use Kubernetes' rollout commands:

```shell
# List revision history
kubectl rollout history deployment/prod-myapp -n prod

# Roll back to the previous revision
kubectl rollout undo deployment/prod-myapp -n prod

# Roll back to a specific revision
kubectl rollout undo deployment/prod-myapp -n prod --to-revision=2
```

Q3: How do I do canary releases? #
A: Use Istio for traffic management. The VirtualService below splits traffic 90/10 between two subsets; note that those subsets must also be defined by a DestinationRule (added here, with a hypothetical `version` pod label):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp.example.com
  http:
    - route:
        - destination:
            host: myapp
            subset: v1
          weight: 90
        - destination:
            host: myapp
            subset: v2
          weight: 10
---
# Subsets map to pod labels
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```

Summary #
Core principles of multi-environment deployment:

- Configuration separation - environment config fully decoupled from code
- Progressive rollout - dev → test → staging → prod
- Automation - CI/CD pipelines drive every deployment
- Rollback-ability - fast, well-rehearsed rollback paths
- Observability - solid monitoring and logging

Key takeaways:

- ✅ Docker Compose fits single-host and development setups
- ✅ Kubernetes fits production and high-availability needs
- ✅ Kustomize manages per-environment differences
- ✅ CI/CD automates deployment
- ✅ Secrets hold sensitive data

Next up: "Kubernetes Production Tuning in Practice"