Deployment and Operations
Table of Contents
- Introduction
- Project Structure
- Core Components
- Architecture Overview
- Detailed Component Analysis
- Dependency Analysis
- Performance Considerations
- Troubleshooting Guide
- Conclusion
- Appendices
Introduction
This document provides comprehensive deployment and operations guidance for the bi-server module within the broader BI platform. It covers containerization, Kubernetes deployment, service definitions, orchestration, scaling, health checks, monitoring, logging, CI/CD integration, and operational procedures such as rolling updates and rollbacks. The content synthesizes patterns established across multiple services in the repository to present a consistent, repeatable operational model.
Project Structure
The bi-server module is a Kratos-based Go service that exposes HTTP and gRPC endpoints. It participates in a multi-service ecosystem where each service follows similar containerization and Kubernetes deployment patterns. The repository includes:
- A dedicated bi-server module with Kratos tooling and protocol generation targets
- Shared bi-common library referenced via replace directives
- Multiple Dockerfiles implementing multi-stage builds
- Kubernetes manifests for Deployments, Services, and CronJobs
- CI/CD scripts and documentation for automation and deployment validation
Diagram sources
- [bi-server Makefile]
- [bi-server go.mod]
- [bi-server go.sum]
- [bi-analysis Dockerfile]
- [bi-api-jushuitan Dockerfile]
- [bi-api-leke Dockerfile]
- [bi-basic Dockerfile]
- [bi-chat Dockerfile]
- [bi-cron Dockerfile]
- [bi-tenant Dockerfile]
- [bi-sys Dockerfile]
- [bi-analysis k8s deployment]
- [bi-analysis k8s service]
- [bi-api-jushuitan k8s deployment]
- [bi-api-jushuitan k8s service]
- [bi-api-leke k8s deployment]
- [bi-api-leke k8s service]
- [bi-basic k8s deployment]
- [bi-basic k8s service]
- [bi-chat k8s deployment]
- [bi-chat k8s service]
- [bi-cron k8s cronjob]
- [bi-tenant k8s deployment]
- [bi-tenant k8s service]
- [bi-plan-taoxi k8s deployment]
- [bi-plan-taoxi k8s service]
- [bi-sys k8s deployment]
- [bi-sys k8s service]
- [bi-template k8s deployment]
- [bi-template k8s service]
- [bi-notify k8s deployment]
- [bi-notify k8s service]
Core Components
- bi-server module: Kratos-based Go service with protocol generation and build targets.
- Shared bi-common library: Reused across services via replace directives.
- Multi-stage Dockerfiles: Standardized across services for secure, minimal runtime images.
- Kubernetes manifests: Consistent Deployment, Service, and CronJob patterns for workloads.
- CI/CD scripts and documentation: Automation for building, packaging, and deploying services.
Key operational patterns:
- Protocol generation and OpenAPI docs via Makefile targets.
- Multi-stage Docker builds with builder stage and final runtime stage.
- Kubernetes Deployments with ConfigMap/Secret injection and imagePullSecrets.
- Services exposing HTTP and gRPC ports; in production, prefer ClusterIP with Ingress over NodePort.
Section sources
- [bi-server Makefile]
- [bi-server go.mod]
- [bi-server go.sum]
- [bi-analysis Dockerfile]
- [bi-basic Dockerfile]
- [bi-api-jushuitan Dockerfile]
- [bi-api-leke Dockerfile]
- [bi-chat Dockerfile]
- [bi-cron Dockerfile]
- [bi-tenant Dockerfile]
- [bi-sys Dockerfile]
- [bi-analysis k8s deployment]
- [bi-analysis k8s service]
Architecture Overview
The deployment architecture follows a consistent pattern across services:
- Build-time: Multi-stage Dockerfile compiles binaries in a builder stage.
- Runtime: Minimal base image copies only necessary artifacts and configuration.
- Orchestration: Kubernetes Deployments manage replicas; Services expose ports; Secrets/ConfigMaps inject environment variables.
- Observability: Centralized monitoring and API gateway support via bi-intra.
Diagram sources
- [bi-analysis Dockerfile]
- [bi-sys Dockerfile]
- [bi-analysis k8s deployment]
- [bi-analysis k8s service]
- [bi-infra README]
Detailed Component Analysis
bi-server Containerization and Build
- Multi-stage build: Builder stage compiles the Go binary with trimpath and ldflags; final stage copies only runtime artifacts and sets timezone.
- Ports exposed: HTTP and gRPC ports are defined per service; adjust as needed for bi-server.
- Base image: Minimal Alpine-based runtime image; ensure private registry credentials from the builder stage are not leaked into the final image.
Operational guidance:
- Use ARG for environment selection and pass APP_ENV to the CMD invocation.
- Keep configs under a versioned configs directory and copy into the final image.
- Set EXPOSE for both HTTP and gRPC ports.
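The guidance above can be sketched as a multi-stage Dockerfile. The module path, config directory, and port numbers (8000 for HTTP, 9000 for gRPC, common Kratos defaults) are assumptions and should be adjusted to match the actual bi-server layout.

```dockerfile
# Builder stage: compile the Go binary with trimpath and stripped symbols
FROM golang:1.21-alpine AS builder
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -trimpath -ldflags "-s -w" -o /out/bi-server ./cmd/bi-server

# Runtime stage: minimal image with only the binary, configs, and timezone data
FROM alpine:3.19
RUN apk add --no-cache ca-certificates tzdata
ENV TZ=Asia/Shanghai
# ARG selects the environment at build time; APP_ENV reaches the process at runtime
ARG APP_ENV=prod
ENV APP_ENV=${APP_ENV}
WORKDIR /app
COPY --from=builder /out/bi-server .
COPY configs/ ./configs/
EXPOSE 8000 9000
CMD ["./bi-server", "-conf", "./configs"]
```

Because only `/out/bi-server` and the configs directory are copied into the runtime stage, builder-stage credentials and source code never reach the final image.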
Section sources
- [bi-server Dockerfile]
- [bi-analysis Dockerfile]
- [bi-basic Dockerfile]
- [bi-api-jushuitan Dockerfile]
- [bi-api-leke Dockerfile]
- [bi-chat Dockerfile]
- [bi-cron Dockerfile]
- [bi-tenant Dockerfile]
- [bi-sys Dockerfile]
Kubernetes Deployment and Service Definitions
- Deployment: Define replicas, resource requests/limits, imagePullSecrets, and environment variables injected via ConfigMap/Secret.
- Service: NodePort for development/testing; production should use ClusterIP with Ingress for external access.
- Rolling updates: Use RollingUpdate strategy with maxUnavailable and maxSurge tuned per workload.
- Probes: Add readiness/liveness probes aligned with HTTP/gRPC health endpoints.
Patterns across services:
- ConfigMap holds application YAMLs; Secret holds sensitive credentials.
- imagePullSecrets configured at the Pod level to pull from private registries.
- Services expose both HTTP and gRPC ports.
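A minimal Deployment and Service pair following these patterns might look as below; the names, image reference, ports, and resource values are illustrative, not taken from the repository.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bi-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bi-server
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep full capacity during updates
      maxSurge: 1
  template:
    metadata:
      labels:
        app: bi-server
    spec:
      imagePullSecrets:
        - name: registry-secret   # pulls from the private registry
      containers:
        - name: bi-server
          image: registry.example.com/bi/bi-server:v1.0.0
          ports:
            - name: http
              containerPort: 8000
            - name: grpc
              containerPort: 9000
          envFrom:
            - configMapRef:
                name: bi-server-config   # application YAML settings
            - secretRef:
                name: bi-server-secret   # sensitive credentials
          resources:
            requests: { cpu: 100m, memory: 128Mi }
            limits: { cpu: 500m, memory: 512Mi }
---
apiVersion: v1
kind: Service
metadata:
  name: bi-server
spec:
  type: ClusterIP   # NodePort for dev/testing; ClusterIP + Ingress for production
  selector:
    app: bi-server
  ports:
    - { name: http, port: 8000, targetPort: http }
    - { name: grpc, port: 9000, targetPort: grpc }
```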
Section sources
- [bi-analysis k8s deployment]
- [bi-analysis k8s service]
- [bi-api-jushuitan k8s deployment]
- [bi-api-jushuitan k8s service]
- [bi-api-leke k8s deployment]
- [bi-api-leke k8s service]
- [bi-basic k8s deployment]
- [bi-basic k8s service]
- [bi-chat k8s deployment]
- [bi-chat k8s service]
- [bi-cron k8s cronjob]
- [bi-tenant k8s deployment]
- [bi-tenant k8s service]
- [bi-plan-taoxi k8s deployment]
- [bi-plan-taoxi k8s service]
- [bi-sys k8s deployment]
- [bi-sys k8s service]
- [bi-template k8s deployment]
- [bi-template k8s service]
- [bi-notify k8s deployment]
- [bi-notify k8s service]
Health Checks, Readiness, and Liveness
- Implement HTTP health endpoints for readiness/liveness.
- Configure Kubernetes probes pointing to the health endpoint.
- For gRPC services, consider a gRPC probe or a small wrapper health handler.
- Tune probe thresholds and timeouts based on service latency and cold-start characteristics.
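Applied to a container spec, the probe guidance above can be sketched as follows; the `/healthz` path and port numbers are assumptions about the service's health endpoint.

```yaml
# Container-level probe configuration (fragment of a Pod template)
readinessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 5      # allow for cold start before first check
  periodSeconds: 10
  timeoutSeconds: 2
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
  periodSeconds: 20
  failureThreshold: 3         # restart only after repeated failures
# For gRPC services on Kubernetes 1.24+, a native gRPC probe is an option:
# livenessProbe:
#   grpc:
#     port: 9000
```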
Scaling Strategies
- Horizontal Pod Autoscaler (HPA): Scale on CPU/memory or custom metrics.
- PodDisruptionBudget (PDB): Protect availability during voluntary maintenance.
- Resource limits: Set requests/limits per service; ensure QoS class alignment.
- Blue-green or canary: Use separate Deployments with label selectors and Service switching.
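The HPA and PDB strategies above can be expressed as a pair of manifests; the replica bounds and the 70% CPU target are illustrative starting points, not tuned values.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: bi-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: bi-server
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70% of requests
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: bi-server
spec:
  minAvailable: 1   # keep at least one pod up during voluntary disruptions
  selector:
    matchLabels:
      app: bi-server
```

Note that CPU utilization targets are computed against resource requests, so HPA behavior depends on the requests set in the Deployment.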
Monitoring, Logging, and Observability
- Centralized logging: Ship container stdout/stderr to a log aggregator; annotate with service and environment.
- Metrics: Expose Prometheus metrics; configure scraping via ServiceMonitors or Prometheus Operator.
- Tracing: Enable OpenTelemetry auto-instrumentation or manual tracing spans.
- API Gateway: Use Apisix for routing, rate limiting, and observability; integrate with Grafana dashboards.
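If the Prometheus Operator is in use, scraping can be declared with a ServiceMonitor such as the sketch below; the `/metrics` path, port name, and label selectors are assumptions that must match the Service being scraped.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: bi-server
  labels:
    release: prometheus   # must match the Prometheus instance's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: bi-server      # selects the bi-server Service
  endpoints:
    - port: http          # named port on the Service
      path: /metrics
      interval: 30s
```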
Section sources
- [bi-intra apisix services]
- [bi-intra apisix manager script]
- [bi-intra apisix prometheus rule]
- [bi-intra apisix gateway lb test]
- [bi-intra apisix gateway lb svc]
- [bi-intra apisix gateway lb endpoints]
- [bi-infra README]
Rolling Updates, Blue-Green, and Rollback Procedures
- Rolling Update: Adjust maxUnavailable/maxSurge; ensure new pods pass readiness before old ones terminate.
- Blue-Green: Deploy green (new) alongside blue (current), switch Service selector atomically, then terminate blue.
- Rollback: Use kubectl rollout undo or redeploy previous image tag; maintain immutable tags for safe rollbacks.
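The blue-green switch can be sketched as a Service whose selector carries a version label; two Deployments (blue and green) run side by side, and traffic cuts over when the selector is patched. All names and labels here are illustrative.

```yaml
# Traffic follows the selector: patch "version: blue" to "version: green" to
# cut over atomically, and patch it back to roll back. For RollingUpdate
# Deployments, rollback is instead: kubectl rollout undo deployment/bi-server
apiVersion: v1
kind: Service
metadata:
  name: bi-server
spec:
  selector:
    app: bi-server
    version: blue   # change to "green" to switch traffic to the new Deployment
  ports:
    - { name: http, port: 8000, targetPort: http }
    - { name: grpc, port: 9000, targetPort: grpc }
```

Keeping image tags immutable ensures that either rollback path redeploys exactly the artifact that was previously running.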
CI/CD Pipeline Integration and Automated Testing
- Build: Multi-stage Docker builds; tag images with semantic versions and commit hashes.
- Test: Run unit/integration tests in CI; validate OpenAPI docs and protocol generation.
- Deploy: Use envsubst to inject environment-specific values; apply ConfigMap/Secrets first, then Services/Deployments.
- Validation: Post-deploy smoke tests against health endpoints and basic API calls.
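These stages can be sketched as a CI pipeline fragment. The syntax below is GitLab CI-style and purely illustrative; the repository's actual CI system, stage names, registry URL, and Makefile targets may differ.

```yaml
stages: [build, test, deploy]

build:
  stage: build
  script:
    # Tag with version and commit hash so any image can be rolled back to
    - docker build -t registry.example.com/bi/bi-server:${CI_COMMIT_TAG:-dev}-${CI_COMMIT_SHORT_SHA} .
    - docker push registry.example.com/bi/bi-server:${CI_COMMIT_TAG:-dev}-${CI_COMMIT_SHORT_SHA}

test:
  stage: test
  script:
    - go test ./...
    # Regenerate protocol/OpenAPI artifacts and fail if they drift from the commit
    # (assumes a Makefile target for generation, e.g. `make api`)
    - make api && git diff --exit-code

deploy:
  stage: deploy
  script:
    # Inject environment-specific values, then apply config before workloads
    - envsubst < k8s/deployment.tpl.yaml | kubectl apply -f -
    - kubectl rollout status deployment/bi-server --timeout=120s
    # Post-deploy smoke test against the health endpoint (URL is an assumption)
    - curl -fsS http://bi-server.internal/healthz
```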
Dependency Analysis
The bi-server module depends on the shared bi-common library and Kratos tooling. go.mod replace directives point bi-common at its in-repo path, so local development resolves the same code as the monorepo rather than a published version.
Performance Considerations
- Binary optimization: Use trimpath and ldflags to reduce binary size and improve reproducibility.
- Base image: Prefer minimal Alpine or distroless images to minimize attack surface.
- Resource tuning: Start with conservative requests/limits; monitor CPU and memory; adjust based on load tests.
- Network: Use ClusterIP + Ingress for production; enable connection pooling and keep-alive where applicable.
- Caching: Leverage Redis/Memcached via bi-common cache clients; tune TTL and eviction policies.
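The binary-optimization point can be sketched as a Makefile build target; the version variable and output paths are assumptions following common Kratos project layout.

```makefile
VERSION := $(shell git describe --tags --always)

.PHONY: build
build:
	mkdir -p bin/
	# -trimpath removes local paths for reproducibility; -s -w strip symbol
	# and DWARF tables to shrink the binary; -X embeds the version string
	CGO_ENABLED=0 go build -trimpath \
		-ldflags "-s -w -X main.Version=$(VERSION)" \
		-o ./bin/ ./...
```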
Troubleshooting Guide
Common deployment issues and resolutions:
- Image pull failures: Verify imagePullSecrets and registry credentials; confirm image tag matches Deployment.
- Port conflicts: Ensure HTTP/gRPC ports are unique across services; check Service and Pod port mappings.
- Config injection: Confirm ConfigMap/Secret keys match environment variable names; verify envFrom usage.
- Health check failures: Validate health endpoint responses; adjust probe initialDelaySeconds and timeoutSeconds.
- Rollout stuck: Check Pod events and logs; resolve crashing containers before proceeding with updates.
Conclusion
The bi-server module follows a standardized deployment and operations model consistent with other services in the BI platform. By adhering to multi-stage Docker builds, centralized Kubernetes manifests, robust CI/CD automation, and strong observability practices, teams can achieve reliable, scalable, and maintainable deployments across environments.
Appendices
Kubernetes Manifests Reference
- ConfigMap: Holds application YAMLs; mount or inject via envFrom.
- Secret: Stores sensitive credentials; mount as files or inject as env vars.
- Deployment: Defines replicas, rolling update strategy, resource limits, and imagePullSecrets.
- Service: Exposes HTTP/gRPC ports; use NodePort for dev, ClusterIP + Ingress for prod.
API Gateway and Monitoring
- Apisix: Provides routing, TLS termination, rate limiting, and observability; integrates with Prometheus and Grafana.
- Grafana: Dashboards for service metrics, latency, and error rates.
- Kafka: Event streaming for decoupled processing; configure consumers/producers via bi-common MQ clients.