Redis Configuration Guide for GCP Deployment
Overview
This guide explains how to configure Redis for the FlowPOS backend application in staging and production environments on Google Cloud Platform (GCP).
Current State
The backend application uses Redis with BullMQ for job queue processing, specifically in the PDF module for preview generation tasks. The application expects the following environment variables:
- REDIS_HOST - Redis server hostname (defaults to "localhost")
- REDIS_PORT - Redis server port (defaults to "6379")
- REDIS_PASSWORD - Redis authentication password (optional)
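In code, these variables typically feed BullMQ's connection options. A minimal sketch (the helper is illustrative, not actual FlowPOS code):

```typescript
// Shape of the connection options BullMQ accepts (host/port/password).
interface RedisConnectionOptions {
  host: string;
  port: number;
  password?: string;
}

// Illustrative helper (not from the codebase): derive connection options
// from the environment, using the same defaults the backend documents.
function buildRedisConnection(env: Record<string, string | undefined>): RedisConnectionOptions {
  const options: RedisConnectionOptions = {
    host: env.REDIS_HOST ?? "localhost",
    port: parseInt(env.REDIS_PORT ?? "6379", 10),
  };
  // Omit the password when unset or empty (e.g. Memorystore without AUTH enabled).
  if (env.REDIS_PASSWORD) {
    options.password = env.REDIS_PASSWORD;
  }
  return options;
}
```

The resulting object is what you would pass as `connection` when constructing a BullMQ queue, e.g. `new Queue("pdf-preview", { connection: buildRedisConnection(process.env) })` (queue name illustrative).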
Current Issue: The GCP deployment workflows do not configure or provide Redis connectivity.
Implementation Options
Option A: Cloud Memorystore (Recommended for Production)
Pros:
- Fully managed Redis service by Google Cloud
- High availability and automatic failover (Standard tier)
- Built-in monitoring and logging
- Automatic backups and point-in-time recovery
- Better performance and reliability for production
Cons:
- Higher cost compared to self-managed options
- Requires VPC connector (already configured: cloudrun-vpc-connector)
Setup Commands:
# Staging Environment (Basic tier - single node)
gcloud redis instances create flowpos-redis-staging \
--size=1 \
--region=us-central1 \
--redis-version=redis_7_0 \
--tier=basic \
--network=default \
--project=YOUR_PROJECT_ID
# Production Environment (Standard tier - includes an automatic failover replica)
# Note: read replicas (--replica-count) require a 5 GB minimum instance size,
# so they are omitted here; Standard tier already provisions a failover replica.
gcloud redis instances create flowpos-redis-production \
--size=2 \
--region=us-central1 \
--redis-version=redis_7_0 \
--tier=standard \
--network=default \
--project=YOUR_PROJECT_ID
# Get connection details
gcloud redis instances describe flowpos-redis-staging --region=us-central1
gcloud redis instances describe flowpos-redis-production --region=us-central1
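If you script the lookup, `gcloud` can print just the fields you need (field names assume the standard Memorystore instance resource):

```shell
# Print only the internal IP and port of the staging instance
REDIS_HOST=$(gcloud redis instances describe flowpos-redis-staging \
  --region=us-central1 --format='value(host)')
REDIS_PORT=$(gcloud redis instances describe flowpos-redis-staging \
  --region=us-central1 --format='value(port)')
echo "$REDIS_HOST:$REDIS_PORT"
```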
Option B: Cloud Run Redis Container
Pros:
- More cost-effective for development/staging
- Easier to set up
- No VPC requirements
Cons:
- Not suitable for production (no persistence guarantees)
- Cloud Run containers are stateless, so all data is lost on container restart
- Cloud Run ingress only accepts HTTP-based protocols, so clients cannot reach a raw Redis TCP port through the service URL; treat this option as experimental
Setup:
Create a deploy/gcp/redis.Dockerfile:
FROM redis:7-alpine
EXPOSE 6379
# Shell form so $REDIS_PASSWORD is expanded at runtime
# (exec-form CMD ["...", "${VAR}"] does NOT substitute environment variables)
CMD redis-server --requirepass "$REDIS_PASSWORD"
Deploy to Cloud Run:
gcloud run deploy flowpos-redis-staging \
--image=redis:7-alpine \
--region=us-central1 \
--platform=managed \
--memory=1Gi \
--allow-unauthenticated \
--port=6379
Required Configuration Changes
1. Update Backend Dockerfile
File: deploy/gcp/backend.Dockerfile
Add Redis arguments to the ARG section (after line 9):
ARG REDIS_HOST=localhost
ARG REDIS_PORT=6379
ARG REDIS_PASSWORD=
Add to the builder stage environment variables (after line 19):
ARG REDIS_HOST
ARG REDIS_PORT
ARG REDIS_PASSWORD
Add to the ENV section (after line 96):
ENV REDIS_HOST=$REDIS_HOST
ENV REDIS_PORT=$REDIS_PORT
ENV REDIS_PASSWORD=$REDIS_PASSWORD
2. Update GitHub Repository Secrets
Add the following secrets in your GitHub repository under Settings → Secrets and variables → Actions:
Staging Environment Secrets
- REDIS_HOST_STAGING - Example: 10.0.0.3 (Cloud Memorystore IP)
- REDIS_PORT_STAGING - Example: 6379
- REDIS_PASSWORD_STAGING - Leave empty if no auth (Memorystore AUTH is disabled by default)
Production Environment Secrets
- REDIS_HOST_PRODUCTION - Example: 10.0.0.4 (Cloud Memorystore IP)
- REDIS_PORT_PRODUCTION - Example: 6379
- REDIS_PASSWORD_PRODUCTION - Leave empty if no auth
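If you prefer the GitHub CLI over the web UI, the same secrets can be set from a terminal (values shown are placeholders; substitute your real Memorystore IPs):

```shell
gh secret set REDIS_HOST_STAGING --body "10.0.0.3"
gh secret set REDIS_PORT_STAGING --body "6379"
gh secret set REDIS_PASSWORD_STAGING --body ""
gh secret set REDIS_HOST_PRODUCTION --body "10.0.0.4"
gh secret set REDIS_PORT_PRODUCTION --body "6379"
gh secret set REDIS_PASSWORD_PRODUCTION --body ""
```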
3. Update Staging Deployment Workflow
File: .github/workflows/deploy-staging.yml
In the "Build and push backend Docker image" step (around line 164), add these build args:
- name: Build and push backend Docker image
run: |
IMAGE="$REGION-docker.pkg.dev/$PROJECT_ID/flowpos-backend/backend:${{ github.sha }}"
docker build \
-f deploy/gcp/backend.Dockerfile \
-t $IMAGE \
--build-arg NODE_VERSION=22-alpine \
--build-arg OS_FAMILY=alpine \
--build-arg FIREBASE_CLIENT_EMAIL=${{ secrets.FIREBASE_CLIENT_EMAIL }} \
--build-arg FIREBASE_PRIVATE_KEY=${{ secrets.FIREBASE_PRIVATE_KEY }} \
--build-arg FIREBASE_PROJECT_ID=${{ secrets.FIREBASE_PROJECT_ID }} \
--build-arg ENCRYPTION_KEY=${{ secrets.ENCRYPTION_KEY }} \
--build-arg SENDGRID_API_KEY=${{ secrets.SENDGRID_API_KEY }} \
--build-arg SENDGRID_FROM_EMAIL=${{ secrets.SENDGRID_FROM_EMAIL }} \
--build-arg REDIS_HOST=${{ secrets.REDIS_HOST_STAGING }} \
--build-arg REDIS_PORT=${{ secrets.REDIS_PORT_STAGING }} \
--build-arg REDIS_PASSWORD=${{ secrets.REDIS_PASSWORD_STAGING }} \
.
docker push $IMAGE
In the "Deploy backend to Cloud Run" step (around line 181), update the environment variables:
- name: Deploy backend to Cloud Run
run: |
gcloud run deploy flowpos-backend \
--image="$REGION-docker.pkg.dev/$PROJECT_ID/flowpos-backend/backend:${{ github.sha }}" \
--region=$REGION \
--platform=managed \
--project=$PROJECT_ID \
--memory=2Gi \
--cpu=2 \
--timeout=600 \
--concurrency=1 \
--vpc-connector=cloudrun-vpc-connector \
--set-env-vars=DATABASE_URL=${{ secrets.DATABASE_URL }},REDIS_HOST=${{ secrets.REDIS_HOST_STAGING }},REDIS_PORT=${{ secrets.REDIS_PORT_STAGING }},REDIS_PASSWORD=${{ secrets.REDIS_PASSWORD_STAGING }} \
--allow-unauthenticated
4. Update Production Deployment Workflow
File: .github/workflows/deploy-production.yml
Apply the same changes as staging, but use production secrets:
In the "Build and push backend Docker image" step (around line 164):
--build-arg REDIS_HOST=${{ secrets.REDIS_HOST_PRODUCTION }} \
--build-arg REDIS_PORT=${{ secrets.REDIS_PORT_PRODUCTION }} \
--build-arg REDIS_PASSWORD=${{ secrets.REDIS_PASSWORD_PRODUCTION }} \
In the "Deploy backend to Cloud Run" step (around line 181):
--set-env-vars=DATABASE_URL=${{ secrets.DATABASE_URL }},REDIS_HOST=${{ secrets.REDIS_HOST_PRODUCTION }},REDIS_PORT=${{ secrets.REDIS_PORT_PRODUCTION }},REDIS_PASSWORD=${{ secrets.REDIS_PASSWORD_PRODUCTION }} \
5. Update Premerge Validation Workflow
File: .github/workflows/premerge-validate.yml
In the "Docker build (builder stage or full)" step (around line 77), add dummy Redis values:
- name: Docker build (builder stage or full)
run: |
docker build \
-f deploy/gcp/backend.Dockerfile \
--build-arg NODE_VERSION=22-alpine \
--build-arg OS_FAMILY=alpine \
--build-arg FIREBASE_CLIENT_EMAIL=dummy \
--build-arg FIREBASE_PRIVATE_KEY=dummy \
--build-arg FIREBASE_PROJECT_ID=dummy \
--build-arg ENCRYPTION_KEY=dummy \
--build-arg SENDGRID_API_KEY=dummy \
--build-arg SENDGRID_FROM_EMAIL=dummy \
--build-arg REDIS_HOST=localhost \
--build-arg REDIS_PORT=6379 \
--build-arg REDIS_PASSWORD=dummy \
-t dummy-backend:pr .
6. Update Cloud Run Configuration File
File: deploy/gcp/cloudrun-config.yaml
Add Redis environment variables to the backend section:
# Backend service configuration
backend:
service_name: flowpos-backend
region: us-central1
memory: 1Gi
cpu: 1
min_instances: 1
max_instances: 10
environment_variables:
NODE_ENV: "staging"
DATABASE_URL: "" # To be set during deployment
FIREBASE_PROJECT_ID: "" # To be set during deployment
FIREBASE_CLIENT_EMAIL: "" # To be set during deployment
FIREBASE_PRIVATE_KEY: "" # To be set during deployment
GOOGLE_MAPS_API_KEY: "" # To be set during deployment
ENCRYPTION_KEY: "" # To be set during deployment
SENDGRID_API_KEY: "" # To be set during deployment
SENDGRID_FROM_EMAIL: "" # To be set during deployment
REDIS_HOST: "" # To be set during deployment
REDIS_PORT: "6379"
REDIS_PASSWORD: "" # To be set during deployment
7. Update Docker Compose (Local Development)
File: docker-compose.yml
The Redis service is already configured for local development (lines 59-70). No changes needed.
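For reference, a typical Redis service block looks like the following; the actual `docker-compose.yml` may differ in detail:

```yaml
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 5

volumes:
  redis-data:
```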
Implementation Checklist
Phase 1: GCP Infrastructure Setup
- Create Cloud Memorystore instance for staging
- Create Cloud Memorystore instance for production
- Note down the internal IP addresses
- Verify VPC connector (cloudrun-vpc-connector) exists and is in the same network
- Test connectivity from Cloud Run to Memorystore
Phase 2: GitHub Configuration
- Add REDIS_HOST_STAGING secret
- Add REDIS_PORT_STAGING secret (value: 6379)
- Add REDIS_PASSWORD_STAGING secret (if applicable)
- Add REDIS_HOST_PRODUCTION secret
- Add REDIS_PORT_PRODUCTION secret (value: 6379)
- Add REDIS_PASSWORD_PRODUCTION secret (if applicable)
Phase 3: Code Changes
- Update deploy/gcp/backend.Dockerfile
- Update .github/workflows/deploy-staging.yml
- Update .github/workflows/deploy-production.yml
- Update .github/workflows/premerge-validate.yml
- Update deploy/gcp/cloudrun-config.yaml
- Commit and push changes
Phase 4: Testing
- Test premerge validation workflow (PR to develop)
- Test staging deployment
- Verify backend logs show Redis connection
- Test PDF preview generation (uses Redis queue)
- Test production deployment
- Monitor for any Redis connection errors
Verification Steps
After deployment, verify Redis is working:
1. Check Cloud Run Logs
# Staging
gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=flowpos-backend AND resource.labels.location=us-central1 AND severity>=INFO" --limit=50 --format=json --project=YOUR_PROJECT_ID
# Look for messages indicating Redis connection
2. Test PDF Preview Generation
The PDF module uses Redis for queue processing. To test:
- Log into the application
- Navigate to PDF templates
- Generate a preview
- Check that the job processes successfully
3. Monitor Cloud Memorystore
# Check instance status
gcloud redis instances describe flowpos-redis-staging --region=us-central1
# View metrics in Google Cloud Console
# Navigate to: Memorystore for Redis → [instance] → Monitoring
Troubleshooting
Backend Cannot Connect to Redis
Symptoms:
- Backend fails to start
- Errors like "ECONNREFUSED" or "Redis connection failed"
Solutions:
- Verify the VPC connector is configured: --vpc-connector=cloudrun-vpc-connector
- Check the Redis IP address is correct (internal IP, not external)
- Verify Redis instance is in the same VPC network
- Check Cloud Run service has VPC egress set to "Route all traffic through VPC"
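To rule out networking quickly, a raw TCP probe that speaks just enough of the Redis protocol (RESP) to send PING can be run from any Node.js environment inside the VPC. This is an illustrative script, not part of the codebase:

```typescript
import * as net from "net";

// Encode a Redis command as a RESP array frame,
// e.g. ["PING"] -> "*1\r\n$4\r\nPING\r\n"
function encodeRespCommand(args: string[]): string {
  return (
    `*${args.length}\r\n` +
    args.map((a) => `$${Buffer.byteLength(a)}\r\n${a}\r\n`).join("")
  );
}

// Open a TCP socket to Redis, send PING, and resolve with the first
// reply line ("+PONG" on success, "-NOAUTH ..." if AUTH is required).
function pingRedis(host: string, port: number, timeoutMs = 3000): Promise<string> {
  return new Promise((resolve, reject) => {
    const socket = net.connect({ host, port }, () => {
      socket.write(encodeRespCommand(["PING"]));
    });
    socket.setTimeout(timeoutMs, () => {
      socket.destroy();
      reject(new Error(`timed out after ${timeoutMs}ms`));
    });
    socket.once("error", reject);
    socket.once("data", (buf) => {
      socket.end();
      resolve(buf.toString().trim());
    });
  });
}
```

Run e.g. `pingRedis("10.0.0.3", 6379).then(console.log, console.error)`; a `+PONG` reply means networking is fine and any remaining failures are application-level.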
Queue Jobs Not Processing
Symptoms:
- PDF preview generation hangs
- Jobs stuck in "waiting" state
Solutions:
- Check Redis is accessible from backend
- Verify BullMQ configuration in pdf.module.ts
- Check backend logs for Bull/queue errors
- Restart the backend service
Performance Issues
Symptoms:
- Slow queue processing
- High Redis latency
Solutions:
- Consider upgrading Redis instance size
- Check Redis memory usage (Cloud Console)
- Review queue job configuration (attempts, backoff)
- Consider enabling Redis persistence (Standard tier)
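When tuning attempts and backoff, it helps to see what the retry schedule actually looks like. Assuming an exponential strategy where the delay doubles per attempt (as in BullMQ's built-in exponential backoff; verify against the version you run), the waits can be computed as:

```typescript
// Delay before retry N (1-based) under exponential backoff with a base delay,
// matching the common "delay * 2^(attempt - 1)" strategy.
function backoffDelayMs(baseDelayMs: number, attempt: number): number {
  return baseDelayMs * 2 ** (attempt - 1);
}

// Full retry schedule for a job configured with
// { attempts, backoff: { type: "exponential", delay } }:
// a job with N attempts retries at most N - 1 times.
function retrySchedule(baseDelayMs: number, attempts: number): number[] {
  return Array.from({ length: attempts - 1 }, (_, i) => backoffDelayMs(baseDelayMs, i + 1));
}
```

For example, with `attempts: 3` and `delay: 1000` the job retries after 1 s and then 2 s before failing for good.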
Cost Estimation
Cloud Memorystore Costs (as of 2025)
Staging (Basic tier, 1GB):
- ~$40-50/month
Production (Standard tier, 2GB with replica):
- ~$200-250/month
Note: Prices vary by region. Check current pricing at: https://cloud.google.com/memorystore/pricing
Security Considerations
- Network Isolation: Cloud Memorystore instances are only accessible within your VPC
- Authentication: Memorystore AUTH is disabled by default; enable it per instance (--enable-auth) if you need Redis password authentication
- Encryption: Data is encrypted at rest and in transit
- Access Control: Use IAM roles to control who can manage Redis instances
- Secrets Management: Store Redis passwords in GitHub Secrets, never in code
References
- Cloud Memorystore Documentation
- BullMQ Documentation
- NestJS Bull Integration
- Cloud Run VPC Connector
Need Help?
If you encounter issues:
- Check Cloud Run logs: gcloud logging read ...
- Check Redis logs in Cloud Console
- Review this document's troubleshooting section
- Consult the PDF module implementation: apps/backend/src/pdf/pdf.module.ts