
Option B: Cloud Run Redis Container - Implementation Guide

Overview

This guide walks you through deploying Redis as a containerized service on Cloud Run. This option is best suited for staging/development environments where cost optimization is more important than data persistence.

⚠️ Important Warnings

  1. No Data Persistence: Cloud Run containers are ephemeral. When a container restarts, all Redis data is lost
  2. Unpredictable Restarts: Cloud Run may scale down to zero or restart containers, losing all queue data
  3. No High Availability: Single instance, no failover
  4. Limited Memory: Cloud Run has memory limits that may not scale well
  5. Job Loss Risk: Any jobs in the queue will be lost on restart

Good for Staging/Development Because

  1. Cost-Effective: Pay only when used, can scale to zero
  2. Easy Setup: No VPC configuration needed
  3. Quick Deployment: Deploy in minutes
  4. Simple Management: Standard Cloud Run interface

Architecture

┌─────────────────┐         ┌─────────────────┐
│    Backend      │         │     Redis       │
│   Cloud Run     │◄────────┤   Cloud Run     │
│    (Public)     │  HTTP   │   (Internal)    │
└─────────────────┘         └─────────────────┘

Implementation Steps

Step 1: Build and Deploy Redis Container

We've created a Redis Dockerfile at deploy/gcp/redis.Dockerfile. Deploy it:

For Staging Environment

# Set your project ID
PROJECT_ID="your-gcp-project-id"
REGION="us-central1"

# Build the image
docker build -f deploy/gcp/redis.Dockerfile -t gcr.io/$PROJECT_ID/redis:staging .

# Push to Google Container Registry
docker push gcr.io/$PROJECT_ID/redis:staging

# Deploy to Cloud Run
gcloud run deploy flowpos-redis-staging \
--image=gcr.io/$PROJECT_ID/redis:staging \
--region=$REGION \
--platform=managed \
--memory=512Mi \
--cpu=1 \
--min-instances=0 \
--max-instances=1 \
--port=6379 \
--no-allow-unauthenticated \
--ingress=internal \
--project=$PROJECT_ID

# Get the internal URL
REDIS_URL=$(gcloud run services describe flowpos-redis-staging \
--region=$REGION \
--platform=managed \
--format='value(status.url)' \
--project=$PROJECT_ID)

echo "Redis URL: $REDIS_URL"
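The backend needs a bare hostname rather than the https:// URL that gcloud prints. A minimal sketch of deriving it in bash, using a placeholder URL in place of the real gcloud output:

```shell
# Placeholder standing in for the URL printed by the gcloud command above
REDIS_URL="https://flowpos-redis-staging-abc123-uc.a.run.app"

# Strip the scheme with parameter expansion to get a host value
REDIS_HOST="${REDIS_URL#https://}"

echo "Redis host: $REDIS_HOST"
```

The same value is what goes into the REDIS_HOST_STAGING secret in Step 3.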

For Production (if you still want to use it)

# Deploy with more resources and keep at least 1 instance running
gcloud run deploy flowpos-redis-production \
--image=gcr.io/$PROJECT_ID/redis:production \
--region=$REGION \
--platform=managed \
--memory=1Gi \
--cpu=2 \
--min-instances=1 \
--max-instances=1 \
--port=6379 \
--no-allow-unauthenticated \
--ingress=internal \
--project=$PROJECT_ID

Step 2: Configure Backend to Use Cloud Run Redis

The challenge with Cloud Run Redis is that services communicate via HTTP/HTTPS URLs, but Redis uses TCP. We have two solutions:

Solution 2A: Harden the Redis Client (Simpler but Limited)

Keep the direct connection to the Cloud Run Redis service, but make the Bull client tolerant of cold starts and dropped connections.

Update apps/backend/src/pdf/pdf.module.ts:

BullModule.forRoot({
  redis: {
    host: process.env.REDIS_HOST || "localhost",
    port: Number.parseInt(process.env.REDIS_PORT || "6379", 10),
    password: process.env.REDIS_PASSWORD,
    // Add retry strategy for Cloud Run
    retryStrategy: (times) => {
      if (times > 3) {
        // Stop retrying after 3 attempts
        return null;
      }
      // Back off by 1s per attempt, capped at 5s
      return Math.min(times * 1000, 5000);
    },
    // Increase timeouts for Cloud Run cold starts
    connectTimeout: 10000,
    lazyConnect: true,
    enableReadyCheck: false,
  },
}),
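For illustration, the retryStrategy above gives up after three attempts and otherwise waits attempt × 1000 ms, capped at 5000 ms. The same schedule sketched as a shell function (not part of the backend):

```shell
# Prints the delay in ms for a given attempt number, or "stop" after 3 attempts
backoff() {
  local times=$1
  if [ "$times" -gt 3 ]; then
    echo "stop"
    return
  fi
  local ms=$(( times * 1000 ))
  if [ "$ms" -gt 5000 ]; then
    ms=5000
  fi
  echo "$ms"
}

backoff 1   # 1000
backoff 3   # 3000
backoff 4   # stop
```

Returning null (here "stop") tells the client to stop reconnecting, so a dead Redis doesn't keep the backend stuck in a retry loop.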

Solution 2B: Use Internal VPC (Better)

Enable VPC access for both services so they can communicate directly:

# Create VPC connector (if not exists)
gcloud compute networks vpc-access connectors create cloudrun-vpc-connector \
--region=$REGION \
--subnet-project=$PROJECT_ID \
--subnet=default \
--min-instances=2 \
--max-instances=10

# Redeploy Redis with VPC
gcloud run deploy flowpos-redis-staging \
--image=gcr.io/$PROJECT_ID/redis:staging \
--region=$REGION \
--platform=managed \
--memory=512Mi \
--cpu=1 \
--min-instances=1 \
--max-instances=1 \
--port=6379 \
--vpc-connector=cloudrun-vpc-connector \
--vpc-egress=private-ranges-only \
--project=$PROJECT_ID

# Get the service hostname (the service URL without the scheme)
REDIS_HOST=$(gcloud run services describe flowpos-redis-staging \
--region=$REGION \
--format='value(status.url)' | sed -e 's|^https\?://||')

echo "Redis Host: $REDIS_HOST"

Step 3: Update GitHub Secrets

Add these secrets to your GitHub repository:

Staging

  • REDIS_HOST_STAGING: Internal hostname of Redis Cloud Run service (e.g., flowpos-redis-staging-xxxxx.a.run.app)
  • REDIS_PORT_STAGING: 6379
  • REDIS_PASSWORD_STAGING: Leave empty (no auth in this setup)

Production

  • REDIS_HOST_PRODUCTION: Internal hostname of Redis Cloud Run service
  • REDIS_PORT_PRODUCTION: 6379
  • REDIS_PASSWORD_PRODUCTION: Leave empty (no auth in this setup)

Step 4: Update Backend Deployment

Follow the same workflow updates as described in the main Redis guide:

  1. Update deploy/gcp/backend.Dockerfile with Redis ARGs
  2. Update .github/workflows/deploy-staging.yml
  3. Update .github/workflows/deploy-production.yml

IMPORTANT: Make sure the backend also uses the VPC connector:

--vpc-connector=cloudrun-vpc-connector \
--vpc-egress=private-ranges-only \

Automated Deployment Script

Create a helper script to deploy everything:

File: scripts/deploy-redis-cloudrun.sh

#!/bin/bash

set -e

ENVIRONMENT=${1:-staging}
PROJECT_ID=${2:-your-project-id}
REGION=${3:-us-central1}

echo "🚀 Deploying Redis to Cloud Run for $ENVIRONMENT environment"

# Build and push image
echo "📦 Building Docker image..."
docker build -f deploy/gcp/redis.Dockerfile -t gcr.io/$PROJECT_ID/redis:$ENVIRONMENT .

echo "⬆️ Pushing to GCR..."
docker push gcr.io/$PROJECT_ID/redis:$ENVIRONMENT

# Deploy to Cloud Run
echo "🎯 Deploying to Cloud Run..."
if [ "$ENVIRONMENT" = "production" ]; then
  MIN_INSTANCES=1
  MEMORY="1Gi"
  CPU=2
else
  MIN_INSTANCES=0
  MEMORY="512Mi"
  CPU=1
fi

gcloud run deploy flowpos-redis-$ENVIRONMENT \
--image=gcr.io/$PROJECT_ID/redis:$ENVIRONMENT \
--region=$REGION \
--platform=managed \
--memory=$MEMORY \
--cpu=$CPU \
--min-instances=$MIN_INSTANCES \
--max-instances=1 \
--port=6379 \
--vpc-connector=cloudrun-vpc-connector \
--vpc-egress=private-ranges-only \
--no-allow-unauthenticated \
--ingress=internal \
--project=$PROJECT_ID

# Get the service URL
REDIS_URL=$(gcloud run services describe flowpos-redis-$ENVIRONMENT \
--region=$REGION \
--platform=managed \
--format='value(status.url)' \
--project=$PROJECT_ID)

echo "✅ Redis deployed successfully!"
echo "📝 Service URL: $REDIS_URL"
echo ""
echo "🔑 Add these to your GitHub Secrets:"
echo " REDIS_HOST_${ENVIRONMENT^^}: ${REDIS_URL#https://}"
echo " REDIS_PORT_${ENVIRONMENT^^}: 6379"
echo " REDIS_PASSWORD_${ENVIRONMENT^^}: (leave empty)"
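The `${ENVIRONMENT^^}` expansion in those final echo lines is bash 4+ upper-casing, which turns the environment name into the matching GitHub secret suffix:

```shell
# Upper-case the environment name to build the secret name (bash 4+)
ENVIRONMENT="staging"
SECRET_NAME="REDIS_HOST_${ENVIRONMENT^^}"

echo "$SECRET_NAME"   # REDIS_HOST_STAGING
```

Note this requires bash 4 or newer; the default /bin/bash on macOS is 3.2, where `${VAR^^}` is a syntax error.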

Make it executable:

chmod +x scripts/deploy-redis-cloudrun.sh

Run it:

# Deploy staging
./scripts/deploy-redis-cloudrun.sh staging your-project-id us-central1

# Deploy production
./scripts/deploy-redis-cloudrun.sh production your-project-id us-central1

Monitoring and Troubleshooting

Check Redis Service Status

gcloud run services describe flowpos-redis-staging \
--region=us-central1 \
--platform=managed

View Redis Logs

gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=flowpos-redis-staging" \
--limit=50 \
--format=json

Test Redis Connection from Backend

Add a health check endpoint in your backend:

// apps/backend/src/health/health.controller.ts
import { Controller, Get } from '@nestjs/common';
import { InjectQueue } from '@nestjs/bull';
import { Queue } from 'bull';

@Controller('health')
export class HealthController {
  constructor(
    @InjectQueue('preview-generation') private previewQueue: Queue,
  ) {}

  @Get('redis')
  async checkRedis() {
    try {
      const client = await this.previewQueue.client;
      await client.ping();
      return { status: 'ok', redis: 'connected' };
    } catch (error) {
      return { status: 'error', redis: 'disconnected', error: error.message };
    }
  }
}

Test it:

curl https://your-backend-url/health/redis

Common Issues

1. "Connection Refused"

Cause: Services can't communicate

Solution: Ensure both services use the same VPC connector:

gcloud run services update flowpos-backend \
--vpc-connector=cloudrun-vpc-connector \
--vpc-egress=private-ranges-only

2. "Connection Timeout"

Cause: Cold start or networking issue

Solution:

  • Set min-instances=1 for Redis to avoid cold starts
  • Increase connection timeout in backend Redis config

3. "All Jobs Lost"

Cause: Redis container restarted

Solution:

  • This is expected behavior with Cloud Run
  • Consider switching to Cloud Memorystore for production
  • Implement job persistence/retry logic in application

Cost Comparison

Cloud Run Redis (Estimated Monthly Cost)

Staging (512Mi, 1 CPU, min 0 instances):

  • With 50% uptime: ~$5-10/month
  • Always on (min 1 instance): ~$15-20/month

Production (1Gi, 2 CPU, min 1 instance always on):

  • ~$30-40/month

vs. Cloud Memorystore

Feature             | Cloud Run Redis | Cloud Memorystore
--------------------|-----------------|-------------------
Cost (Staging)      | $5-20/month     | $40-50/month
Cost (Production)   | $30-40/month    | $200-250/month
Data Persistence    | ❌ No           | ✅ Yes
High Availability   | ❌ No           | ✅ Yes (Standard)
Setup Complexity    | ⭐⭐ Easy        | ⭐⭐⭐ Medium
Scalability         | ⭐⭐ Limited     | ⭐⭐⭐⭐ Excellent

Recommendations

Use Cloud Run Redis For

  • ✅ Development environments
  • ✅ Staging environments
  • ✅ Testing and POC
  • ✅ Low-traffic applications
  • ✅ Non-critical background jobs

Use Cloud Memorystore For

  • ✅ Production environments
  • ✅ Business-critical applications
  • ✅ High-traffic applications
  • ✅ Applications requiring data persistence
  • ✅ Applications with complex queuing needs

Migration Path

Start with Cloud Run, Migrate to Memorystore Later

  1. Phase 1: Deploy Cloud Run Redis for staging (cost-effective testing)
  2. Phase 2: Test your application thoroughly
  3. Phase 3: Deploy Cloud Memorystore for production
  4. Phase 4: If satisfied, optionally migrate staging to Memorystore

This approach lets you validate the setup cheaply before committing to the more expensive production solution.

Next Steps

  1. ✅ Review the Redis Dockerfile created at deploy/gcp/redis.Dockerfile
  2. ✅ Run the deployment script or deploy manually
  3. ✅ Configure GitHub secrets with the Redis connection details
  4. ✅ Update backend deployment workflows
  5. ✅ Test the integration thoroughly
  6. ✅ Monitor for any job losses or connection issues
  7. ✅ Plan migration to Cloud Memorystore for production if needed

Support

If you encounter issues specific to Cloud Run Redis deployment, check:

  • Cloud Run service logs
  • VPC connector configuration
  • Backend Redis connection settings
  • Network connectivity between services

You already have a working redis.Dockerfile — the sections below walk through deploying Redis to Google Cloud Run with it, step by step.


⚙️ 1. Understand the Setup

✅ Your deploy/gcp/redis.Dockerfile builds a stateless Redis image — ideal for development/staging, not production (because Cloud Run containers are ephemeral and have no persistent disk).

Alternatives for production:

  • Use Cloud Memorystore (Redis) → managed, persistent, scalable.
  • Use Cloud Run Redis only for temporary caches, queues, or CI preview environments.

🚀 2. Build and Push the Image to Artifact Registry

If you don’t have an Artifact Registry yet:

gcloud artifacts repositories create flowpos-repo \
--repository-format=docker \
--location=us-central1 \
--description="FlowPOS container repository"

Then, from your project root:

# Authenticate Docker with Artifact Registry
gcloud auth configure-docker us-central1-docker.pkg.dev

# Build the image
docker build -t us-central1-docker.pkg.dev/PROJECT_ID/flowpos-repo/redis:latest -f deploy/gcp/redis.Dockerfile .

# Push it
docker push us-central1-docker.pkg.dev/PROJECT_ID/flowpos-repo/redis:latest

Replace PROJECT_ID with your actual GCP project ID.


☁️ 3. Deploy to Cloud Run

Redis uses port 6379, so you must tell Cloud Run to listen on that port and allow unauthenticated connections only if you want external access.

gcloud run deploy redis-cache \
--image us-central1-docker.pkg.dev/PROJECT_ID/flowpos-repo/redis:latest \
--platform managed \
--region us-central1 \
--memory 512Mi \
--cpu 1 \
--port 6379 \
--allow-unauthenticated \
--max-instances 1 \
--timeout 300

✅ Notes:

  • --max-instances 1 ensures you always hit the same container (important for stateful cache-like use).
  • Remove --allow-unauthenticated if you only want internal access from other Cloud Run services.

If only your backend should connect to Redis:

gcloud run services update redis-cache \
--no-allow-unauthenticated \
--ingress internal-and-cloud-load-balancing

Then connect from another Cloud Run service in the same VPC connector using:

REDIS_HOST=redis-cache-xxxxxx-uc.a.run.app
REDIS_PORT=6379

or, if using VPC:

REDIS_HOST=redis-cache

(you’ll need a Serverless VPC Connector and set --vpc-connector on both services)


🧠 4. Verify It’s Running

gcloud run services describe redis-cache --region us-central1

You should see:

  • URL (for internal connection)
  • Status: Active
  • Logs in Cloud Logging

You can test connectivity from your backend container:

redis-cli -h redis-cache-xxxxxx-uc.a.run.app -p 6379 ping

💡 Tip: For Local Dev

You can run it locally with the same image:

docker run -p 6379:6379 us-central1-docker.pkg.dev/PROJECT_ID/flowpos-repo/redis:latest


If the deployment is still failing after the steps above, here is how to diagnose and fix it.


🔍 Why it’s still failing

Even after setting --port 6379, Cloud Run injects its own PORT environment variable (typically 8080) and expects your container to listen on that port. Your Redis image ignores that and still starts on 6379, so Cloud Run health checks fail.


✅ Fix: Let Redis listen on $PORT

Update your deploy/gcp/redis.Dockerfile to use the PORT env var dynamically:

FROM redis:7-alpine

# Default to 6379 locally; Cloud Run overrides this with its injected PORT
ENV PORT=6379

# Healthcheck (Cloud Run ignores Docker HEALTHCHECK, but it helps local docker runs)
HEALTHCHECK --interval=10s --timeout=5s --start-period=15s --retries=5 \
  CMD redis-cli -p ${PORT} ping || exit 1

EXPOSE ${PORT}

# Run Redis on the same port as Cloud Run expects
CMD ["sh", "-c", "redis-server --port ${PORT} \
  --appendonly no \
  --maxmemory 256mb \
  --maxmemory-policy allkeys-lru \
  --tcp-backlog 511 \
  --timeout 0 \
  --tcp-keepalive 300"]
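The net effect of the ENV/CMD pair is ordinary parameter defaulting: Redis listens on whatever PORT Cloud Run injects, and falls back to 6379 when nothing is set (e.g., a local docker run). A standalone sketch of that logic:

```shell
# Simulate Cloud Run injecting its port
PORT=8080
LISTEN_PORT="${PORT:-6379}"
echo "Cloud Run: listening on $LISTEN_PORT"   # 8080

# Simulate a local run with no PORT set
unset PORT
LISTEN_PORT="${PORT:-6379}"
echo "Local: listening on $LISTEN_PORT"   # 6379
```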

Then rebuild and redeploy:

docker build -t us-central1-docker.pkg.dev/barto-dev/flowpos-redis/redis:latest -f deploy/gcp/redis.Dockerfile .
docker push us-central1-docker.pkg.dev/barto-dev/flowpos-redis/redis:latest

gcloud run deploy flowpos-redis \
--image us-central1-docker.pkg.dev/barto-dev/flowpos-redis/redis:latest \
--region us-central1 \
--platform managed \
--memory 512Mi \
--cpu 1 \
--timeout 600 \
--max-instances 1 \
--allow-unauthenticated

🧠 Why this works

  • Cloud Run injects PORT=8080 by default.
  • The new CMD runs Redis on that same port, satisfying health checks.
  • You can override it manually (--set-env-vars PORT=6379) if needed.

⚠️ Important reminder

This setup is non-persistent and single-instance — good for staging or cache testing only. For production, use Cloud Memorystore (Redis) instead.


The logs make the root cause clear:

terminated: Application failed to start: failed to load /usr/local/bin/docker-entrypoint.sh: exec format error

🧩 Root Cause

That error means Cloud Run tried to execute the redis entrypoint inside your image, but the binary is not compatible with Cloud Run’s CPU architecture. Cloud Run runs on x86-64 (amd64), but your local Docker (on a Mac with Apple Silicon / M-series chip) built an ARM64 image.

When you pushed that ARM image, Cloud Run pulled it, tried to run it on amd64 hardware → ❌ “exec format error.”


✅ Fix: Rebuild the image for amd64

1️⃣ Build explicitly for amd64

docker buildx build \
--platform linux/amd64 \
-t us-central1-docker.pkg.dev/barto-dev/flowpos-redis/redis:latest \
-f deploy/gcp/redis.Dockerfile \
--push .

This ensures your image is compiled for the correct CPU architecture.


2️⃣ Redeploy cleanly

After the push completes:

gcloud run deploy flowpos-redis \
--image us-central1-docker.pkg.dev/barto-dev/flowpos-redis/redis:latest \
--region us-central1 \
--platform managed \
--memory 512Mi \
--cpu 1 \
--timeout 600 \
--max-instances 1 \
--allow-unauthenticated

🧠 Why this works

Platform                   | Architecture           | What happens
---------------------------|------------------------|----------------------------------------------
Local build (Mac M1/M2/M3) | arm64                  | Runs locally fine, fails on Cloud Run
Cloud Run                  | amd64                  | Needs an amd64 image
Fix                        | --platform linux/amd64 | Forces x86 build so Cloud Run can execute it

🧩 Optional: Make Docker use multi-arch by default

If you build images for both local and Cloud Run often, set up a multi-platform builder:

docker buildx create --name multiarch --use
docker buildx inspect --bootstrap

Then build with:

docker buildx build --platform linux/amd64,linux/arm64 ...

Once you rebuild with --platform linux/amd64, the exec format error will disappear and Redis will start normally on Cloud Run.

The next (secure) setup phase is storing Redis connection details in GCP Secret Manager and using them in Cloud Run.

Here is what secrets to create, what values they need, and how to inject them safely.


🧩 1️⃣ Determine your Redis setup

You have two possible Redis setups, and the secrets differ slightly:

Setup                        | Description                                         | Recommended For
-----------------------------|-----------------------------------------------------|---------------------
🟢 Cloud Memorystore         | Managed Redis by GCP (no public IP, internal-only)  | Production / Staging
🟡 Cloud Run Redis Container | Your custom Redis Docker image running on Cloud Run | Local / Dev-only

⚙️ 2️⃣ Secrets to Create (same naming works for both setups)

Secret Name       | Example Value                                                                                       | Description
------------------|-----------------------------------------------------------------------------------------------------|---------------------------------------------------------------------
REDIS_HOST        | 10.8.0.3 (Cloud Memorystore internal IP) or flowpos-redis-xxxxx-uc.a.run.app (Cloud Run Redis URL)  | Hostname or IP of your Redis server
REDIS_PORT        | 6379                                                                                                | TCP port (always 6379 unless you changed it)
REDIS_PASSWORD    | (optional, if password-protected)                                                                   | Authentication password for Redis
REDIS_TLS_ENABLED | false or true                                                                                       | Whether the Redis endpoint uses TLS (Memorystore defaults to false)
REDIS_URL         | redis://:<PASSWORD>@<HOST>:<PORT>                                                                   | Full connection URI (optional convenience secret)

🧠 Example: Cloud Memorystore secrets

Let’s say you created your Redis instance like this:

gcloud redis instances create flowpos-redis \
--size=1 \
--region=us-central1 \
--redis-version=redis_7_0

Get its internal IP:

gcloud redis instances describe flowpos-redis --region=us-central1 \
--format="value(host)"

Then create the secrets:

echo "10.8.0.3" | gcloud secrets create REDIS_HOST --data-file=-
echo "6379" | gcloud secrets create REDIS_PORT --data-file=-
echo "false" | gcloud secrets create REDIS_TLS_ENABLED --data-file=-

🧪 Example: Cloud Run Redis (dev-only)

If your Redis is on Cloud Run and reachable via HTTPS (for testing):

echo "flowpos-redis-723334209984.us-central1.run.app" | gcloud secrets create REDIS_HOST --data-file=-
echo "8080" | gcloud secrets create REDIS_PORT --data-file=-
echo "false" | gcloud secrets create REDIS_TLS_ENABLED --data-file=-

🔐 3️⃣ Inject secrets into your backend Cloud Run service

Attach secrets to your backend (e.g. flowpos-backend) with environment variables:

gcloud run services update flowpos-backend \
--region us-central1 \
--update-secrets=REDIS_HOST=REDIS_HOST:latest,REDIS_PORT=REDIS_PORT:latest,REDIS_TLS_ENABLED=REDIS_TLS_ENABLED:latest

✅ Cloud Run will:

  • Automatically mount them as env vars
  • Rotate them if the secret changes

🧠 4️⃣ Use the secrets in your code

In Node.js, NestJS, or similar backends:

const redisConfig = {
  host: process.env.REDIS_HOST,
  port: parseInt(process.env.REDIS_PORT || '6379', 10),
  tls: process.env.REDIS_TLS_ENABLED === 'true' ? {} : undefined,
};

or if using ioredis:

import Redis from "ioredis";

const client = new Redis({
  host: process.env.REDIS_HOST,
  port: Number(process.env.REDIS_PORT),
  password: process.env.REDIS_PASSWORD,
  tls: process.env.REDIS_TLS_ENABLED === 'true' ? {} : undefined,
});

🧱 5️⃣ Optional: full URI secret

If you prefer one secret instead of multiple:

echo "redis://:mypassword@10.8.0.3:6379" | gcloud secrets create REDIS_URL --data-file=-

Then in your code:

const client = new Redis(process.env.REDIS_URL);
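The URI stored in that secret is just the individual values joined in the redis:// scheme; assembling it in shell before creating the secret (placeholder values):

```shell
# Placeholder connection details — substitute your real values
REDIS_HOST="10.8.0.3"
REDIS_PORT="6379"
REDIS_PASSWORD="mypassword"

# redis://:<password>@<host>:<port>
REDIS_URL="redis://:${REDIS_PASSWORD}@${REDIS_HOST}:${REDIS_PORT}"

echo "$REDIS_URL"   # redis://:mypassword@10.8.0.3:6379
```

If Redis has no password, drop the `:<password>@` segment entirely rather than leaving it empty.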

✅ Summary

Secret            | Example Value                     | Purpose
------------------|-----------------------------------|-------------------------
REDIS_HOST        | 10.8.0.3                          | IP or hostname
REDIS_PORT        | 6379                              | Redis port
REDIS_PASSWORD    | (optional)                        | Redis auth
REDIS_TLS_ENABLED | false                             | Whether to use TLS
REDIS_URL         | redis://:<password>@10.8.0.3:6379 | Combined URI (optional)


This confirms that Redis never passes the Cloud Run startup probe. The earlier logs show Redis does start and listens on TCP 8080, but it doesn’t speak HTTP: Cloud Run’s health checker sends an HTTP GET /, Redis rejects it as an invalid command, the container exits, and startup fails again.

In short:

Redis starts ✅
Cloud Run sends HTTP probe ❌ (Redis sees invalid command)
Container exits ❌

🧩 Why this happens

Cloud Run can only route and probe HTTP or gRPC containers. Redis is a raw TCP service — it will never satisfy the HTTP readiness probe, so the service will loop between “starting” and “terminated”.


✅ Fix / Work-arounds

  • 🟢 Use Cloud Memorystore for Redis — managed Redis instance reachable over internal VPC. Pros: native Redis, persistent, no HTTP layer issues. Cons: costs a bit more.
  • 🟡 Run Redis on Compute Engine VM or GKE — full control, standard Redis port 6379. Pros: flexible. Cons: ops overhead.
  • 🧪 Keep Cloud Run only for testing, wrapped with an HTTP proxy — a small Express/Go proxy answers / for Cloud Run and forwards TCP to local Redis. Pros: lets Cloud Run pass the health check. Cons: extra code, not production-grade.

# create Redis instance
gcloud redis instances create flowpos-redis \
--size=1 \
--region=us-central1 \
--redis-version=redis_7_0

# get instances list
gcloud redis instances list --region=us-central1 --project=barto-dev

# get internal IP
gcloud redis instances describe flowpos-redis \
--region=us-central1 --format="value(host)"

# Port (confirm)
gcloud redis instances describe flowpos-redis \
--region us-central1 \
--format='value(port)'
# expected: 6379

# create serverless VPC connector
gcloud compute networks vpc-access connectors create flowpos-vpc \
--region=us-central1 --network default --range 10.9.0.0/28

# Run this command to list all connectors in your project and region:
gcloud compute networks vpc-access connectors list \
--region=us-central1

# If you see flowpos-vpc or another connector that failed partially, delete it:
gcloud compute networks vpc-access connectors delete flowpos-vpc \
--region=us-central1

# connect backend Cloud Run service
gcloud run services update flowpos-backend \
--region us-central1 \
--vpc-connector flowpos-vpc \
--vpc-egress all-traffic \
--set-env-vars REDIS_HOST=10.117.86.3,REDIS_PORT=6379

# Verify connectivity
# Once deployed, run inside your backend container:
nc -zv 10.117.86.3 6379

Then store these values securely with Secret Manager:

echo "10.8.0.3" | gcloud secrets create REDIS_HOST --data-file=-
echo "6379" | gcloud secrets create REDIS_PORT --data-file=-

and update your backend:

gcloud run services update flowpos-backend \
--region us-central1 \
--update-secrets REDIS_HOST=REDIS_HOST:latest,REDIS_PORT=REDIS_PORT:latest

🚫 TL;DR

You cannot successfully run bare Redis on Cloud Run because it fails HTTP health checks. ✅ Use Cloud Memorystore for Redis (preferred) or GCE/GKE for persistent TCP Redis. If you only need a demo, wrap Redis in a tiny HTTP proxy container.

✅ Yes — that exact command creates a Cloud Memorystore for Redis instance (i.e., a managed Redis service in GCP).

Here is the breakdown:


🧠 What the command does

gcloud redis instances create flowpos-redis \
--size=1 \
--region=us-central1 \
--redis-version=redis_7_0

This creates a Cloud Memorystore (Redis) instance named flowpos-redis in your project barto-dev.

Under the hood:

  • It provisions a managed Redis VM inside Google’s private network.
  • It gives you an internal IP address (e.g. 10.8.0.3) — not public.
  • It’s not accessible via HTTP or Cloud Run URL — only via VPC (private network).
  • It handles all patching, monitoring, and persistence automatically.

⚙️ What happens next

  1. Get its IP address:

    gcloud redis instances describe flowpos-redis \
    --region=us-central1 \
    --format="value(host)"

    Example output:

    10.8.0.3
  2. Connect from your Cloud Run backend (e.g., flowpos-backend) using a Serverless VPC Connector:

    gcloud compute networks vpc-access connectors create flowpos-vpc \
    --region=us-central1 \
    --network default \
    --range 10.8.0.0/28
  3. Attach the connector to your backend Cloud Run service:

    gcloud run services update flowpos-backend \
    --region us-central1 \
    --vpc-connector flowpos-vpc \
    --set-env-vars REDIS_HOST=10.8.0.3,REDIS_PORT=6379
  4. Connect in code:

    import Redis from "ioredis";

    const client = new Redis({
      host: process.env.REDIS_HOST,
      port: Number(process.env.REDIS_PORT),
    });

🏷️ Naming note

Even though you called it flowpos-redis, it’s still Cloud Memorystore — the name is just a label. If you prefer semantic clarity, you can delete and recreate as flowpos-cache:

gcloud redis instances delete flowpos-redis --region=us-central1

then:

gcloud redis instances create flowpos-cache \
--size=1 \
--region=us-central1 \
--redis-version=redis_7_0

In summary:

Yes — gcloud redis instances create ... provisions a Cloud Memorystore for Redis (a fully managed, persistent, VPC-only Redis). It’s the right approach for production and staging use cases.

Here are safe, concrete steps (with commands) to destroy the resources you likely created for the Redis setup. They include checks so you don’t accidentally remove the wrong thing. Run the commands in your terminal (replace PROJECT_ID / names if different).

⚠️ Warning: these are destructive operations (irreversible). Double-check names before running. You must have the right IAM permissions (roles/owner or specific delete permissions for Redis, run, compute, artifacts, secretmanager).


1) Delete the Memorystore Redis instance

This removes the managed Redis instance (Cloud Memorystore).

Check instance(s):

gcloud redis instances list --region=us-central1 --project=barto-dev

Delete (example name flowpos-redis or flowpos-cache):

gcloud redis instances delete flowpos-redis --region=us-central1 --project=barto-dev
# or, if you used flowpos-cache:
# gcloud redis instances delete flowpos-cache --region=us-central1 --project=barto-dev

Confirm when prompted (or add --quiet to skip prompt).

Check status:

gcloud redis instances describe flowpos-redis --region=us-central1 --project=barto-dev || echo "Instance not found"

2) Delete the Serverless VPC Connector (if you created one)

List connectors:

gcloud compute networks vpc-access connectors list --region=us-central1 --project=barto-dev

Delete connector (example flowpos-vpc):

gcloud compute networks vpc-access connectors delete flowpos-vpc --region=us-central1 --project=barto-dev

Note: A connector cannot be deleted while in use; ensure no Cloud Run services reference it. You can remove the connector from a Cloud Run service first:

gcloud run services update flowpos-backend --region=us-central1 --remove-vpc-connector --project=barto-dev

3) (Optional) Delete the Cloud Run service you used for Redis

If you created a Cloud Run service flowpos-redis (the containerized Redis), remove it:

List:

gcloud run services list --region=us-central1 --project=barto-dev

Delete:

gcloud run services delete flowpos-redis --region=us-central1 --project=barto-dev

4) Delete the Artifact Registry image and/or repository

Remove the pushed image tag (this frees storage and avoids confusion):

Delete specific image tag:

gcloud artifacts docker images delete \
us-central1-docker.pkg.dev/barto-dev/flowpos-redis/redis:latest \
--project=barto-dev --delete-tags --quiet

If you want to delete the entire repository:

gcloud artifacts repositories delete flowpos-redis --location=us-central1 --project=barto-dev

5) Delete Secret Manager secrets (if you created any)

List secrets:

gcloud secrets list --project=barto-dev

Delete a secret (example REDIS_HOST):

gcloud secrets delete REDIS_HOST --project=barto-dev
gcloud secrets delete REDIS_PORT --project=barto-dev
gcloud secrets delete REDIS_URL --project=barto-dev
# ...repeat for any other secrets you created

6) (Optional) Remove any IAM/service account bindings you created

If you granted extra roles to service accounts for VPC or Redis, remove them. Example to remove a role binding:

gcloud projects remove-iam-policy-binding barto-dev \
--member="serviceAccount:YOUR_SA_EMAIL" \
--role="roles/redis.editor"

(Adjust YOUR_SA_EMAIL and role accordingly.)


7) Final sanity checks

  • Confirm no Redis instances:
gcloud redis instances list --region=us-central1 --project=barto-dev
  • Confirm no connector:
gcloud compute networks vpc-access connectors list --region=us-central1 --project=barto-dev
  • Confirm Cloud Run services:
gcloud run services list --region=us-central1 --project=barto-dev
  • Confirm secrets:
gcloud secrets list --project=barto-dev

8) If you prefer to remove everything with a single script

Here’s a safe copy/paste script — review and edit names before running:

#!/usr/bin/env bash
set -euo pipefail

PROJECT="barto-dev"
REGION="us-central1"
REDIS_NAME="flowpos-redis" # change to flowpos-cache if you used that
VPC_CONNECTOR="flowpos-vpc"
CLOUDRUN_SERVICE="flowpos-redis"
ARTIFACT_IMAGE="us-central1-docker.pkg.dev/${PROJECT}/flowpos-redis/redis:latest"

echo "Deleting Redis instance ${REDIS_NAME}..."
gcloud redis instances delete "$REDIS_NAME" --region="$REGION" --project="$PROJECT" --quiet || true

echo "Deleting Cloud Run service ${CLOUDRUN_SERVICE}..."
gcloud run services delete "$CLOUDRUN_SERVICE" --region="$REGION" --project="$PROJECT" --quiet || true

echo "Deleting VPC connector ${VPC_CONNECTOR}..."
gcloud compute networks vpc-access connectors delete "$VPC_CONNECTOR" --region="$REGION" --project="$PROJECT" --quiet || true

echo "Deleting artifact image ${ARTIFACT_IMAGE}..."
gcloud artifacts docker images delete "$ARTIFACT_IMAGE" --project="$PROJECT" --delete-tags --quiet || true

echo "Deleting artifact repository flowpos-redis..."
gcloud artifacts repositories delete flowpos-redis --location="$REGION" --project="$PROJECT" --quiet || true

echo "Deleting secrets (REDIS_HOST, REDIS_PORT, REDIS_URL) if present..."
for s in REDIS_HOST REDIS_PORT REDIS_URL REDIS_PASSWORD REDIS_TLS_ENABLED; do
  gcloud secrets delete "$s" --project="$PROJECT" --quiet || true
done

echo "Done. Verify with gcloud redis instances list and gcloud run services list."

Run after verifying names:

chmod +x destroy-redis-resources.sh
./destroy-redis-resources.sh


Since you’re working on staging and need to use Cloud Run, here is how to run Redis there while keeping things stable and Cloud-Run-compatible.


✅ 1. Key Concept

Cloud Run only supports HTTP traffic, but Redis is pure TCP. So you cannot run “bare” Redis in Cloud Run production mode — it will always fail the health check. However, in staging, there are two working approaches:


🟢 Option A (recommended): Use Cloud Memorystore

This gives you a real Redis backend, with no need to run a container.

# Create managed Redis (Cloud Memorystore)
gcloud redis instances create flowpos-redis \
--size=1 \
--region=us-central1 \
--redis-version=redis_7_0

Then:

# Get the internal IP and capture it for later steps
REDIS_HOST=$(gcloud redis instances describe flowpos-redis \
--region=us-central1 --format="value(host)")

Create a Serverless VPC Connector (note: its /28 range is used by the connector itself — it is not the Redis address):

gcloud compute networks vpc-access connectors create flowpos-staging-vpc \
--region=us-central1 --network default --range 10.8.0.0/28

Attach it to your staging backend Cloud Run service, passing the Memorystore IP captured above:

gcloud run services update flowpos-backend-staging \
--region=us-central1 \
--vpc-connector flowpos-staging-vpc \
--set-env-vars REDIS_HOST=$REDIS_HOST,REDIS_PORT=6379

✅ Works with the real Redis protocol
✅ Survives restarts
✅ No HTTP conflict


🟡 Option B (for testing only): Wrap Redis in an HTTP Proxy

If you insist on using a Cloud Run Redis container (for isolated testing):

  1. Keep your redis:7-alpine image.
  2. Add a minimal Node.js/Express proxy that listens on $PORT and forwards to Redis (localhost:6379).
  3. Cloud Run will hit / successfully, while you can still connect to Redis via gcloud run services proxy from your laptop.

Example wrapper (server.js):

import express from "express";
import { createClient } from "redis";

const app = express();
const redis = createClient({ url: "redis://localhost:6379" });
await redis.connect();

app.get("/", (req, res) => res.send("Redis proxy alive"));
app.get("/ping", async (_, res) => res.send(await redis.ping()));

app.listen(process.env.PORT || 8080);

Start both Redis + proxy in the same container via supervisord or a small shell script.
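For the small-shell-script route, a minimal entrypoint sketch — this assumes the final image contains both redis-server and node, and that server.js is the wrapper shown above (file names are illustrative):

```shell
#!/bin/sh
# start.sh — illustrative entrypoint for the combined container:
# 1) start Redis bound to localhost only, in the background
# 2) exec the HTTP wrapper in the foreground so Cloud Run's lifecycle
#    (signals, health checks) tracks the Node process
redis-server --bind 127.0.0.1 --port 6379 --daemonize yes
exec node /app/server.js
```

In the Dockerfile, copy the script in and set `CMD ["./start.sh"]`; use supervisord instead if you want either process restarted on crash.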


🔐 Secrets for staging

Create these once:

# REDIS_HOST must be the Memorystore private IP (from the describe command above),
# not an address in the VPC connector's range
echo "<MEMORYSTORE_IP>" | gcloud secrets create REDIS_HOST --data-file=-
echo "6379" | gcloud secrets create REDIS_PORT --data-file=-
echo "false" | gcloud secrets create REDIS_TLS_ENABLED --data-file=-

Attach them to your staging backend:

gcloud run services update flowpos-backend-staging \
--region=us-central1 \
--update-secrets=REDIS_HOST=REDIS_HOST:latest,REDIS_PORT=REDIS_PORT:latest,REDIS_TLS_ENABLED=REDIS_TLS_ENABLED:latest

(The secret list must be a single comma-separated argument — spaces after the commas would split it into separate flags and make the command fail.)

⚙️ Code side (NestJS / Node)

import Redis from "ioredis";

const redis = new Redis({
host: process.env.REDIS_HOST,
port: Number(process.env.REDIS_PORT || 6379),
tls: process.env.REDIS_TLS_ENABLED === "true" ? {} : undefined,
});

🧹 Clean up later (if you switch to prod)

gcloud redis instances delete flowpos-redis --region=us-central1
gcloud compute networks vpc-access connectors delete flowpos-staging-vpc --region=us-central1

Summary

Purpose           | Use                                               | Works in Cloud Run? | Recommended
Staging / Testing | Cloud Memorystore for Redis + VPC Connector       | ✅                  | ✅
Local Dev         | Cloud Run container via gcloud run services proxy | ✅ (locally)        | 🟡
Prod              | Cloud Memorystore for Redis                       | ✅                  | ✅

Would you like me to generate the exact deploy-staging.yml (GitHub Actions) snippet that builds your backend image and deploys it to Cloud Run with the VPC connector + Redis secrets automatically?

Great—since you’re switching to Cloud Memorystore for Redis, here’s exactly what to put into your three secrets and how to fetch them.


What to store in each secret

  • REDIS_HOST → the private IP of your Memorystore instance
  • REDIS_PORT → usually 6379 (you can confirm)
  • REDIS_PASSWORD → only if you enabled AUTH on the instance; otherwise leave it unset/empty

How to get the values

Replace INSTANCE_ID (e.g., flowpos-staging-cache) and region if needed.

1) Host (private IP)

gcloud redis instances describe INSTANCE_ID \
--region us-central1 \
--format='value(host)'

Copy that IP into your REDIS_HOST secret. (Docs show host/port are the connection fields.) (Google Cloud)

2) Port (confirm)

gcloud redis instances describe INSTANCE_ID \
--region us-central1 \
--format='value(port)'
# expected: 6379

Default is 6379. (Google Cloud)

3) Password (only if AUTH is enabled)

If you haven’t enabled AUTH yet:

gcloud redis instances update INSTANCE_ID \
--region us-central1 \
--enable-auth \
--quiet

(You can also enable/disable AUTH in the Console.) (Google Cloud)

Then fetch the auto-generated AUTH string:

gcloud redis instances get-auth-string INSTANCE_ID \
--region us-central1 \
--format='value(authString)'

Put that value into your REDIS_PASSWORD secret. (Memorystore generates a 36-char AUTH string when AUTH is enabled.) (Google Cloud)

If you don’t enable AUTH, do not set REDIS_PASSWORD and make sure your app tolerates a missing password.
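One way to keep the app tolerant of a missing password is to build the client options from the environment and only include the fields that are actually set. A minimal sketch — buildRedisOptions is a name introduced here for illustration, not an ioredis API:

```javascript
// buildRedisOptions (hypothetical helper): derive ioredis connection options
// from environment variables, omitting password/tls entirely when they are
// not configured, so the same code works with or without AUTH enabled.
function buildRedisOptions(env) {
  const options = {
    host: env.REDIS_HOST,
    port: Number(env.REDIS_PORT || 6379),
  };
  if (env.REDIS_PASSWORD) options.password = env.REDIS_PASSWORD;
  if (env.REDIS_TLS_ENABLED === "true") options.tls = {};
  return options;
}

// Without AUTH: the password key is absent (not merely undefined).
console.log(buildRedisOptions({ REDIS_HOST: "10.0.0.5" }));
// With AUTH + TLS enabled:
console.log(buildRedisOptions({
  REDIS_HOST: "10.0.0.5",
  REDIS_PASSWORD: "s3cret",
  REDIS_TLS_ENABLED: "true",
}));
```

Then `new Redis(buildRedisOptions(process.env))` replaces the inline options object.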


Where to put the secrets (your workflow)

You’re currently reading REDIS_HOST, REDIS_PORT, REDIS_PASSWORD from GitHub Actions Secrets and passing them to Cloud Run via --set-env-vars in the Deploy Backend step. That’s fine—just set these values in GitHub → Settings → Secrets → Actions.

Even better (for rotation & least-privilege): store these in Secret Manager and attach them at deploy time:

# one-time: create secrets in GCP Secret Manager
# (use your Memorystore private IP from the describe command, and the AUTH string if enabled)
printf "<MEMORYSTORE_IP>" | gcloud secrets create REDIS_HOST --data-file=-
printf "6379" | gcloud secrets create REDIS_PORT --data-file=-
printf "<AUTH_STRING>" | gcloud secrets create REDIS_PASSWORD --data-file=-

# in your deploy step, use update-secrets (instead of baking into image)
gcloud run deploy flowpos-backend \
... \
--update-secrets=REDIS_HOST=REDIS_HOST:latest,REDIS_PORT=REDIS_PORT:latest,REDIS_PASSWORD=REDIS_PASSWORD:latest

(Manage AUTH and password retrieval in the docs; you can enable/disable AUTH and get the string via gcloud or Console.) (Google Cloud)


Connecting from Cloud Run

Cloud Run must reach Memorystore over VPC. Use Direct VPC egress (recommended) or a Serverless VPC connector; both are supported in the official guide. (Google Cloud)


Minimal app config (Node/ioredis example)

import Redis from "ioredis";

const client = new Redis({
host: process.env.REDIS_HOST,
port: Number(process.env.REDIS_PORT || 6379),
// Only include password if AUTH is enabled:
password: process.env.REDIS_PASSWORD || undefined,
// If you later enable TLS-in-transit, set tls: {}
// tls: {},
});

Sources

  • Create/manage instances and enable AUTH flag (--enable-auth). (Google Cloud)
  • Get AUTH string and manage AUTH on an instance. (Google Cloud)
  • Default host/port fields (6379). (Google Cloud)
  • Cloud Run ↔ Memorystore connectivity (Direct VPC egress or VPC connector). (Google Cloud)

If you tell me your instance name, I can generate the exact gcloud commands with the concrete values you can paste into your GitHub secrets.

Perfect — this error message tells you exactly what’s wrong and how to fix it ✅

Let’s break it down and fix it step by step.


🧩 What happened

You tried to create a Serverless VPC Connector with:

--range 10.8.0.0/28

But GCP responded:

Invalid IP CIDR range was provided. It conflicts with an existing subnetwork.

That means there’s already a subnet overlapping with 10.8.0.0/28 in your default VPC (or maybe a previously failed connector used that range).


🧹 Step 1: Check existing VPC connectors

Run this command to list all connectors in your project and region:

gcloud compute networks vpc-access connectors list \
--region=us-central1

If you see flowpos-vpc or another connector that failed partially, delete it:

gcloud compute networks vpc-access connectors delete flowpos-vpc \
--region=us-central1

🔍 Step 2: Check your existing subnets

You can confirm which subnet ranges are already used in your default VPC:

gcloud compute networks subnets list \
--network=default \
--format="table(name, region, ipCidrRange)"

Look for any ranges that overlap with 10.8.x.x. Avoid those when you choose your new range.
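If you'd rather check for overlap programmatically than by eye, here's a small sketch (IPv4 only; cidrsOverlap is a hypothetical helper, not a gcloud feature):

```javascript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split(".").reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

// Two CIDR blocks overlap exactly when their network addresses match
// at the shorter (less specific) of the two prefix lengths.
function cidrsOverlap(a, b) {
  const [ipA, lenA] = a.split("/");
  const [ipB, lenB] = b.split("/");
  const shorter = Math.min(Number(lenA), Number(lenB));
  const mask = shorter === 0 ? 0 : (~0 << (32 - shorter)) >>> 0;
  return ((ipToInt(ipA) & mask) >>> 0) === ((ipToInt(ipB) & mask) >>> 0);
}

console.log(cidrsOverlap("10.8.0.0/28", "10.8.0.0/24")); // true: the /28 sits inside the /24
console.log(cidrsOverlap("10.9.0.0/28", "10.8.0.0/24")); // false: disjoint networks
```

Paste each candidate connector range against every subnet range from the list above; any `true` means pick a different range.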


✅ Step 3: Create the connector again with a non-overlapping range

Pick any unused /28 range that does not overlap with your VPC subnets.

Example (these usually work fine):

# Try something like 10.9.0.0/28 or 10.123.0.0/28
gcloud compute networks vpc-access connectors create flowpos-vpc \
--region=us-central1 \
--network=default \
--range=10.9.0.0/28

If that still fails, increment further (e.g., 10.10.0.0/28, 10.100.0.0/28, etc.).


💡 Step 4: Verify connector creation

After creation succeeds:

gcloud compute networks vpc-access connectors describe flowpos-vpc \
--region=us-central1 \
--format="value(state)"

You should see:

READY

🚀 Step 5: Update your Cloud Run service

Once the connector is ready:

gcloud run services update flowpos-backend \
--region us-central1 \
--vpc-connector flowpos-vpc \
--vpc-egress all-traffic \
--set-env-vars REDIS_HOST=10.117.86.3,REDIS_PORT=6379

Would you like me to help you pick a safe unused IP range automatically based on your current subnet list (you can paste the output of gcloud compute networks subnets list)?

Good check 👏 — that’s actually expected from Cloud Shell, and it tells us something important: you can’t reach Cloud Memorystore (Redis) directly from Cloud Shell because Cloud Shell is outside your VPC network.

Let’s go over what’s happening and how to confirm that Cloud Run will be able to connect instead.


🧩 Why redis-cli fails from Cloud Shell

Cloud Memorystore instances:

  • Do not have a public IP
  • Are only reachable inside the same VPC network (in your case, default)
  • Require the client (like Cloud Run) to connect via a Serverless VPC Connector

Cloud Shell runs on a Google-managed network, not your VPC — so this fails:

redis-cli -h 10.117.86.3 -p 6379 ping
# => (error) Could not connect to Redis

That’s totally normal.


✅ How to verify connectivity properly

Option 1: Run the test from a Cloud Run service

After you create the VPC connector (flowpos-vpc), do this:

  1. Deploy a temporary Cloud Run test service in the same region and VPC connector:
FROM node:22-alpine

WORKDIR /app
RUN npm install ioredis express

# Create test script
RUN echo "import express from 'express'; \
import Redis from 'ioredis'; \
const app = express(); \
const client = new Redis({ host: process.env.REDIS_HOST, port: process.env.REDIS_PORT }); \
app.get('/', async (req, res) => { \
try { const pong = await client.ping(); res.send('Redis says: ' + pong); } \
catch (e) { console.error(e); res.status(500).send('Failed to connect to Redis'); } \
}); \
app.listen(process.env.PORT || 8080, () => console.log('Listening on port ' + (process.env.PORT || 8080)));" > index.mjs

CMD ["node", "index.mjs"]

Then:

docker build -t us-central1-docker.pkg.dev/barto-dev/flowpos-redis/test:latest .
docker push us-central1-docker.pkg.dev/barto-dev/flowpos-redis/test:latest

gcloud run deploy flowpos-redis-test \
--image us-central1-docker.pkg.dev/barto-dev/flowpos-redis/test:latest \
--region us-central1 \
--vpc-connector flowpos-vpc \
--vpc-egress all-traffic \
--set-env-vars REDIS_HOST=10.117.86.3,REDIS_PORT=6379 \
--allow-unauthenticated

Then fetch the service URL and hit it (the test app replies over HTTP; a plain PING won't show up in the logs):

curl "$(gcloud run services describe flowpos-redis-test \
--region us-central1 --format='value(status.url)')"

You should see:

Redis says: PONG

That confirms the VPC connector works and Redis is reachable.


Option 2: From your real backend

Once your flowpos-backend service is connected to flowpos-vpc, try connecting in your app code like this:

import Redis from "ioredis";

const client = new Redis({
host: process.env.REDIS_HOST,
port: Number(process.env.REDIS_PORT),
// password: process.env.REDIS_PASSWORD, // only if auth enabled
});

client.ping().then(console.log).catch(console.error);

Then check Cloud Run logs:

gcloud run services logs read flowpos-backend --region us-central1

You should see:

PONG
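If that first ping can race container startup or connector warm-up, a small retry helper avoids a noisy crash loop. A sketch — pingWithRetry is a hypothetical helper, shown with a stub client so it runs standalone:

```javascript
// pingWithRetry (hypothetical helper): try client.ping() up to `attempts`
// times with a fixed delay, resolving with the first successful reply or
// rethrowing the last error once all attempts are exhausted.
async function pingWithRetry(client, attempts = 3, delayMs = 200) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await client.ping();
    } catch (err) {
      lastErr = err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastErr;
}

// Stub client that fails twice before answering, to exercise the retry path.
let calls = 0;
const stub = {
  ping: async () => (++calls < 3 ? Promise.reject(new Error("down")) : "PONG"),
};
pingWithRetry(stub).then((reply) => console.log(reply)); // PONG
```

In the real backend you'd pass the ioredis client instead of the stub, and log or exit on the final failure.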

⚡ TL;DR

From where                     | Works? | Why
Cloud Shell                    | ❌     | Not in same VPC
Cloud Run (with VPC connector) | ✅     | Private VPC access
VM in same VPC                 | ✅     | Direct private access

If you want, I can give you a minimal test-redis container you can deploy in one command just to verify connectivity right now — would you like that?