Next Steps: Cloud SQL Integration

⚠️ DEPRECATED: This document is archived and refers to the old flowpos-db instances.
Current Architecture: Database instances have been consolidated. The application now uses:

  • Staging: metabase-db-staging (hosts both flowpos_staging and metabase databases)
  • Production: metabase-db-production (hosts both flowpos_production and metabase databases)

See deploy/gcp/cloud-sql/README.md for current setup instructions.

✅ Completed

  • Cloud SQL instance created (flowpos-db)
  • Databases created (flowpos_staging, flowpos_production)
  • Users created (flowpos_staging, flowpos_production)
  • Permissions granted

📋 Next Steps

Step 1: Get Connection Strings

Your Cloud SQL instance details:

  • Instance: flowpos-db
  • Private IP: 34.60.215.167
  • Project: barto-dev
  • Region: us-central1

Connection Strings:

Staging:

postgresql://flowpos_staging:YOUR_STAGING_PASSWORD@34.60.215.167:5432/flowpos_staging

Production:

postgresql://flowpos_production:YOUR_PRODUCTION_PASSWORD@34.60.215.167:5432/flowpos_production

Replace YOUR_STAGING_PASSWORD and YOUR_PRODUCTION_PASSWORD with the actual passwords you set.
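A connection string like the above can be composed from its parts, which makes it easier to keep the password out of shell history via an environment variable. A minimal sketch (variable names are illustrative; the values are the ones listed in this doc):

```shell
# Compose the staging connection string from its parts.
# Replace DB_PASS with the real password before using it.
DB_USER="flowpos_staging"
DB_PASS="YOUR_STAGING_PASSWORD"
DB_HOST="34.60.215.167"
DB_NAME="flowpos_staging"
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/${DB_NAME}"
echo "$DATABASE_URL"
```

Note that if the password contains characters that are special in URLs (such as `@`, `:`, or `/`), it must be percent-encoded before being placed in the connection string.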

Step 2: Add to GitHub Secrets

  1. Go to your GitHub repository
  2. Navigate to: Settings → Secrets and variables → Actions
  3. For staging environment:
    • Click on "staging" environment (or create it if it doesn't exist)
    • Add/Update secret: DATABASE_URL
    • Value: postgresql://flowpos_staging:PASSWORD@34.60.215.167:5432/flowpos_staging
  4. For production environment:
    • Click on "production" environment (or create it if it doesn't exist)
    • Add/Update secret: DATABASE_URL
    • Value: postgresql://flowpos_production:PASSWORD@34.60.215.167:5432/flowpos_production
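If you prefer the command line, the same environment secrets can be set with the GitHub CLI. A sketch, assuming `gh` is installed, authenticated against this repository, and the environments already exist:

```shell
# Replace PASSWORD with the real passwords before running.
gh secret set DATABASE_URL --env staging \
  --body "postgresql://flowpos_staging:PASSWORD@34.60.215.167:5432/flowpos_staging"
gh secret set DATABASE_URL --env production \
  --body "postgresql://flowpos_production:PASSWORD@34.60.215.167:5432/flowpos_production"
```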

Step 3: Update Deployment Workflows

The workflows have been updated to:

  • Add Cloud SQL instance connection (--add-cloudsql-instances)
  • Use the DATABASE_URL from secrets (already configured)
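For reference, the relevant deploy flags look roughly like this. This is a sketch, not the full workflow: the service name, connector name, and instance connection name are taken from this doc, and `DATABASE_URL` is assumed to be available to the workflow from the environment secret configured in Step 2:

```shell
gcloud run deploy flowpos-backend \
  --region=us-central1 \
  --project=barto-dev \
  --add-cloudsql-instances=barto-dev:us-central1:flowpos-db \
  --vpc-connector=cloudrun-vpc-connector \
  --set-env-vars=DATABASE_URL="$DATABASE_URL"
```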

Step 4: Test Connection (Optional)

Test the connection from your local machine:

# Install Cloud SQL Proxy (if not already installed)
brew install cloud-sql-proxy

# Authenticate
gcloud auth application-default login

# Start proxy
/opt/homebrew/bin/cloud-sql-proxy barto-dev:us-central1:flowpos-db --port=5432 &

# Test connection
psql -h 127.0.0.1 -U flowpos_staging -d flowpos_staging

Step 5: Run Migrations

Migrations run automatically on deployment, or you can run them manually:

Staging:

MIGRATOR_DATABASE_URL="postgresql://flowpos_staging:PASSWORD@34.60.215.167:5432/flowpos_staging" \
pnpm migration:push

Production:

MIGRATOR_DATABASE_URL="postgresql://flowpos_production:PASSWORD@34.60.215.167:5432/flowpos_production" \
pnpm migration:push

Step 6: Deploy

  1. Push your changes to trigger the deployment workflow
  2. The workflow will:
    • Build the Docker image
    • Deploy to Cloud Run
    • Connect to Cloud SQL via VPC connector
    • Use the DATABASE_URL from secrets

🔍 Verification

After deployment, verify:

  1. Check Cloud Run logs:

    gcloud run services logs read flowpos-backend --region=us-central1 --project=barto-dev
  2. Test API endpoint:

    curl https://your-backend-url/api/health
  3. Check database connection in logs: Look for successful database connection messages

🚨 Troubleshooting

Connection Issues

  1. Verify VPC Connector:

    gcloud compute networks vpc-access connectors describe cloudrun-vpc-connector \
    --region=us-central1 --project=barto-dev
  2. Check Cloud SQL instance:

    gcloud sql instances describe flowpos-db --project=barto-dev
  3. Verify private IP:

    gcloud sql instances describe flowpos-db --project=barto-dev \
    --format="value(ipAddresses[0].ipAddress)"

Permission Issues

If you get permission errors, re-run the permission grants (connect as a superuser, e.g. via psql):

GRANT ALL PRIVILEGES ON DATABASE flowpos_staging TO flowpos_staging;
GRANT ALL PRIVILEGES ON DATABASE flowpos_production TO flowpos_production;
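Note that on PostgreSQL 15 and later, `GRANT ... ON DATABASE` alone is not enough to create tables: the user also needs privileges on the `public` schema. A sketch of the extra grants, assuming the Cloud SQL Proxy from Step 4 is running on 127.0.0.1:5432 and you connect as the `postgres` superuser:

```shell
psql -h 127.0.0.1 -U postgres -d flowpos_staging \
  -c 'GRANT ALL ON SCHEMA public TO flowpos_staging;'
psql -h 127.0.0.1 -U postgres -d flowpos_production \
  -c 'GRANT ALL ON SCHEMA public TO flowpos_production;'
```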

📝 Notes

  • The connection uses private IP via VPC connector
  • SSL is required for production connections (your code already handles this)
  • Both staging and production share the same Cloud SQL instance but use separate databases
  • You can split into separate instances later when budget allows