
FlowPOS Restaurant POS — v1 Design Document

Purpose: Authoritative design reference covering gap analysis against existing migrations, KDS & printing system design, and resolution of all architectural concerns. Structured for Claude Code / Cursor via /speckit.specify.


Table of Contents

  1. What Is Already Built
  2. Functional Map — Coverage Status
  3. What Needs to Be Added
  4. KDS & Printing System
  5. Resolved Architectural Concerns
  6. Complete DB Changes
  7. Application-Layer Gaps
  8. Module Structure
  9. API Endpoints
  10. Implementation Ordering
  11. Unit Test Coverage Thresholds
  12. Open Question
  13. Speckit Command

1. What Is Already Built

Tables confirmed existing (base restaurant module)

Table | Key Columns Known
dining_table | id, business_id, location_id, dining_area_id, status, capacity, alias
dining_area | Floor plan zones/sections per location
order | Full order entity; guest_count, table_alias_snapshot
order_item | Line items; seat_no, hold_until_fired, fired_at, hold_reason, voided_at, void_reason, voided_by, comp_reason, comped_by, comped_at
kitchen_station | Station definition; printer_type, printer_url
product_station_assignment | Category→station routing; modifier_routing JSONB, dining_area_id

Tables created in uploaded migrations

Table | Summary
order_guest | Per-seat aliases, notes, allergies_jsonb; unique on (order_id, seat_no)
order_party | Server/runner/bartender role assignments; PK (order_id, role)
menu | Named menus with available_from/to time windows + metadata JSONB
menu_item | product_id × menu_id; sort_order, is_available, per-item time windows
location_menu_assignment | Which menus are active at each location
reservation | reserved_at, party_size, customer_id, table_id, status enum, notes
waitlist_entry | party_size, notify_phone/email, estimated_wait_minutes, status enum
price_rule | day_of_week JSONB, start/end_time, discount_type/value, product_ids JSONB, category_ids JSONB

Parameters registered

Code | Type | Purpose
RESTAURANT_SEND_TO_KITCHEN_MODE | 'manual' or 'auto' | Controls fire behavior per location
RESTAURANT_REQUIRE_MANAGER_FOR_VOID_COMP | boolean | Manager auth gate for voids/comps

Module + UI forms (14 registered)

restaurantDashboard, restaurantDining, restaurantOrders, restaurantKds, restaurantExpo, restaurantKitchenLoad, restaurantPackingScanner, restaurantKitchenStations, restaurantExternalPlatformMappings, restaurantReservations, restaurantWaitlist, restaurantPriceRules, restaurantMenus, restaurantReports

Available from base platform

  • product with inventory_type enum including raw_material
  • inventory + inventory_ledger — per-location stock with full ledger
  • document_counter — sequential numbering with reset_frequency support
  • parameter_catalog / entity_parameter — config system
  • sale with full FEL fields — settlement and invoicing
  • customer, payment_method, currency, tax_definition
  • Redis — already in stack via BullMQ (required for KDS horizontal scaling)

2. Functional Map — Coverage Status

Area | Status | Notes
Table management / floor plan | ✅ Done | dining_table, dining_area
Table alias & seat assignments | ✅ Done | dining_table.alias, order_item.seat_no, order_guest
Order lifecycle | ✅ Done | order, order_item
Role assignment per order | ✅ Done | order_party
Kitchen stations & printer routing | ✅ Done | kitchen_station + product_station_assignment
Course timing / hold & fire | ✅ Done | hold_until_fired, fired_at, hold_reason on order_item
Comps & voids with manager auth | ✅ Done | Columns on order_item + parameter
Send-to-kitchen mode (manual/auto) | ✅ Done | Parameter seeded
Menu management with scheduling | ✅ Done | menu, menu_item, location_menu_assignment
Happy Hour / time-based pricing | ✅ Done | price_rule fully capable
Reservations | ✅ Done | reservation
Waitlist | ✅ Done | waitlist_entry
Guest allergies & notes per seat | ✅ Done | order_guest.allergies_jsonb
Sequential order numbering | ✅ Done | document_counter reusable
FEL invoice on settlement | ✅ Done | sale + FEL fields; hook in app layer
Ingredient stock tracking | ✅ Done | product (raw_material) + inventory
Modifier groups & modifiers | ❌ Missing | Normalized tables — see §3.1
Bill of Materials (recipe) | ❌ Missing | Normalized table — see §3.2
"86" global product disable | ❌ Incomplete | menu_item.is_available is per-menu — see §3.3
Shift / cash management | ❌ Missing | New shift table — see §3.4
Split bill | ❌ Missing | JSONB on order — see §3.5
Combo / Meal deals | ❌ Missing | JSONB on product — see §3.6
External platform mapping | ⚠️ Form only | Table missing — see §3.7
KDS + multi-cast printing | ⚠️ Partial | Extended schema + app layer — see §4
BOM deduction on settle | ⚠️ App only | Event handler missing
Reporting gaps | ⚠️ App only | P-Mix, depletion, shift summary, turnaround

3. What Needs to Be Added

3.1 Modifier Groups & Modifiers — Normalized Tables

Decision: Normalized tables, not JSONB.

product_station_assignment.modifier_routing already stores modifier option IDs to route specific modifier selections to stations. If modifier IDs lived in a JSONB blob, any product edit regenerating them would silently break all station routing for that product with no DB-level protection. Normalized tables give stable UUIDs that modifier_routing can safely reference across its lifetime.

DeleteModifier use-case must check modifier_routing references before deletion.

CREATE TABLE modifier_group (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  business_id UUID NOT NULL REFERENCES business(id) ON DELETE CASCADE,
  product_id UUID NOT NULL REFERENCES product(id) ON DELETE CASCADE,
  name VARCHAR NOT NULL,
  is_required BOOLEAN NOT NULL DEFAULT false,
  min_selections INTEGER NOT NULL DEFAULT 0,
  max_selections INTEGER NOT NULL DEFAULT 1,
  sort_order INTEGER NOT NULL DEFAULT 0,
  is_active BOOLEAN NOT NULL DEFAULT true,
  created_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
  created_by UUID REFERENCES "user"(id) ON DELETE SET NULL,
  updated_at TIMESTAMPTZ,
  updated_by UUID REFERENCES "user"(id) ON DELETE SET NULL
);

CREATE TABLE modifier (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  modifier_group_id UUID NOT NULL REFERENCES modifier_group(id) ON DELETE CASCADE,
  business_id UUID NOT NULL REFERENCES business(id) ON DELETE CASCADE,
  name VARCHAR NOT NULL,
  price_adjustment NUMERIC NOT NULL DEFAULT 0,
  ingredient_product_id UUID REFERENCES product(id) ON DELETE SET NULL,
  ingredient_qty_delta NUMERIC, -- positive = add, negative = remove
  is_default BOOLEAN NOT NULL DEFAULT false,
  is_available BOOLEAN NOT NULL DEFAULT true,
  sort_order INTEGER NOT NULL DEFAULT 0,
  created_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
  created_by UUID REFERENCES "user"(id) ON DELETE SET NULL,
  updated_at TIMESTAMPTZ,
  updated_by UUID REFERENCES "user"(id) ON DELETE SET NULL
);

-- Immutable snapshot of selected modifiers per order item
CREATE TABLE order_item_modifier (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  order_item_id UUID NOT NULL REFERENCES order_item(id) ON DELETE CASCADE,
  modifier_id UUID NOT NULL REFERENCES modifier(id) ON DELETE RESTRICT,
  modifier_group_id UUID NOT NULL REFERENCES modifier_group(id) ON DELETE RESTRICT,
  name_snapshot VARCHAR NOT NULL, -- protects history from future name changes
  price_snapshot NUMERIC NOT NULL
);

CREATE INDEX idx_modifier_group_product ON modifier_group(product_id);
CREATE INDEX idx_modifier_modifier_group ON modifier(modifier_group_id);
CREATE INDEX idx_order_item_modifier_item ON order_item_modifier(order_item_id);

modifier_routing JSONB maps modifier.id → kitchen_station.id:

{ "modifier-uuid-1": "station-uuid-grill", "modifier-uuid-2": "station-uuid-bar" }

Operational constraint — modifier deletion: order_item_modifier uses ON DELETE RESTRICT on modifier_id and modifier_group_id. This is intentional for referential integrity, but it means a modifier that was ever selected on any historical order cannot be hard-deleted. This will surface the first time a client tries to remove a seasonal modifier group months after launch.

The required pattern is soft deletion: set modifier.is_active = false and modifier_group.is_active = false. The UI must filter out inactive modifiers from the ordering flow. DeleteModifier and DeleteModifierGroup use-cases must:

  1. Check modifier_routing references (existing guard)
  2. Check order_item_modifier rows — if any exist, soft-delete (is_active = false) rather than hard-delete, and return 200 with a warning body indicating the modifier was archived, not deleted (a 204 response cannot carry a body)

This must be documented in the PWA UI as "Archive" rather than "Delete" for modifiers that have order history.
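The archive-vs-delete decision above can be sketched as a pure function. This is illustrative only — the type and function names (DeleteAction, resolveModifierDeletion) are hypothetical, not from the codebase:

```typescript
// Hypothetical sketch of the DeleteModifier decision flow described above.
type DeleteAction =
  | { kind: 'reject'; reason: string }
  | { kind: 'soft_delete' }  // set is_active = false, report "archived"
  | { kind: 'hard_delete' };

function resolveModifierDeletion(opts: {
  referencedByStationRouting: boolean; // modifier_routing JSONB check (existing guard)
  orderHistoryCount: number;           // count of order_item_modifier rows
}): DeleteAction {
  if (opts.referencedByStationRouting) {
    return { kind: 'reject', reason: 'modifier is referenced by station routing' };
  }
  if (opts.orderHistoryCount > 0) {
    return { kind: 'soft_delete' }; // archive: order history must keep its FK target
  }
  return { kind: 'hard_delete' };
}
```

The same function can back both DeleteModifier and DeleteModifierGroup, with the group variant aggregating history counts across its modifiers.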


3.2 Recipe / Bill of Materials — Normalized Join Table

Decision: Normalized recipe table, not JSONB.

Two problems drove this: no FK enforcement means a deleted ingredient silently breaks deductions; and the depletion report aggregates recipe lines across thousands of order_item rows — jsonb_array_elements at 300+ covers/day is unindexable and becomes a bottleneck. A join table gives ON DELETE RESTRICT and makes depletion reporting a standard indexed aggregate.

DeleteProduct (ingredient) use-case must check recipe.ingredient_product_id before deletion.

CREATE TABLE recipe (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  product_id UUID NOT NULL REFERENCES product(id) ON DELETE CASCADE,
  ingredient_product_id UUID NOT NULL REFERENCES product(id) ON DELETE RESTRICT,
  quantity NUMERIC NOT NULL,
  unit_of_measure_id UUID NOT NULL REFERENCES unit_of_measure(id),
  is_optional BOOLEAN NOT NULL DEFAULT false,
    -- optional = removable via "No [X]" modifier without changing base price
  created_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
  created_by UUID REFERENCES "user"(id) ON DELETE SET NULL,
  updated_at TIMESTAMPTZ,
  updated_by UUID REFERENCES "user"(id) ON DELETE SET NULL,
  UNIQUE (product_id, ingredient_product_id)
);

CREATE INDEX idx_recipe_product ON recipe(product_id);
CREATE INDEX idx_recipe_ingredient ON recipe(ingredient_product_id);

Point-in-time snapshot on order_item — JSONB here because it's a historical receipt copy that is never aggregated, only displayed:

ALTER TABLE order_item ADD COLUMN recipe_snapshot_jsonb JSONB DEFAULT '[]'::jsonb;
[
  {
    "ingredientProductId": "uuid",
    "ingredientName": "Carne molida",
    "quantity": 0.150,
    "unitAbbreviation": "kg",
    "isOptional": false,
    "modifierQtyDelta": -0.150
  }
]
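As an illustration of how settlement could consume this snapshot (the shape mirrors the JSON above; the function name computeDeductions is hypothetical), BOM deduction is the base quantity plus any modifier delta, multiplied by the line quantity, clamped at zero for fully removed ingredients:

```typescript
// Sketch of BOM deduction from recipe_snapshot_jsonb; assumed shapes, not the real service.
interface RecipeSnapshotLine {
  ingredientProductId: string;
  quantity: number;          // base recipe quantity per unit sold
  modifierQtyDelta?: number; // e.g. -0.150 for "No carne", +0.050 for an extra
}

function computeDeductions(
  lines: RecipeSnapshotLine[],
  itemQuantity: number,
): Map<string, number> {
  const deductions = new Map<string, number>();
  for (const line of lines) {
    const perUnit = Math.max(0, line.quantity + (line.modifierQtyDelta ?? 0));
    if (perUnit === 0) continue; // ingredient removed entirely; nothing to deduct
    deductions.set(
      line.ingredientProductId,
      (deductions.get(line.ingredientProductId) ?? 0) + perUnit * itemQuantity,
    );
  }
  return deductions;
}
```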

3.3 Global "86" Flag — Column on product

Decision: product.is_86d BOOLEAN, separate from menu_item.is_available.

menu_item.is_available is per-row. A product on three menus requires three separate toggles — no single lever exists. The functional map requires a global emergency disable across all terminals and all menus simultaneously.

ALTER TABLE product ADD COLUMN is_86d BOOLEAN NOT NULL DEFAULT false;
  • is_86d = true → globally unavailable regardless of menu_item.is_available
  • is_86d = false → falls through to per-menu menu_item.is_available as before
  • Setting is_86d broadcasts product:86 to all location terminals via KDS gateway
  • Un-86ing resets to false; individual menu_item.is_available flags are untouched
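The resolution rule in the bullets above reduces to a two-line check (a minimal sketch; the function name isOrderable is illustrative):

```typescript
// Global 86 flag wins; otherwise the per-menu flag decides, exactly as before.
function isOrderable(
  product: { is86d: boolean },
  menuItem: { isAvailable: boolean },
): boolean {
  if (product.is86d) return false; // global emergency disable across all menus
  return menuItem.isAvailable;     // per-menu toggle, untouched by un-86ing
}
```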

3.4 Shift / Cash Management — New Table

CREATE TABLE shift (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  business_id UUID NOT NULL REFERENCES business(id) ON DELETE CASCADE,
  location_id UUID NOT NULL REFERENCES location(id) ON DELETE CASCADE,
  staff_id UUID NOT NULL REFERENCES "user"(id) ON DELETE CASCADE,
  status VARCHAR NOT NULL DEFAULT 'open', -- 'open' | 'closed'
  opened_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
  closed_at TIMESTAMPTZ,
  opening_float NUMERIC NOT NULL DEFAULT 0,
  declared_cash NUMERIC, -- blind drop: entered without seeing expected
  expected_cash NUMERIC, -- opening_float + sum of cash sales in shift window
  cash_variance NUMERIC, -- declared − expected; populated on close
  notes TEXT,
  created_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
  created_by UUID REFERENCES "user"(id) ON DELETE SET NULL,
  updated_at TIMESTAMPTZ,
  updated_by UUID REFERENCES "user"(id) ON DELETE SET NULL
);

CREATE INDEX idx_shift_location_status ON shift(location_id, status);

Parameter to seed:

RESTAURANT_REQUIRE_SHIFT_OPEN_TO_SELL | boolean | default: false
Blocks order creation if no open shift exists for the current staff member
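The close computation implied by the column comments above is simple enough to pin down in a sketch (closeShiftTotals is a hypothetical name; the arithmetic follows the schema comments):

```typescript
// Blind-drop close: expected_cash = opening_float + cash sales in the shift window;
// cash_variance = declared − expected. Populated when status flips to 'closed'.
function closeShiftTotals(opts: {
  openingFloat: number;
  cashSalesTotal: number; // sum of cash payments during the shift
  declaredCash: number;   // blind drop entered by staff without seeing expected
}): { expectedCash: number; cashVariance: number } {
  const expectedCash = opts.openingFloat + opts.cashSalesTotal;
  return { expectedCash, cashVariance: opts.declaredCash - expectedCash };
}
```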

3.5 Split Bill — JSONB on order

JSONB is appropriate: always read/written with the parent order, never queried independently. Check items reference order_item.id values within the same order — no external FK risk.

ALTER TABLE "order" ADD COLUMN split_config_jsonb JSONB;

Schema (null when unsplit):

{
  "splitType": "by_item",
  "checks": [
    {
      "checkId": "uuid-v4",
      "label": "Mesa A",
      "itemIds": ["order_item_id_1", "order_item_id_2"],
      "subtotal": 85.00,
      "isPaid": false,
      "paidAt": null,
      "paymentMethodId": null
    }
  ]
}
```

splitType: by_item | equal | by_seat
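Because the check items are soft references into the same order, the application layer must validate coverage. A minimal sketch for the by_item case (validateByItemSplit is a hypothetical name): every order item must land on exactly one check.

```typescript
interface SplitCheck { checkId: string; itemIds: string[] }

// Returns human-readable errors; empty array means the split is valid.
function validateByItemSplit(allItemIds: string[], checks: SplitCheck[]): string[] {
  const errors: string[] = [];
  const seen = new Map<string, number>();
  for (const check of checks)
    for (const id of check.itemIds) seen.set(id, (seen.get(id) ?? 0) + 1);
  for (const id of allItemIds) {
    const n = seen.get(id) ?? 0;
    if (n === 0) errors.push(`item ${id} is not on any check`);
    if (n > 1) errors.push(`item ${id} appears on ${n} checks`);
  }
  return errors;
}
```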


3.6 Combo / Meal Deals — JSONB on product

JSONB is appropriate: no routing dependencies, no FK requirements, always read with the product. The combo definition holds soft references used only for display and expansion logic in AddItemsToOrder.

ALTER TABLE product ADD COLUMN combo_items_jsonb JSONB;

Schema (null when not a combo):

[
  { "productId": "uuid", "productName": "Hamburguesa", "quantity": 1 },
  { "productId": "uuid", "productName": "Papas fritas", "quantity": 1 },
  { "productId": "uuid", "productName": "Bebida", "quantity": 1 }
]

When a combo is added to an order, AddItemsToOrder expands it into individual order_item rows for KDS routing and BOM deduction.
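The expansion step can be sketched as follows (shapes mirror combo_items_jsonb; expandCombo is an illustrative name, not the real AddItemsToOrder internals):

```typescript
interface ComboComponent { productId: string; productName: string; quantity: number }

// Expands a combo into one order_item input per component, scaled by how many
// combos were ordered, so KDS routing and BOM deduction operate on real products.
function expandCombo(
  comboQuantity: number,
  comboItems: ComboComponent[] | null, // null when the product is not a combo
): { productId: string; quantity: number }[] {
  if (!comboItems) return []; // not a combo; caller adds the product itself
  return comboItems.map(c => ({
    productId: c.productId,
    quantity: c.quantity * comboQuantity,
  }));
}
```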


3.7 External Platform Mapping — New Table

Independently queryable by platform + external ID for incoming webhook matching. Cannot be collapsed into JSONB.

CREATE TABLE external_platform_mapping (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  business_id UUID NOT NULL REFERENCES business(id) ON DELETE CASCADE,
  location_id UUID NOT NULL REFERENCES location(id) ON DELETE CASCADE,
  platform VARCHAR NOT NULL, -- 'uber_eats' | 'rappi' | 'doordash'
  internal_entity_type VARCHAR NOT NULL, -- 'product' | 'category' | 'modifier'
  internal_entity_id UUID NOT NULL,
  external_id VARCHAR NOT NULL,
  external_name VARCHAR,
  metadata_jsonb JSONB DEFAULT '{}'::jsonb,
  is_active BOOLEAN NOT NULL DEFAULT true,
  created_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
  created_by UUID REFERENCES "user"(id) ON DELETE SET NULL,
  updated_at TIMESTAMPTZ,
  updated_by UUID REFERENCES "user"(id) ON DELETE SET NULL,
  UNIQUE (business_id, platform, internal_entity_type, internal_entity_id)
);

CREATE INDEX idx_external_platform_mapping_lookup
ON external_platform_mapping(platform, external_id, location_id);

4. KDS & Printing System

4.1 Resolved Concerns

# | Concern | Resolution
1 | Cloud Run horizontal scaling breaks WebSocket events | Socket.IO Redis adapter wired at gateway bootstrap
2 | bcrypt too slow for device token verification | SHA-256 — sufficient for high-entropy UUID tokens
3 | kds_pairing_code DB table unnecessary | Redis TTL key (SET ... EX 600 NX) — Redis already in stack
4 | Circular fallback reference unguarded | UpdateStation use-case validates no cycle; routing engine caps at one hop
5 | additional_station_ids stale IDs drop tickets silently | DeleteStation use-case guards against active references
6 | findPendingByStation requires complex routing re-evaluation | kitchen_ticket table written at fire time; reconnect is a simple indexed query

4.2 Extend kitchen_station

ALTER TABLE kitchen_station
  ADD COLUMN output_type VARCHAR NOT NULL DEFAULT 'printer',
    -- 'printer' | 'kds' | 'both'
  ADD COLUMN printer_config_jsonb JSONB DEFAULT '{}'::jsonb,
  ADD COLUMN fallback_station_id UUID REFERENCES kitchen_station(id) ON DELETE SET NULL,
    -- circular reference prevented by UpdateStation use-case guard, not DB constraint
  ADD COLUMN printer_status VARCHAR DEFAULT 'unknown',
    -- 'online' | 'offline' | 'unknown'
  ADD COLUMN printer_checked_at TIMESTAMPTZ,
  ADD COLUMN is_active BOOLEAN NOT NULL DEFAULT true,
  ADD COLUMN sort_order INTEGER NOT NULL DEFAULT 0;

printer_config_jsonb schema:

{
  "paperWidthMm": 80,
  "copyCount": 1,
  "encoding": "utf8",
  "cutAfterEach": true,
  "openCashDrawer": false,
  "headerLines": ["Restaurante El Patio", "Zona 10"],
  "fontSize": "normal"
}

4.3 Extend product_station_assignment — Multi-Cast

ALTER TABLE product_station_assignment
  ADD COLUMN additional_station_ids JSONB DEFAULT '[]'::jsonb;
-- All stations in this array receive a ticket copy in addition to the primary station
-- e.g. Hamburguesa → grill (primary) + expo (additional)

DeleteStation use-case must scan additional_station_ids across all assignments for the being-deleted station ID and reject if found, preventing silent ticket drops.
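The scan described above is a pure check over the loaded assignments (findMulticastReferences and the Assignment shape are illustrative):

```typescript
interface StationAssignment { id: string; additionalStationIds: string[] }

// Returns the IDs of assignments that still multicast to this station.
// A non-empty result means DeleteStation must reject to avoid silent ticket drops.
function findMulticastReferences(
  stationId: string,
  assignments: StationAssignment[],
): string[] {
  return assignments
    .filter(a => a.additionalStationIds.includes(stationId))
    .map(a => a.id);
}
```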


4.4 New Table: kitchen_ticket

Written by OnOrderSentToKitchen — one row per (order_item, station) at fire time. Resolves the reconnect problem: pending tickets are a simple indexed query, not a full routing re-evaluation. Also enables ticket aging timers, bump history, and table turnaround time reporting.

CREATE TABLE kitchen_ticket (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  order_id UUID NOT NULL REFERENCES "order"(id) ON DELETE CASCADE,
  order_item_id UUID NOT NULL REFERENCES order_item(id) ON DELETE CASCADE,
  kitchen_station_id UUID NOT NULL REFERENCES kitchen_station(id) ON DELETE CASCADE,
  business_id UUID NOT NULL REFERENCES business(id) ON DELETE CASCADE,
  location_id UUID NOT NULL REFERENCES location(id) ON DELETE CASCADE,
  status VARCHAR NOT NULL DEFAULT 'pending',
    -- 'pending' | 'ready' | 'bumped' | 'voided'
  fired_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
  ready_at TIMESTAMPTZ,
  bumped_at TIMESTAMPTZ,
  bumped_by UUID REFERENCES "user"(id) ON DELETE SET NULL,
  ticket_data_jsonb JSONB NOT NULL
    -- snapshot: item name, modifiers, notes, seat_no, course, order_number, table_alias
);

CREATE INDEX idx_kitchen_ticket_station_status
ON kitchen_ticket(kitchen_station_id, status);
CREATE INDEX idx_kitchen_ticket_order
ON kitchen_ticket(order_id);

Post-fire item modification policy: When an order item is modified after its kitchen_ticket has already been written (e.g. a customer adds a modifier after the ticket is on the KDS screen), the handler must:

  1. Void the existing kitchen_ticket row (status = 'voided') for that item+station
  2. Write a new kitchen_ticket row with the updated ticket_data_jsonb
  3. Add "isModification": true and "modifiedAt": "{timestamp}" to ticket_data_jsonb so the KDS screen can display a visual "MODIFIED" flag

The KDS client should render modified tickets with a distinct colour (e.g. amber border) to alert the chef that this item replaced a previous ticket. The old voided row is retained for the turnaround time report.

Do NOT update the existing row in-place — the immutable-append pattern keeps the full audit trail and prevents the KDS screen from missing the modification event if the update arrives between a bump and a reconnect.
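The void-and-append step can be expressed as a pure function producing both writes (buildModificationPair and the simplified shapes are illustrative, not the real repository API):

```typescript
interface TicketData { [key: string]: unknown }

// Immutable-append: void the pending row, emit a replacement flagged as a modification.
function buildModificationPair(
  pendingTicketId: string,
  updatedData: TicketData,
  nowIso: string,
): {
  voidTicket: { id: string; status: 'voided' };
  newTicket: { status: 'pending'; ticketDataJsonb: TicketData };
} {
  return {
    voidTicket: { id: pendingTicketId, status: 'voided' }, // retained for turnaround report
    newTicket: {
      status: 'pending',
      ticketDataJsonb: { ...updatedData, isModification: true, modifiedAt: nowIso },
    },
  };
}
```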

ticket_data_jsonb schema:

{
  "orderItemId": "uuid",
  "orderNumber": 42,
  "orderType": "dine_in",
  "tableAlias": "T-04",
  "seatNo": 2,
  "itemName": "Hamburguesa Especial",
  "quantity": 1,
  "modifiers": [
    { "groupName": "Término", "optionName": "Tres cuartos" },
    { "groupName": "Extras", "optionName": "Sin cebolla" }
  ],
  "notes": "Sin gluten si es posible",
  "courseNumber": 2,
  "holdUntilFired": false,
  "isModification": false,
  "modifiedAt": null
}

orderItemId is included so the frontend can match modification tickets and order:item:ready events by item identity rather than by kitchen_ticket.id (which changes on each new row). isModification and modifiedAt are also included here so the WebSocket event payload is self-contained — the frontend does not need a separate field on the event wrapper.


4.5 New Table: kds_device

Physical KDS screen (tablet, monitor, Raspberry Pi) registered to a station. Uses SHA-256 token hash — not bcrypt, which is intentionally slow for passwords but unnecessary overhead for high-entropy UUID tokens on frequent reconnects.

CREATE TABLE kds_device (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  business_id UUID NOT NULL REFERENCES business(id) ON DELETE CASCADE,
  location_id UUID NOT NULL REFERENCES location(id) ON DELETE CASCADE,
  kitchen_station_id UUID NOT NULL REFERENCES kitchen_station(id) ON DELETE CASCADE,
  device_name VARCHAR NOT NULL,
  device_token_hash VARCHAR NOT NULL UNIQUE, -- SHA-256 hex of the device token
  last_seen_at TIMESTAMPTZ,
  is_active BOOLEAN NOT NULL DEFAULT true,
  registered_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
  registered_by UUID REFERENCES "user"(id) ON DELETE SET NULL
);

CREATE INDEX idx_kds_device_station ON kds_device(kitchen_station_id);
CREATE INDEX idx_kds_device_location ON kds_device(location_id, is_active);

4.6 Device Pairing — Redis TTL Key

Redis is already in the stack via BullMQ. No DB table needed for a 10-minute ephemeral code — a TTL key is the right tool.

Key:   kds:pair:{6-digit-code}
Value: {stationId}:{locationId}:{businessId}
TTL:   600 seconds
NX:    prevents overwriting an active code if two managers generate simultaneously
// GenerateKdsPairingCodeUseCase
const code = randomInt(100000, 999999).toString();
await redis.set(`kds:pair:${code}`, `${stationId}:${locationId}:${businessId}`,
'EX', 600, 'NX');
return { code, expiresInSeconds: 600 };

// RegisterKdsDeviceUseCase
const value = await redis.getdel(`kds:pair:${pairingCode}`);
if (!value) throw new InvalidPairingCodeException();
const [stationId, locationId, businessId] = value.split(':');
const deviceToken = randomUUID();
const tokenHash = createHash('sha256').update(deviceToken).digest('hex');
await this.kdsDeviceRepo.create({ businessId, locationId, stationId, tokenHash });
return { deviceToken }; // returned once, never stored in plain form

Device recovery (tablet wiped / replaced): The deviceToken is returned once and never recoverable — this is the correct security posture. When a tablet is reset or replaced, staff must:

  1. Manager opens PWA → Kitchen Stations → Devices → Deregister the old device (sets kds_device.is_active = false, any active WebSocket connection is disconnected by the gateway on next heartbeat)
  2. Manager generates a new pairing code for the same station
  3. Staff enter the code on the replacement tablet

The DeregisterKdsDevice use-case must also call this.kdsGateway.disconnectDevice(deviceId) to force-disconnect any lingering session from the old device. This is a two-minute UI addition that must be in scope — a kitchen tablet being dropped and reset on a Friday service is a real scenario, not an edge case.


4.7 Multi-Tenant WebSocket Isolation

Room naming:

rst:{businessId}:{locationId}:{stationId}   — station-specific (KDS devices)
rst:{businessId}:{locationId}:all — all terminals at this branch

Room names include businessId — structural isolation is guaranteed at the framework level. A message to rst:tenant-A:... cannot reach rst:tenant-B:... regardless of shared infrastructure. No application-layer filtering needed.

Cloud Run horizontal scaling — Socket.IO Redis adapter (blocker):

Without this, events emitted on instance A never reach KDS devices on instance B. Wire before any connection is accepted:

import { createAdapter } from '@socket.io/redis-adapter';
import { createClient } from 'redis';

const pubClient = createClient({ url: process.env.REDIS_URL });
const subClient = pubClient.duplicate();
await Promise.all([pubClient.connect(), subClient.connect()]);
const io = app.get(Server);
io.adapter(createAdapter(pubClient, subClient));

All room emit calls remain identical — the adapter handles fan-out transparently.


4.8 KdsGateway

@WebSocketGateway({ namespace: '/kds', cors: { origin: '*' } })
export class KdsGateway implements OnGatewayConnection, OnGatewayDisconnect {

  async handleConnection(client: Socket) {
    const token = client.handshake.headers.authorization?.replace('Bearer ', '');
    if (!token) return client.disconnect();

    // Device token path
    const tokenHash = createHash('sha256').update(token).digest('hex');
    const device = await this.kdsDeviceRepo.findByTokenHash(tokenHash);

    if (device?.isActive) {
      client.data = { type: 'device', ...device };
      await client.join([
        `rst:${device.businessId}:${device.locationId}:${device.kitchenStationId}`,
        `rst:${device.businessId}:${device.locationId}:all`,
      ]);
      await this.kdsDeviceRepo.updateLastSeen(device.id);
      // Deliver pending tickets — simple indexed query on kitchen_ticket table
      const pending = await this.kitchenTicketRepo
        .findPendingByStation(device.kitchenStationId);
      client.emit('pending_tickets', pending);
      return;
    }

    // Firebase token path (POS terminal / manager)
    const claims = await this.firebaseAuth.verifyIdToken(token).catch(() => null);
    if (!claims) return client.disconnect();
    const { businessId, locationId } = extractRestaurantClaims(claims);
    await client.join(`rst:${businessId}:${locationId}:all`);
    client.data = { type: 'terminal', businessId, locationId };
  }

  handleDisconnect(_client: Socket) {
    // Socket.IO cleans up room memberships automatically
  }
}

4.9 Ticket Routing Engine

TicketRoutingService is a pure domain service — no I/O, 100% testable.

Routing priority (highest to lowest):

1. modifier_routing JSONB    — specific modifier selection overrides primary station
2. dining_area_id — area-specific assignment (bar, terrace)
3. primary kitchen_station_id — base station for this product
4. additional_station_ids — always receives a copy (multi-cast e.g. expo)
5. fallback_station_id — used if winning station has printer_status = 'offline'
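The priority list above can be condensed into a sketch of the pure service. The shapes are simplified and illustrative (the real TicketRoutingService also considers orderType and per-item context):

```typescript
interface Assignment {
  diningAreaId: string | null;             // null = generic assignment
  kitchenStationId: string;                // priority 3: primary station
  additionalStationIds: string[];          // priority 4: always receive a copy
  modifierRouting: Record<string, string>; // priority 1: modifier.id → station.id
}
interface Station { id: string; printerStatus: string; fallbackStationId: string | null }

function resolveStations(
  modifierIds: string[],
  diningAreaId: string | null,
  assignments: Assignment[], // all assignments for this product's category
  stations: Station[],
): string[] {
  // Priorities 2–3: area-specific assignment wins over the generic one
  const assignment =
    assignments.find(a => a.diningAreaId === diningAreaId && a.diningAreaId !== null) ??
    assignments.find(a => a.diningAreaId === null);
  if (!assignment) return [];

  // Priority 1: a matching modifier route overrides the primary station
  let primaryId = assignment.kitchenStationId;
  for (const id of modifierIds) {
    const override = assignment.modifierRouting[id];
    if (override) { primaryId = override; break; }
  }

  // Priority 5: one-hop fallback if the winning station's printer is offline
  const primary = stations.find(s => s.id === primaryId);
  if (primary?.printerStatus === 'offline' && primary.fallbackStationId) {
    primaryId = primary.fallbackStationId; // do NOT follow further hops
  }

  // Priority 4: additional stations always receive a copy
  return [...new Set([primaryId, ...assignment.additionalStationIds])];
}
```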

Circular fallback guard in UpdateStation use-case:

async validateNoFallbackCycle(
  stationId: string,
  proposedFallbackId: string,
  allStations: KitchenStation[],
): Promise<void> {
  let current: string | undefined = proposedFallbackId;
  const visited = new Set<string>([stationId]);
  while (current) {
    if (visited.has(current)) throw new CircularFallbackException();
    visited.add(current);
    current = allStations.find(s => s.id === current)?.fallbackStationId ?? undefined;
  }
}

Routing engine caps fallback at one hop explicitly:

if (primaryStation?.printerStatus === 'offline' && primaryStation?.fallbackStationId) {
  primaryStationId = primaryStation.fallbackStationId;
  // Do NOT follow fallback.fallbackStationId — one hop only
}

4.10 OnOrderSentToKitchen Event Handler

Transaction boundary: The handler must not write kitchen_ticket rows and call WebSocket/printer adapters in a single loop without a durability strategy. If the loop fails halfway through (station 1 succeeds, station 2 throws), the order is in a partially-fired state with no recovery path — some stations have tickets, others don't. WebSocket and printer calls are also side effects that cannot be rolled back.

Decision: dispatch one BullMQ job per (order_item × station) pair. Each job is independent, retriable, and atomic. The DB write (kitchen_ticket insert) and the side effects (KDS push + print) happen within a single job execution. If it fails, it retries with the standard BullMQ backoff. This matches how the rest of the platform handles durable side effects and guarantees that a network blip during fire never leaves an order partially delivered to the kitchen.

// OnOrderSentToKitchen: dispatches jobs, does not do I/O itself
@EventHandler(OrderSentToKitchenEvent)
export class OnOrderSentToKitchenHandler {
  async handle(event: OrderSentToKitchenEvent): Promise<void> {
    const items = await this.orderItemRepo.findByOrderId(event.orderId);
    const assignments = await this.stationAssignmentRepo.findByLocation(event.locationId);
    const stations = await this.kitchenStationRepo.findByLocation(event.locationId);

    for (const item of items) {
      if (item.holdUntilFired) continue;

      const modifierIds = await this.orderItemModifierRepo
        .findByOrderItem(item.id).then(r => r.map(m => m.modifierId));

      const stationIds = this.ticketRoutingService.resolveStations(
        item, modifierIds, event.diningAreaId, event.orderType, assignments, stations,
      );

      for (const stationId of stationIds) {
        // Dispatch one independent, retriable job per station
        await this.queue.add('deliver-kitchen-ticket', {
          orderId: event.orderId,
          orderItemId: item.id,
          stationId,
          businessId: event.businessId,
          locationId: event.locationId,
          ticketData: this.buildTicketData(item, event),
        }, { attempts: 3, backoff: { type: 'exponential', delay: 2000 } });
      }
    }
  }
}

// DeliverKitchenTicketJob: atomic per station — DB write + side effects
@Processor('restaurant')
export class DeliverKitchenTicketJob {
  @Process('deliver-kitchen-ticket')
  async run(job: Job<DeliverKitchenTicketPayload>) {
    const { orderId, orderItemId, stationId, businessId, locationId, ticketData } = job.data;
    const station = await this.kitchenStationRepo.findById(stationId);
    if (!station?.isActive) return;

    // Write kitchen_ticket row — idempotent via unique constraint on
    // (order_item_id, station_id) where status = 'pending'
    await this.kitchenTicketRepo.createIfNotExists({
      orderId, orderItemId, kitchenStationId: stationId,
      businessId, locationId, ticketDataJsonb: ticketData,
    });

    if (['kds', 'both'].includes(station.outputType)) {
      await this.kdsNotification.sendToStation(businessId, locationId, stationId, ticketData);
    }

    if (['printer', 'both'].includes(station.outputType)) {
      const result = await this.printerFactory.create(station)
        .printKitchenTicket(ticketData, station);
      if (!result.success) {
        await this.kitchenStationRepo.updatePrinterStatus(stationId, 'offline');
        await this.kdsNotification.broadcastToLocation(businessId, locationId,
          'printer:offline', { stationId, stationName: station.name,
            fallbackStationId: station.fallbackStationId });
      }
    }
  }
}

Add a partial unique index to kitchen_ticket to make the job idempotent on retry. (PostgreSQL does not accept a WHERE clause on ADD CONSTRAINT ... UNIQUE; a partial constraint must be expressed as a unique index.)

CREATE UNIQUE INDEX uq_kitchen_ticket_item_station_pending
ON kitchen_ticket (order_item_id, kitchen_station_id)
WHERE status = 'pending';
-- Partial unique index: only one pending ticket per (item, station) at a time
-- Voided + new rows for post-fire modifications are permitted

4.11 IPrinterService Port & Adapters

export interface IPrinterService {
  printKitchenTicket(ticket: KitchenTicketData, station: KitchenStation): Promise<PrintResult>;
  printReceipt(receipt: OrderReceiptData, station: KitchenStation): Promise<PrintResult>;
  checkHealth(station: KitchenStation): Promise<'online' | 'offline' | 'unknown'>;
}

export interface PrintResult {
  success: boolean;
  error?: string;
  stationId: string;
}

PrinterAdapterFactory — per-station instances, not a global singleton. Each branch station gets its own adapter scoped to its config. Branch A's grill printer and Branch B's grill printer are entirely independent.

@Injectable()
export class PrinterAdapterFactory {
  create(station: KitchenStation): IPrinterService {
    if (station.printerUrl && ['printer', 'both'].includes(station.outputType)) {
      return new NetworkThermalPrinterAdapter(station.printerUrl, station.printerConfigJsonb);
    }
    return new StubPrinterAdapter();
  }
}
  • StubPrinterAdapter — always returns { success: true }, logs content. Default for test environments and KDS-only stations.
  • NetworkThermalPrinterAdapter — connects via TCP, sends ESC/POS. Returns { success: false } on failure; handler updates status and broadcasts alert.

4.12 IKdsNotificationService Port

export interface IKdsNotificationService {
  sendToStation(businessId: string, locationId: string,
    stationId: string, ticket: KitchenTicketData): Promise<void>;
  broadcastToLocation(businessId: string, locationId: string,
    event: string, payload: unknown): Promise<void>;
}

// WebsocketKdsNotificationAdapter
sendToStation(businessId, locationId, stationId, ticket) {
  this.server.to(`rst:${businessId}:${locationId}:${stationId}`).emit('order:ticket', ticket);
}
broadcastToLocation(businessId, locationId, event, payload) {
  this.server.to(`rst:${businessId}:${locationId}:all`).emit(event, payload);
  // Events: product:86, printer:offline, order status changes
}

4.13 Printer Health BullMQ Job

Runs every 60 seconds per active location. Scoped to restaurant-module tenants only. Broadcasts only on status change to avoid flooding the location room with repeated identical alerts. Uses exponential backoff when a station is persistently offline to prevent unnecessary TCP connection attempts and notification spam.

@Processor('restaurant')
export class PrinterHealthCheckJob {
  @Process('check-printer-health')
  async run(job: Job<{ locationId: string; businessId: string; consecutiveFailures?: number }>) {
    const stations = await this.stationRepo.findPrinterStations(job.data.locationId);
    for (const station of stations) {
      const newStatus = await this.printerFactory.create(station).checkHealth(station);
      if (newStatus === station.printerStatus) continue;
      await this.stationRepo.updatePrinterStatus(station.id, newStatus);
      if (newStatus === 'offline') {
        await this.kdsNotification.broadcastToLocation(
          job.data.businessId, job.data.locationId, 'printer:offline',
          { stationId: station.id, stationName: station.name,
            fallbackStationId: station.fallbackStationId });
      }
    }
  }
}

Backoff strategy: Managed via BullMQ job delay — no DB column needed.

| Consecutive offline checks | Next check delay |
| --- | --- |
| 0–2 | 60 seconds (standard) |
| 3–9 | 5 minutes |
| 10+ | 15 minutes |

When a station recovers (newStatus = 'online'), the scheduler resets to 60s and broadcasts printer:online to the location room. The consecutive failure count is carried in the BullMQ job data payload and reset on recovery.
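The delay table above reduces to a small pure function (a sketch; the function name is illustrative):

```typescript
// Maps consecutive failed health checks to the next BullMQ job delay (ms).
// Thresholds follow the table above: 0–2 → 60s, 3–9 → 5 min, 10+ → 15 min.
function nextHealthCheckDelayMs(consecutiveFailures: number): number {
  if (consecutiveFailures >= 10) return 15 * 60_000;
  if (consecutiveFailures >= 3) return 5 * 60_000;
  return 60_000;
}
```

The scheduler would pass this value as the BullMQ job delay when re-enqueuing, and reset the carried failure count to zero on recovery.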


4.14 Multi-Tenant Isolation Summary

| Concern | Solution | Guarantee |
| --- | --- | --- |
| WebSocket event isolation | Room names include businessId | Structural — framework enforced |
| Cross-instance event delivery | Socket.IO Redis adapter | Infrastructure — all instances fan out |
| KDS device auth | SHA-256 token hash lookup | Cryptographic — 122 bits entropy |
| Pairing code security | Redis TTL + NX flag + GETDEL | Temporal — 10-minute expiry, single use |
| Printer config isolation | kitchen_station scoped to business_id + location_id | DB constraint |
| Routing config isolation | product_station_assignment scoped to business_id | DB constraint |
| Stale multicast targets | DeleteStation use-case guard | Application layer |
| Circular fallback | UpdateStation guard + one-hop cap in routing engine | Application layer |
| Health check isolation | Job runs per locationId for active restaurant tenants only | Scheduling scope |
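The circular-fallback guard reduces to a pure traversal of the fallback chain; a sketch (function and map names illustrative):

```typescript
// Rejects a fallback_station_id update that would create a cycle.
// fallbacks: stationId → fallbackStationId map, with the proposed edit applied.
function hasFallbackCycle(
  fallbacks: Map<string, string | undefined>,
  startId: string,
): boolean {
  const seen = new Set<string>([startId]);
  let current = fallbacks.get(startId);
  while (current !== undefined) {
    if (seen.has(current)) return true; // revisited a station → cycle
    seen.add(current);
    current = fallbacks.get(current);
  }
  return false;
}
```

Note the one-hop cap in the routing engine is a separate, belt-and-suspenders defense: even if a cycle slipped past this guard, routing never follows more than one fallback hop.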

5. Resolved Architectural Concerns

5.1 Restaurant Settlement Reuses the sale Table

Restaurant order settlement creates a sale record via the existing SaleService. This pulls in FEL, payment reconciliation, and Metabase reporting with no parallel settlement path to maintain. sale.reference_id links the sale back to the order.

5.2 Raw Ingredients = Products with inventory_type = 'raw_material'

No separate ingredient table. inventory + inventory_ledger already tracks stock. BOM deduction on OrderSettled writes inventory_ledger entries with source_type = 'restaurant_order' against raw_material products.

5.3 BOM Deduction on OrderSettled, Not OrderFired

Orders are frequently modified before payment. recipe_snapshot_jsonb on order_item is populated at item-add time and is immutable thereafter — it captures the deduction formula regardless of subsequent recipe changes.
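Because the snapshot is immutable, deduction on OrderSettled is a pure mapping from snapshot lines to ledger entries; a sketch with illustrative field names:

```typescript
// Illustrative shapes — not the actual entity definitions.
interface RecipeSnapshotLine {
  ingredientProductId: string;
  quantityPerUnit: number; // e.g. 0.15 kg of ground beef per burger
}

interface LedgerEntry {
  productId: string;
  quantityDelta: number; // negative = deduction
  sourceType: 'restaurant_order';
}

// Pure: one inventory_ledger entry per snapshot line, scaled by items sold.
function deductionsForItem(
  snapshot: RecipeSnapshotLine[],
  itemQuantity: number,
): LedgerEntry[] {
  return snapshot.map((line) => ({
    productId: line.ingredientProductId,
    quantityDelta: -(line.quantityPerUnit * itemQuantity),
    sourceType: 'restaurant_order',
  }));
}
```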

5.4 Order Numbers Reuse document_counter

document_counter with document_type = 'restaurant_order' and reset_frequency = 'daily' per location. No new table needed.


6. Complete DB Changes

| Change | Type | Note |
| --- | --- | --- |
| modifier_group | NEW TABLE | Normalized — stable IDs for modifier_routing |
| modifier | NEW TABLE | ON DELETE RESTRICT from modifier_routing guard |
| order_item_modifier | NEW TABLE | Immutable snapshot join |
| recipe | NEW TABLE | Normalized — ON DELETE RESTRICT on ingredient |
| order_item.recipe_snapshot_jsonb | ADD COLUMN | Point-in-time BOM snapshot |
| product.is_86d | ADD COLUMN | Global product-level "86" flag |
| product.combo_items_jsonb | ADD COLUMN | Combo bundle definition |
| order.split_config_jsonb | ADD COLUMN | Split bill configuration |
| shift | NEW TABLE | Shift cash management |
| external_platform_mapping | NEW TABLE | Uber Eats / Rappi ID mapping |
| kitchen_station.output_type | ADD COLUMN | printer / kds / both |
| kitchen_station.printer_config_jsonb | ADD COLUMN | Paper width, copies, encoding |
| kitchen_station.fallback_station_id | ADD COLUMN | Offline fallback station |
| kitchen_station.printer_status | ADD COLUMN | online / offline / unknown |
| kitchen_station.printer_checked_at | ADD COLUMN | Last health check timestamp |
| kitchen_station.is_active | ADD COLUMN | Soft disable |
| kitchen_station.sort_order | ADD COLUMN | Display ordering |
| product_station_assignment.additional_station_ids | ADD COLUMN | Multi-cast targets |
| kitchen_ticket | NEW TABLE | Per-station fire record; reconnect + reporting |
| kds_device | NEW TABLE | Physical screen registration |
| RESTAURANT_REQUIRE_SHIFT_OPEN_TO_SELL | SEED param | Shift enforcement |

Zero breaking changes. All existing tables unchanged beyond additive columns. Redis key kds:pair:{code} with 600s TTL — no migration needed.
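The pairing-code lifecycle on that Redis key (SET … EX 600 NX, then GETDEL) can be sketched against a minimal store interface — synchronous here for brevity, and the interface is an assumption standing in for the Redis client:

```typescript
// Minimal stand-in for the two Redis commands used:
// SET key value EX ttl NX, and GETDEL key.
interface PairingStore {
  setIfAbsent(key: string, value: string, ttlSeconds: number): boolean;
  getDel(key: string): string | null;
}

// Issues a 6-digit code valid for 600s; NX retries on the rare collision.
function issuePairingCode(store: PairingStore, stationId: string): string {
  for (;;) {
    const code = String(Math.floor(Math.random() * 1_000_000)).padStart(6, '0');
    if (store.setIfAbsent(`kds:pair:${code}`, stationId, 600)) return code;
  }
}

// GETDEL makes consumption atomic and single-use.
function consumePairingCode(store: PairingStore, code: string): string | null {
  return store.getDel(`kds:pair:${code}`);
}
```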


7. Application-Layer Gaps

| Gap | Notes |
| --- | --- |
| Socket.IO Redis adapter | Wire at bootstrap — Cloud Run blocker, do first |
| KDS WebSocket Gateway | @WebSocketGateway /kds; rooms rst:{biz}:{loc}:{station} |
| kitchen_ticket write on fire | One row per (order_item, station) at fire time |
| "86" broadcast | Set product.is_86d; emit product:86 to location room |
| BOM deduction on OrderSettled | recipe_snapshot_jsonb → inventory_ledger inserts |
| FEL hook on OrderSettled | ElectronicCertificationProvider → sale record |
| Order number via document_counter | type: restaurant_order, reset: daily, per location |
| ModifierValidationService | Enforce required groups, min/max before AddItems |
| BomDeductionService | Calculate inventory_ledger entries from snapshot |
| TicketRoutingService | Pure — modifier → area → primary → multicast → fallback |
| PriceRuleEvaluationService | Evaluate active price_rule rows on item add |
| ShiftReconciliationService | expected_cash = opening_float + SUM(cash sales) |
| PrinterAdapterFactory | Per-station factory — see §4.11 |
| Printer health BullMQ job | Per location, 60s interval — see §4.13 |
| DeleteModifier guard | Check modifier_routing references before deletion |
| DeleteStation guard | Check additional_station_ids references before deletion |
| DeleteProduct (ingredient) guard | Check recipe references before deletion |
| Circular fallback guard | Validate no cycle when setting fallback_station_id |
| Combo expansion on AddItems | Expand combo_items_jsonb into individual items |
| Price rule evaluation on AddItems | Apply price_rule discounts at item add time |
| P-Mix report | GROUP BY product on order_item → volume × margin |
| Ingredient depletion report | Aggregate inventory_ledger (source_type = restaurant_order) |
| Shift summary report | Cash variance + sales total per shift |
| Table turnaround report | kitchen_ticket.fired_at → ready_at per table/shift |
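The ShiftReconciliationService formula is a pure calculation; a sketch with illustrative field names:

```typescript
// Illustrative shapes — not the actual shift entity.
interface ShiftTotals {
  openingFloat: number;
  cashSales: number[];  // cash-payment sale amounts within the shift window
  declaredCash: number; // counted at close
}

// expected_cash = opening_float + SUM(cash sales); variance = declared − expected.
function reconcileShift(t: ShiftTotals): { expectedCash: number; cashVariance: number } {
  const expectedCash = t.openingFloat + t.cashSales.reduce((sum, s) => sum + s, 0);
  return { expectedCash, cashVariance: t.declaredCash - expectedCash };
}
```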

8. Module Structure

apps/backend/src/modules/restaurant/
├── domain/
│ ├── entities/
│ │ ├── modifier-group.entity.ts ← NEW
│ │ ├── modifier.entity.ts ← NEW
│ │ ├── order-item-modifier.entity.ts ← NEW
│ │ ├── recipe.entity.ts ← NEW
│ │ ├── kitchen-ticket.entity.ts ← NEW
│ │ ├── kds-device.entity.ts ← NEW
│ │ ├── shift.entity.ts ← NEW
│ │ └── external-platform-mapping.entity.ts ← NEW
│ ├── value-objects/
│ │ ├── kitchen-ticket-data.vo.ts ← NEW
│ │ └── split-check.vo.ts ← NEW
│ ├── services/
│ │ ├── ticket-routing.service.ts ← NEW (100% coverage, pure)
│ │ ├── modifier-validation.service.ts ← NEW (100% coverage)
│ │ ├── bom-deduction.service.ts ← NEW (100% coverage)
│ │ ├── price-rule-evaluation.service.ts ← NEW (100% coverage)
│ │ └── shift-reconciliation.service.ts ← NEW (100% coverage)
│ └── ports/
│ ├── modifier-group.repository.port.ts ← NEW
│ ├── modifier.repository.port.ts ← NEW
│ ├── recipe.repository.port.ts ← NEW
│ ├── kitchen-ticket.repository.port.ts ← NEW
│ ├── kds-device.repository.port.ts ← NEW
│ ├── shift.repository.port.ts ← NEW
│ ├── external-platform-mapping.repository.port.ts ← NEW
│ ├── kds-notification.port.ts ← NEW
│ └── printer.service.port.ts ← NEW (IPrinterService)

├── application/
│ ├── use-cases/
│ │ ├── add-items-to-order.use-case.ts ← EXTEND: validation + snapshots
│ │ ├── settle-order.use-case.ts ← EXTEND: emit OrderSettled
│ │ ├── toggle-product-86.use-case.ts ← NEW
│ │ ├── split-order.use-case.ts ← NEW
│ │ ├── open-shift.use-case.ts ← NEW
│ │ ├── close-shift.use-case.ts ← NEW
│ │ ├── bump-kitchen-ticket.use-case.ts ← NEW
│ │ ├── recall-kitchen-ticket.use-case.ts ← NEW
│ │ ├── generate-kds-pairing-code.use-case.ts ← NEW
│ │ └── register-kds-device.use-case.ts ← NEW
│ └── event-handlers/
│ ├── on-order-sent-to-kitchen.handler.ts ← NEW
│ ├── on-order-settled.handler.ts ← NEW
│ └── on-product-86-toggled.handler.ts ← NEW

├── infrastructure/
│ ├── persistence/
│ │ ├── kysely-modifier-group.repository.ts ← NEW
│ │ ├── kysely-modifier.repository.ts ← NEW
│ │ ├── kysely-recipe.repository.ts ← NEW
│ │ ├── kysely-kitchen-ticket.repository.ts ← NEW
│ │ ├── kysely-kds-device.repository.ts ← NEW
│ │ ├── kysely-shift.repository.ts ← NEW
│ │ └── kysely-external-platform.repository.ts ← NEW
│ └── adapters/
│ ├── websocket-kds-notification.adapter.ts ← NEW
│ ├── network-thermal-printer.adapter.ts ← NEW
│ ├── stub-printer.adapter.ts ← NEW
│ └── printer.factory.ts ← NEW

└── interfaces/
├── controllers/
│ ├── modifier.controller.ts ← NEW
│ ├── recipe.controller.ts ← NEW
│ ├── shift.controller.ts ← NEW
│ ├── kds-device.controller.ts ← NEW
│ └── external-platform.controller.ts ← NEW
└── gateways/
└── kds.gateway.ts ← NEW (@WebSocketGateway /kds)

9. API Endpoints

# Modifiers
GET /restaurant/products/:id/modifier-groups
POST /restaurant/products/:id/modifier-groups
PATCH /restaurant/modifier-groups/:id
DELETE /restaurant/modifier-groups/:id -- guard: modifier_routing refs
POST /restaurant/modifier-groups/:id/modifiers
PATCH /restaurant/modifiers/:id
DELETE /restaurant/modifiers/:id -- guard: modifier_routing refs

# Recipe / BOM
GET /restaurant/products/:id/recipe
PUT /restaurant/products/:id/recipe -- full replace (upsert lines)
DELETE /restaurant/products/:id/recipe/:recipeId

# "86"
PATCH /restaurant/products/:id/86 -- { is86d: boolean }

# Split bill
POST /restaurant/orders/:id/split
POST /restaurant/orders/:id/checks/:checkId/pay

# Shift
POST /restaurant/shifts/open
POST /restaurant/shifts/:id/blind-drop
POST /restaurant/shifts/:id/close
GET /restaurant/shifts/:id/summary
GET /restaurant/shifts

# Station configuration
PATCH /restaurant/stations/:id/output-config -- { outputType, printerConfig, fallbackStationId }
POST /restaurant/stations/:id/test-print
GET /restaurant/stations/printer-health

# KDS device management
POST /restaurant/stations/:id/pairing-code -- generates Redis TTL code
POST /restaurant/devices/register -- { pairingCode } → { deviceToken }
GET /restaurant/devices -- list for location
DELETE /restaurant/devices/:id -- deregister + force disconnect

# Kitchen tickets
GET /restaurant/tickets -- active (filter: stationId, status)
PATCH /restaurant/tickets/:id/bump
PATCH /restaurant/tickets/:id/recall

# External platform mappings
GET /restaurant/platform-mappings
POST /restaurant/platform-mappings
PUT /restaurant/platform-mappings/:id
DELETE /restaurant/platform-mappings/:id

# Reports
GET /restaurant/reports/product-mix
GET /restaurant/reports/ingredient-depletion
GET /restaurant/reports/shift-summary/:id
GET /restaurant/reports/table-turnaround

# WebSocket namespace
ws://[host]/kds
Auth: Authorization: Bearer {deviceToken|firebaseIdToken}
Rooms: rst:{businessId}:{locationId}:{stationId}
rst:{businessId}:{locationId}:all

Server → client:
order:ticket { ticket_data_jsonb snapshot }
order:item:ready { orderId, orderItemId, stationId }
product:86 { productId, is86d }
printer:offline { stationId, stationName, fallbackStationId }
pending_tickets [ ...tickets ] -- sent on device reconnect

10. Implementation Ordering

Phase 1 — Migrations (additive only, no drops)
1a. CREATE TABLE modifier_group
1b. CREATE TABLE modifier
1c. CREATE TABLE order_item_modifier
1d. CREATE TABLE recipe
1e. ALTER order_item ADD recipe_snapshot_jsonb
1f. ALTER product ADD is_86d, combo_items_jsonb
1g. ALTER order ADD split_config_jsonb
1h. CREATE TABLE shift
1i. CREATE TABLE external_platform_mapping
1j. ALTER kitchen_station ADD output_type, printer_config_jsonb, fallback_station_id,
printer_status, printer_checked_at, is_active, sort_order
1k. ALTER product_station_assignment ADD additional_station_ids
1l. CREATE TABLE kitchen_ticket
1m. CREATE TABLE kds_device
1n. SEED parameter: RESTAURANT_REQUIRE_SHIFT_OPEN_TO_SELL

Phase 2 — Ports (interfaces only, zero implementations)
2a. IPrinterService, IKdsNotificationService
2b. All repository ports

Phase 3 — Domain services (pure, 100% coverage required)
3a. TicketRoutingService
3b. ModifierValidationService
3c. BomDeductionService
3d. PriceRuleEvaluationService
3e. ShiftReconciliationService
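TicketRoutingService (3a) resolves stations in the priority order modifier routing → dining area → primary → multicast → fallback; a pure sketch with illustrative shapes (not the actual service signature):

```typescript
// Inputs for one order_item's routing decision (field names illustrative).
interface RoutingInput {
  modifierRouteStationId?: string;  // from modifier_routing JSONB, if a modifier overrides
  diningAreaStationId?: string;     // area-specific override on the assignment
  primaryStationId: string;         // product_station_assignment.kitchen_station_id
  additionalStationIds: string[];   // multicast targets
  offlineStations: Set<string>;     // stations with printer_status = offline
  fallbackOf: Map<string, string>;  // stationId → fallback_station_id
}

// Pure resolution: pick the highest-priority target, append multicast copies,
// then substitute the one-hop fallback only for offline stations.
function resolveStations(r: RoutingInput): string[] {
  const target = r.modifierRouteStationId ?? r.diningAreaStationId ?? r.primaryStationId;
  return [target, ...r.additionalStationIds].map((id) => {
    if (!r.offlineStations.has(id)) return id;
    return r.fallbackOf.get(id) ?? id; // one hop max, no recursion
  });
}
```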

Phase 4 — Modifier management
4a. Entities + Kysely repositories
4b. Use-cases: CRUD for modifier_group + modifier
4c. DeleteModifier/Group guard (check modifier_routing refs)
4d. Controller + DTOs + Swagger + tests

Phase 5 — Recipe / BOM
5a. Recipe entity + Kysely repository
5b. Use-cases: GetRecipe, UpsertRecipe, DeleteRecipeLine
5c. DeleteProduct (ingredient) guard
5d. Controller + DTOs + Swagger + tests

Phase 6 — Extend order lifecycle
6a. Extend AddItemsToOrder: modifier validation + snapshots + combo expansion
6b. Extend SettleOrder: emit OrderSettled event
6c. OnOrderSettled: BomDeductionService → inventory_ledger
6d. OnOrderSettled: ElectronicCertificationProvider → sale record
6e. Integration test: add items → settle → verify inventory_ledger + sale

Phase 7 — KDS + Printer infrastructure ← Wire Redis adapter FIRST
7a. Socket.IO Redis adapter at KdsGateway bootstrap (Cloud Run blocker)
7b. StubPrinterAdapter + NetworkThermalPrinterAdapter + PrinterAdapterFactory
7c. WebsocketKdsNotificationAdapter
7d. KdsGateway (SHA-256 device token + Firebase terminal auth)
7e. GenerateKdsPairingCode (Redis) + RegisterKdsDevice + DeregisterKdsDevice use-cases
7f. DeliverKitchenTicketJob (BullMQ per-station — DB write + KDS push + print, retriable)
7g. OnOrderSentToKitchen handler (dispatches DeliverKitchenTicketJob per station)
7h. Idempotency constraint on kitchen_ticket (partial unique index WHERE status = 'pending')
7i. BumpKitchenTicket + RecallKitchenTicket
7j. ToggleProduct86 + OnProduct86Toggled broadcast handler
7k. Printer health BullMQ job with exponential backoff (per location)
7l. DeleteStation guard (additional_station_ids refs)
7m. UpdateStation circular fallback guard
7n. Tests: all domain services, gateway, adapters, DeliverKitchenTicketJob
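The 7h idempotency constraint corresponds to a partial unique index; the DDL, held here as a string constant for illustration (the index name is an assumption):

```typescript
// Partial unique index: at most one *pending* ticket per (order_item, station).
// A retried DeliverKitchenTicketJob hits the conflict instead of writing a
// duplicate row; bumped/voided tickets don't block a new fire.
const kitchenTicketIdempotencyIndex = `
CREATE UNIQUE INDEX uq_kitchen_ticket_pending
  ON kitchen_ticket (order_item_id, kitchen_station_id)
  WHERE status = 'pending'
`;
```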

Phase 8 — Shift management
8a. Shift entity + Kysely repository
8b. OpenShift, BlindDrop, CloseShift, GetShiftSummary
8c. Controller + DTOs + tests

Phase 9 — Split bill
9a. SplitOrder + PayCheck use-cases + controller

Phase 10 — External platform mappings
10a. Entity + repository + controller

Phase 11 — Reports
11a. ProductMixReport (P-Mix)
11b. IngredientDepletionReport
11c. ShiftSummaryReport
11d. TableTurnaroundReport (kitchen_ticket fired_at → ready_at)
11e. Controller + Swagger

Phase 12 — Integration & polish
12a. Price rule evaluation wired into AddItems
12b. Swagger completeness audit
12c. Bruno API collection
12d. End-to-end tests

11. Unit Test Coverage Thresholds

| Layer | Threshold | Key Focus |
| --- | --- | --- |
| Domain services | 100% | All five domain services (pure functions) |
| Application use-cases + handlers | 90% | All new use-cases, all event handlers |
| Infrastructure adapters | 80% | Kysely repos, KDS adapter, printer adapters |
| Interface controllers + gateway | 80% | Request mapping, auth guards, DTOs |

Critical tests:

  • AddItems → rejects missing required modifier group
  • AddItems → order_item_modifier rows and recipe_snapshot_jsonb populated correctly
  • SettleOrder → inventory_ledger rows created per recipe snapshot
  • SettleOrder → sale record created with FEL fields populated
  • ToggleProduct86 → broadcast received by all location clients within 1s
  • FireOrder → one DeliverKitchenTicketJob dispatched per (item × station) pair
  • DeliverKitchenTicketJob → kitchen_ticket row created; KDS push sent; printer called
  • DeliverKitchenTicketJob retried on failure → idempotent (no duplicate kitchen_ticket)
  • Post-fire modification → existing ticket voided; new ticket written with isModification: true
  • KDS reconnect → pending kitchen_ticket rows delivered on connect
  • DeleteModifier with order history → soft-deleted (is_active = false), not rejected
  • DeleteModifier referenced in modifier_routing → hard-blocked
  • DeleteStation → blocked when referenced in additional_station_ids
  • DeleteProduct (ingredient) → blocked when referenced in recipe
  • UpdateStation (fallback) → blocked when it would create a cycle
  • CloseShift → cash_variance = declared − expected
  • DeregisterKdsDevice → active WebSocket session disconnected
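The first critical test pins down ModifierValidationService; a minimal sketch of the required/min-max rule with illustrative shapes:

```typescript
// Illustrative shapes — not the actual entity definitions.
interface ModifierGroupRule {
  groupId: string;
  required: boolean;
  minSelections: number;
  maxSelections: number;
}

// Pure: returns the IDs of groups whose rules the selection violates.
// selected maps groupId → number of modifiers chosen from that group.
function invalidGroups(
  rules: ModifierGroupRule[],
  selected: Map<string, number>,
): string[] {
  return rules
    .filter((rule) => {
      const count = selected.get(rule.groupId) ?? 0;
      if (rule.required && count === 0) return true;
      if (count > 0 && (count < rule.minSelections || count > rule.maxSelections)) return true;
      return false;
    })
    .map((rule) => rule.groupId);
}
```

AddItemsToOrder would reject the request whenever this returns a non-empty list.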

12. Open Question

The exact columns on order and order_item from the earlier migrations were not uploaded.

Before running Speckit, run \d order and \d order_item on staging. Confirm whether order already has status, type, total_amount, tax_amount, payment_method_id, settled_at. Confirm whether order_item already has unit_price, quantity, subtotal, product_id, product_name_snapshot. Without this, Speckit may generate duplicate columns.


13. Speckit Commands

The full scope is split into two sequential commands. Run Command A first; once all phases 1–6 pass review and tests, run Command B for phases 7–12. A single command covering all 21 points risks partial interpretation by Speckit.


Command A — Data Layer (Phases 1–6)

/speckit.specify FlowPOS Restaurant POS v1 — data layer gap fill. The restaurant
module already has dining tables, dining areas, orders, order items with course pacing
and comp/void tracking, kitchen stations with printer routing, menus with scheduling,
happy-hour price rules, reservations, waitlist, and seat assignments.

Add the following data layer only (no KDS, no WebSocket, no printing in this pass):

(1) modifier_group and modifier normalized tables — stable IDs required because
existing product_station_assignment.modifier_routing JSONB references modifier IDs.
order_item_modifier immutable snapshot table with ON DELETE RESTRICT on modifier_id
and modifier_group_id. DeleteModifier/DeleteModifierGroup use-cases: if order_item_modifier
rows exist, soft-delete (is_active = false) and return 204 with warning; if referenced
in modifier_routing, hard-block with 409. UI label must be "Archive" not "Delete" for
modifiers with order history.

(2) recipe normalized join table — product → ingredient product with ON DELETE RESTRICT;
recipe_snapshot_jsonb JSONB on order_item for point-in-time BOM history (never aggregated,
display only). DeleteProduct (ingredient) guard checks recipe before deletion.

(3) product.is_86d boolean for global product-level disable, separate from per-menu
menu_item.is_available. Toggle broadcasts product:86 event to all location terminals.

(4) product.combo_items_jsonb JSONB for combo bundle definition (no FK dependencies,
always read with product). AddItemsToOrder expands combos into individual order_item rows.

(5) order.split_config_jsonb JSONB for split bill (always read with order, never
queried independently). splitType: by_item | equal | by_seat.

(6) shift table for cash management: open, blind drop, close, variance calculation
(expected_cash = opening_float + sum of cash-payment sales in shift window).
Seed RESTAURANT_REQUIRE_SHIFT_OPEN_TO_SELL parameter (boolean, default false).

(7) external_platform_mapping table for Uber Eats / Rappi product ID mapping —
independently queryable by platform + external ID.

(8) Extend AddItemsToOrder: ModifierValidationService (enforce required groups, min/max
selections), recipe_snapshot_jsonb population, order_item_modifier row creation,
combo expansion, PriceRuleEvaluationService (evaluate active price_rule rows on add).

(9) Extend SettleOrder: emit OrderSettled event → OnOrderSettled handler →
BomDeductionService writes inventory_ledger entries (source_type = restaurant_order) +
ElectronicCertificationProvider creates sale record via existing FEL flow.

(10) Missing reports: P-Mix product mix (volume × margin by product), ingredient
depletion (inventory_ledger aggregate by source_type = restaurant_order), shift summary.

Domain services required: ModifierValidationService, BomDeductionService,
PriceRuleEvaluationService, ShiftReconciliationService — all pure functions, 100%
coverage. Restaurant settlement creates a sale record reusing the existing FEL flow.
Hexagonal NestJS/Kysely/PostgreSQL. Additive migrations only.

Command B — Infrastructure Layer (Phases 7–12)

/speckit.specify FlowPOS Restaurant POS v1 — infrastructure layer gap fill. The data
layer (Command A) is already implemented: modifier tables, recipe table, shift, split
bill, external platform mapping, order lifecycle extensions, BOM deduction, FEL hook.

Add the KDS and printing infrastructure:

(1) kitchen_station extended with output_type (printer|kds|both), printer_config_jsonb
(paperWidthMm, copyCount, encoding, cutAfterEach, openCashDrawer, headerLines),
fallback_station_id with circular reference guard in UpdateStation use-case
(validateNoFallbackCycle traversal + one-hop cap in routing engine),
printer_status (online|offline|unknown), printer_checked_at, is_active, sort_order.

(2) product_station_assignment extended with additional_station_ids JSONB for multi-cast
routing. DeleteStation use-case scans additional_station_ids across all assignments and
blocks deletion if referenced.

(3) kitchen_ticket table — written per (order_item, station) at fire time. Fields:
order_item_id, kitchen_station_id, status (pending|ready|bumped|voided), fired_at,
ready_at, bumped_at, bumped_by, ticket_data_jsonb (orderItemId, orderNumber, orderType, tableAlias,
seatNo, itemName, quantity, modifiers[], notes, courseNumber, holdUntilFired,
isModification, modifiedAt). Partial unique index on (order_item_id, kitchen_station_id)
WHERE status = 'pending' for idempotency. Post-fire modification policy: void existing
pending ticket, write new ticket with isModification: true.

(4) kds_device table — device_token_hash stored as SHA-256 hex (not bcrypt). Registered
via 6-digit pairing code in Redis (SET kds:pair:{code} EX 600 NX, consumed via GETDEL).
DeregisterKdsDevice use-case sets is_active = false and force-disconnects active session
via KdsGateway.disconnectDevice(deviceId) — required for tablet replacement flow.

(5) NestJS Socket.IO KdsGateway (/kds namespace) with Socket.IO Redis adapter wired at
bootstrap (Cloud Run horizontal scaling — blocker without this). Rooms:
rst:{businessId}:{locationId}:{stationId} and rst:{businessId}:{locationId}:all.
KDS devices authenticate via SHA-256 device token; POS terminals via Firebase ID token.
On device connect: join rooms, update last_seen_at, deliver pending kitchen_tickets.

(6) OnOrderSentToKitchen handler dispatches one BullMQ DeliverKitchenTicketJob per
(order_item × station) pair — not inline I/O. Each job: (a) createIfNotExists
kitchen_ticket (idempotent), (b) KDS WebSocket push if output_type kds|both,
(c) NetworkThermalPrinterAdapter ESC/POS print if output_type printer|both — on
failure update printer_status = offline and broadcast printer:offline. Job retries:
3 attempts, exponential backoff starting 2s.

(7) TicketRoutingService pure domain service: priority order modifier_routing →
dining_area_id → primary kitchen_station_id → additional_station_ids (multicast) →
fallback_station_id (one hop max, only if printer_status = offline).

(8) IPrinterService port. PrinterAdapterFactory creates per-station instances:
NetworkThermalPrinterAdapter (TCP ESC/POS) when printer_url set and output_type
includes printer; StubPrinterAdapter otherwise. IKdsNotificationService port with
WebsocketKdsNotificationAdapter.

(9) Printer health BullMQ job per locationId — broadcasts only on status change.
Exponential backoff: 0–2 consecutive failures = 60s, 3–9 = 5 min, 10+ = 15 min.
Reset to 60s on recovery; broadcast printer:online on recovery.

(10) BumpKitchenTicket and RecallKitchenTicket use-cases. TableTurnaroundReport
(kitchen_ticket fired_at → ready_at per table/shift).

(11) New controllers: /stations/:id/output-config, /stations/:id/test-print,
/stations/printer-health, /stations/:id/pairing-code, /devices (register, list,
deregister), /tickets (list active, bump, recall).

Hexagonal NestJS/Kysely/PostgreSQL. Additive migrations only — no drops, no breaking changes. Add or update unit tests to everything we add or update.