
Overview

POS devices lose internet connectivity — network outages, Wi-Fi drops, mobile dead zones. When that happens, the register must continue operating. Customers cannot wait for a reconnection to pay. The pattern is simple: queue operations locally, sync them when connectivity returns, and let OpenFiskal handle the fiscal outcome. OpenFiskal decides what happens jurisdiction by jurisdiction — whether operations are signed retroactively, left unsigned, or transmitted later. Your POS does not need to know the rules.
No special “offline mode” API exists. You use the same endpoints (POST /sales, PATCH /sales/:id, POST /sales/:id/complete) — just delayed. Idempotency keys and device-side timestamps make this safe.

What to queue

When the API is unreachable, persist the full operation payload locally on the device. Every call you would have made online — start, mutate, complete — becomes an entry in the local queue. Each queued entry must include:
  • A client-generated operation ID — generate a UUID before starting the operation. Use it as the id in the start request body and in the URL path for all subsequent calls (PATCH /sales/{id}, POST /sales/{id}/complete). Because the ID is generated client-side, you can construct every payload in the queue upfront — no need to wait for a server response.
  • An Idempotency-Key — generated at the time of the action, not at sync time. This is what makes retries safe.
  • An occurred_at timestamp — the register’s local clock at the moment the event happened. This is the fiscally relevant timestamp — not the time the API call eventually succeeds.
Use the register’s local clock for occurred_at, not the time the sync happens. The fiscally relevant moment is when the cashier pressed the button, not when the network came back.
Store the queue in durable local storage — a local database, file, or embedded store that survives app restarts and crashes. An in-memory queue is not sufficient.
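The requirements above can be sketched as a small SQLite-backed queue: SQLite gives durable, ordered storage that survives restarts, unlike an in-memory list. The table layout and helper names are illustrative, not part of the OpenFiskal API; only the endpoint paths, the `Idempotency-Key` idea, and the `occurred_at` field come from this page.

```python
import json
import sqlite3
import uuid
from datetime import datetime, timezone

def open_queue(path="offline_queue.db"):
    # Durable queue; `seq` preserves the order operations occurred in.
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS queue (
        seq INTEGER PRIMARY KEY AUTOINCREMENT,
        method TEXT, url TEXT, idempotency_key TEXT, body TEXT)""")
    return db

def enqueue(db, method, url, body):
    # The idempotency key is generated now, at the time of the action,
    # and reused verbatim on every sync retry.
    db.execute(
        "INSERT INTO queue (method, url, idempotency_key, body) VALUES (?, ?, ?, ?)",
        (method, url, str(uuid.uuid4()), json.dumps(body)))
    db.commit()

# An offline sale: the operation ID is client-generated, so all three
# payloads can be constructed upfront without any server response.
op_id = str(uuid.uuid4())
occurred_at = datetime.now(timezone.utc).isoformat()  # register's local clock

db = open_queue(":memory:")  # in-memory here for the sketch; use a file path on a real device
enqueue(db, "POST", "/sales", {"id": op_id, "occurred_at": occurred_at})
enqueue(db, "PATCH", f"/sales/{op_id}", {"line_items": [{"sku": "A1", "qty": 1}]})
enqueue(db, "POST", f"/sales/{op_id}/complete", {"occurred_at": occurred_at})
```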

How to sync

When connectivity returns, replay the queued operations in order using the same endpoints you would have called online. Each request carries the Idempotency-Key that was generated at queue time. Key rules:
  1. Replay in order. Operations must be synced in the same sequence they occurred. A complete cannot arrive before its start.
  2. Use the original idempotency key. If a request fails mid-sync (e.g., a second network drop), retry with the same key. The server deduplicates automatically.
  3. Do not modify payloads. Send exactly what was queued. Do not recalculate timestamps, amounts, or IDs at sync time.
On a successful response, remove the entry from the queue. On a network error, stop and retry later — do not skip entries.
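The three rules can be sketched as a drain loop. `send` stands in for whatever HTTP client the POS uses, and the queue schema is illustrative; the behavior (ordered replay, original keys, stop on failure) is what this section specifies.

```python
import sqlite3

def open_queue(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS queue (
        seq INTEGER PRIMARY KEY AUTOINCREMENT,
        method TEXT, url TEXT, idempotency_key TEXT, body TEXT)""")
    return db

def sync(db, send):
    # Replay strictly in the order the operations occurred.
    rows = db.execute(
        "SELECT seq, method, url, idempotency_key, body FROM queue ORDER BY seq"
    ).fetchall()
    for seq, method, url, key, body in rows:
        try:
            # Reuse the original key; the server deduplicates retries.
            status = send(method, url, {"Idempotency-Key": key}, body)
        except ConnectionError:
            return False  # second network drop: retry later with the same keys
        if status >= 500:
            return False  # do not skip entries; resume from here next time
        db.execute("DELETE FROM queue WHERE seq = ?", (seq,))
        db.commit()
    return True  # queue fully drained
```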

Receipts while offline

When a sale completes offline, the POS has not yet received a fiscal document from OpenFiskal — there is no signature, no QR code, no document number. The customer is still waiting. Print a provisional receipt with the data you have locally: line items, totals, payments, and the occurred_at timestamp. Mark it clearly as non-fiscal — it does not carry the legally required fiscal data yet. Once the queue is synced and the operation is completed on the server, the full fiscal receipt becomes available via GET /operations/:id/receipt. You can then:
  • Email the fiscal receipt to the customer (if you captured contact details).
  • Make it available for reprint at the register.
  • Store it for audit purposes.
A provisional receipt is not a substitute for the fiscal receipt. The fiscal receipt must be made available to the customer once the operation is synced. Check your jurisdiction’s rules on how and when it must be delivered.
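A provisional receipt can be rendered entirely from local data. The layout below is a sketch; only the clear non-fiscal marking and the `occurred_at` timestamp are requirements from the text above.

```python
def provisional_receipt(sale):
    # Layout is illustrative; the non-fiscal marking is the essential part.
    lines = ["*** PROVISIONAL RECEIPT - NOT A FISCAL RECEIPT ***"]
    for item in sale["line_items"]:
        lines.append(f'{item["name"]:<24}{item["total"]:>8.2f}')
    lines.append(f'{"TOTAL":<24}{sale["total"]:>8.2f}')
    lines.append(f'Time of sale: {sale["occurred_at"]}')  # register clock, not sync time
    lines.append("Fiscal receipt to follow once the sale is synced.")
    return "\n".join(lines)
```

After sync, the full fiscal document from GET /operations/:id/receipt replaces this at the register for reprints.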

Timestamps

OpenFiskal distinguishes two timestamps:
  • occurred_at — Device clock (you send this). When the event actually happened at the register. This is the fiscally relevant timestamp.
  • received_at — Server clock (OpenFiskal records this). When the server received the API call. May be minutes or hours after occurred_at if the register was offline.
The occurred_at timestamp is what appears on the receipt and in the fiscal record. received_at is an audit field — it shows when the data arrived at the server.

Clock drift detection

The heartbeat response includes a server_time field. Compare it against the register’s local clock after each heartbeat. If the drift exceeds a few seconds, log a warning — significant drift is a fiscal risk in jurisdictions that require accurate timestamps.
{
  "object": "heartbeat_response",
  "received_at": "2026-02-26T14:30:00Z",
  "server_time": "2026-02-26T14:30:00Z"
}
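A sketch of the drift check, assuming an ISO 8601 `server_time` as in the example above. The five-second threshold is an arbitrary choice for illustration, not an OpenFiskal requirement; tune it to your jurisdiction's tolerance.

```python
from datetime import datetime, timezone

DRIFT_THRESHOLD_S = 5.0  # illustrative threshold, not an API constant

def check_drift(heartbeat, now=None):
    # Parse server_time ("Z" suffix normalized for older Python versions).
    server = datetime.fromisoformat(heartbeat["server_time"].replace("Z", "+00:00"))
    local = now or datetime.now(timezone.utc)
    drift = abs((local - server).total_seconds())
    if drift > DRIFT_THRESHOLD_S:
        # Significant drift is a fiscal risk; surface it, don't swallow it.
        print(f"WARNING: clock drift {drift:.1f}s exceeds {DRIFT_THRESHOLD_S}s")
    return drift
```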

Jurisdiction behavior

When operations arrive late, OpenFiskal handles the fiscal outcome per regime. Your POS does not need to branch on jurisdiction — send the same payloads regardless.
  • TSE (Germany) — Signing: not signed; the TSE was unavailable offline, so the operation is recorded without a TSE signature. Transmission: N/A (transmission is batch, not real-time).
  • RKSV (Austria) — Signing: signed retrospectively when the operation reaches the server; the signature chain is maintained. Transmission: transmitted to FinanzOnline as normal.
  • RT (Italy) — Signing: signed when the operation reaches the server. Transmission: not forwarded to Agenzia delle Entrate; late operations are excluded from real-time transmission.
The fiscal document in the API response reflects the actual outcome. If a TSE signature was not produced, the start_event and end_event fields will indicate this. The POS prints whatever is returned — no special handling needed.

Daily closing

All queued operations must be synced and either completed or cancelled before calling POST /registers/:id/closings.
The closing endpoint returns 409 Conflict if open operations remain on the register. Drain the queue and complete or cancel every operation before triggering the closing.
If the register was offline for part of the day, the closing sequence is:
  1. Reconnect and resume heartbeats.
  2. Replay all queued operations in order.
  3. Verify no open operations remain (GET /registers/:id/status).
  4. Call POST /registers/:id/closings.
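The closing sequence can be sketched as follows. `api`, its methods, and the `open_operations` status field are hypothetical stand-ins for your client and the status payload; only the endpoint paths and the 409 behavior come from this page.

```python
def daily_closing(register_id, queue, api):
    # Step 1 (reconnect and resume heartbeats) is assumed to have happened.
    for op in list(queue):                                  # step 2: replay in order
        api.send(op)
        queue.remove(op)
    status = api.get(f"/registers/{register_id}/status")    # step 3: verify
    if status.get("open_operations"):                       # hypothetical field name
        raise RuntimeError(
            "open operations remain; POST /closings would return 409 Conflict")
    return api.post(f"/registers/{register_id}/closings")   # step 4: close
```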

POS implementation checklist

Your POS must implement the following to support offline operation:
  • Client-generated operation IDs — generate a UUID for each operation before starting it. Use this ID in all subsequent calls (mutate, complete) so every payload can be constructed at queue time.
  • Local persistence — durable storage for queued operations that survives restarts and crashes.
  • Ordered replay — sync queued operations in the exact order they occurred.
  • Idempotency keys — generate a unique key for every start and complete action at queue time, and reuse it on retry.
  • Device-side timestamps — record occurred_at from the register’s local clock at the time of the event.
  • Provisional receipts — print a non-fiscal receipt for the customer at the time of sale, and make the fiscal receipt available after sync.
  • Clock accuracy — compare server_time from heartbeat responses against the local clock and alert on significant drift.
  • Queue drain before closing — block the daily closing until all queued operations are synced and completed or cancelled.

Next steps