- What is a nonce?
- Common challenges
- Strategy
- Best practices
- Arbitrum (Nitro) specifics
- Example (TypeScript script): Gated nonces on Arbitrum Nitro
- Source of truth: latest vs pending
- Fetching the nonce (viem)
- Local tracking and persistence
- Concurrency control (avoid races)
- Handling errors and re-sync
- Replacement transactions (EIP-1559)
- When to rely on node-managed nonces
- Client implementation (React + viem)
- Install
- Hook: useNonceManager
- Component: submit a transaction on Arbitrum
- Summary
Managing Nonces
This guide outlines strategies to manage transaction nonces safely on Arbitrum (Nitro), with a focus on ordering, concurrency, and recovery. It includes minimal viem examples and applies to both frontend and backend systems.
What is a nonce?
- In EVM-compatible blockchains, a nonce is the count of transactions sent from an address.
- Each new transaction must use a unique, sequential nonce:
- The first transaction uses nonce 0.
- The next uses nonce 1, and so on.
- If a transaction reuses a nonce that has already been consumed, the network rejects it (“nonce too low”); if it skips ahead, the transaction is held or rejected until the gap is filled (“nonce too high”).
Common challenges
- Race conditions: Sending multiple transactions simultaneously may reuse the same nonce.
- RPC delays: Fetching nonces from a slow RPC endpoint may lead to stale values.
- Page reloads: If the app restarts, in-memory nonce tracking resets.
To avoid these, implement a clear source of truth, local tracking, and safe concurrency controls.
Strategy
We’ll use a two-layer nonce management system:
- Remote layer: Fetches the latest confirmed nonce from the blockchain via RPC.
- Local layer: Tracks and increments nonces in localStorage or in-memory state for pending transactions.
Flow:
1. Fetch the baseline nonce from RPC (`pending`).
2. Compare it with the locally cached next-nonce and keep the max.
3. Reserve the next-nonce locally, send the transaction, then increment and persist.
4. On a nonce-related error, re-sync from RPC before sending again.
Best practices
- Keep a single source of truth: baseline from RPC, tracked and incremented locally.
- Serialize nonce allocation per address (mutex or single-flight queue).
- Gate sending nonce N+1 on evidence that nonce N was accepted.
- Re-sync from RPC on any nonce-related error, and periodically as a safety net.
Arbitrum (Nitro) specifics
- Nonce mismatches on Arbitrum commonly happen when the client optimistically bumps the nonce before the prior transaction is confirmed as included.
- When a transaction is submitted with a nonce that is too high, a Nitro node holds it briefly (about 1 second by default) to see whether intermediate transactions arrive to fill the gap. If they don’t, you receive a “nonce too high” error only after that delay.
- That delayed feedback can encourage clients to keep submitting ever-higher nonces. If your client-side logic cannot yet gate sends reliably, reduce this “gap-hold” window from ~1s toward 0 so the error surfaces immediately, letting you detect an incorrect nonce before sending further transactions with even higher nonces.
- Practical approach:
  - Gate sending of nonce N+1 on evidence that N is included (or at least that N is accepted and visible in pending).
  - If you receive “nonce too high”, immediately re-sync from RPC and pause higher-nonce submissions until the gap is resolved.
  - Ask your node/provider to minimize the hold window so errors surface immediately (Conduit can configure this on managed stacks).
Example (TypeScript script): Gated nonces on Arbitrum Nitro
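A minimal sketch of the gated pattern described above: do not send nonce N+1 until nonce N is visible in the pending state. The `NonceClient` interface mirrors the shape of viem’s `getTransactionCount`; in practice you would pass a real `PublicClient`, and the `send` callback would wrap something like `walletClient.sendTransaction`. Names and timeouts here are illustrative, not prescriptive.

```typescript
type Address = `0x${string}`;

// Anything with viem's getTransactionCount shape works here.
interface NonceClient {
  getTransactionCount(args: { address: Address; blockTag: "latest" | "pending" }): Promise<number>;
}

// Poll until the pending nonce advances past `nonce`, i.e. the transaction
// using `nonce` has been accepted into the pending state.
async function waitForInclusion(
  client: NonceClient,
  address: Address,
  nonce: number,
  { intervalMs = 250, timeoutMs = 15_000 } = {},
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const pending = await client.getTransactionCount({ address, blockTag: "pending" });
    if (pending > nonce) return;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error(`tx with nonce ${nonce} not visible in pending before timeout`);
}

// Send `count` transactions strictly in order, gating each send on the
// previous nonce being visible in pending.
async function sendGated(
  client: NonceClient,
  address: Address,
  send: (nonce: number) => Promise<void>,
  count: number,
): Promise<void> {
  let nonce = await client.getTransactionCount({ address, blockTag: "pending" });
  for (let i = 0; i < count; i++) {
    await send(nonce);
    await waitForInclusion(client, address, nonce);
    nonce++;
  }
}
```

This trades throughput for safety: each send waits for pending visibility, which avoids piling up higher-nonce transactions behind a gap.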
Source of truth: latest vs pending
- Use `latest` to align with confirmed state (stable, conservative).
- Use `pending` to include the mempool (higher throughput, risk of drift).
- For parallel send flows, prefer `pending` plus a local tracker to avoid reuse.
Fetching the nonce (viem)
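A sketch of fetching both values. The `client` parameter is typed structurally to match viem’s `PublicClient.getTransactionCount`, so you can pass the result of `createPublicClient({ chain: arbitrum, transport: http() })` directly; the function name `fetchNonces` is illustrative.

```typescript
type Address = `0x${string}`;

// Structural match for viem's PublicClient.getTransactionCount.
interface NonceReader {
  getTransactionCount(args: { address: Address; blockTag: "latest" | "pending" }): Promise<number>;
}

// Fetch the confirmed (latest) and mempool-inclusive (pending) nonces together.
async function fetchNonces(client: NonceReader, address: Address) {
  const [latest, pending] = await Promise.all([
    client.getTransactionCount({ address, blockTag: "latest" }),
    client.getTransactionCount({ address, blockTag: "pending" }),
  ]);
  return { latest, pending };
}
```

Comparing the two is also a cheap drift check: `pending - latest` is the number of in-flight transactions the node knows about.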
Local tracking and persistence
Maintain a next-nonce value per address locally (in-memory on a server, or persisted to localStorage in the browser so it survives page reloads).
Recommended algorithm (per address):
- On startup, read the `pending` nonce from RPC as the baseline.
- Compare with any locally cached value and keep the max.
- On each send:
- Read current local next-nonce.
- Reserve it (optimistically).
- Submit tx with that nonce.
- Increment and persist the next-nonce.
- On error indicating nonce drift, re-sync from RPC and update local store.
Minimal usage pattern:
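A minimal tracker implementing the algorithm above. Storage is injected behind a tiny key-value interface so the same code works with `localStorage` in the browser or an in-memory map on a server; the class and method names are illustrative.

```typescript
// Tiny storage abstraction: localStorage satisfies this shape in the browser.
interface KV {
  get(key: string): string | null;
  set(key: string, value: string): void;
}

class NonceTracker {
  constructor(private store: KV, private address: string) {}

  private get key() {
    return `next-nonce:${this.address.toLowerCase()}`;
  }

  // Baseline from RPC (the `pending` nonce); keep the max of remote and cached.
  sync(remotePending: number): number {
    const cached = Number(this.store.get(this.key) ?? -1);
    const next = Math.max(remotePending, cached);
    this.store.set(this.key, String(next));
    return next;
  }

  // Reserve the current next-nonce for a send and persist the increment.
  reserve(): number {
    const next = Number(this.store.get(this.key) ?? 0);
    this.store.set(this.key, String(next + 1));
    return next;
  }
}
```

On a “nonce too low” error, call `sync()` again with a fresh `pending` value; because `sync` keeps the max, a lagging RPC response cannot roll the tracker backwards.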
Concurrency control (avoid races)
- Use a per-address mutex/lock (backend) or single-flight queue (frontend).
- Only one “allocate nonce and send” critical section should run at a time.
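One way to get the single-flight behavior on the frontend is a per-address promise chain: each “allocate nonce and send” task runs only after the previous task for that address settles, while different addresses proceed in parallel. The class name is illustrative.

```typescript
// Per-address single-flight queue: chains each critical section onto the
// previous one, so only one runs at a time for a given address.
class SendQueue {
  private tails = new Map<string, Promise<unknown>>();

  run<T>(address: string, task: () => Promise<T>): Promise<T> {
    const tail = this.tails.get(address) ?? Promise.resolve();
    const next = tail.then(task, task); // run even if the previous send failed
    this.tails.set(address, next.catch(() => {})); // keep the chain alive on errors
    return next;
  }
}
```

On a backend, the equivalent is a per-address mutex (or a single worker per address consuming a queue), which gives the same guarantee across processes if the lock is shared.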
Handling errors and re-sync
- “nonce too low”: Your local tracker is behind; re-fetch `pending`, set local to the max, and retry.
- “replacement transaction underpriced”: If intentionally replacing, bump fees sufficiently; otherwise choose a new higher nonce.
- Dropped transactions: If a tx is dropped, you may reuse its nonce after confirming it’s no longer in the mempool; the safer option is to send a replacement with a higher fee.
- Periodically re-sync from RPC (timer or after N sends).
Replacement transactions (EIP-1559)
- To replace a pending tx, send a new tx with the same nonce and a higher effective priority fee/tip.
- Ensure the replacement increases fees enough to be accepted by the node/p2p policy.
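A sketch of computing replacement fees. The 10% minimum bump is the common default in go-ethereum-derived txpool policies, not a protocol rule, so check what your node or provider enforces; the 12% default below is an arbitrary margin above it.

```typescript
// Bump both EIP-1559 fee fields for a replacement transaction.
// Assumes a geth-style policy requiring >= ~10% on both fields; verify
// your node's actual replacement policy.
function bumpFees(
  maxFeePerGas: bigint,
  maxPriorityFeePerGas: bigint,
  bumpPercent = 12n, // a little above the usual 10% minimum
): { maxFeePerGas: bigint; maxPriorityFeePerGas: bigint } {
  const bump = (v: bigint) => v + (v * bumpPercent + 99n) / 100n; // round up
  return {
    maxFeePerGas: bump(maxFeePerGas),
    maxPriorityFeePerGas: bump(maxPriorityFeePerGas),
  };
}
```

Send the result with the same nonce as the transaction being replaced; if the node still rejects it as underpriced, increase the bump rather than moving to a new nonce.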
When to rely on node-managed nonces
- It’s fine to omit `nonce` and let the node set it if:
  - You send strictly sequential transactions (no parallelism).
  - Throughput and latency requirements are modest.
- If you need parallelism or precise control, manage nonces locally.
Client implementation (React + viem)
Install
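The client examples assume viem is installed; add it alongside your existing React tooling:

```shell
npm install viem
```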
Hook: useNonceManager
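React wiring is omitted here; the sketch below is the framework-agnostic core that a `useNonceManager` hook could hold in a `useRef` (one instance per address) and expose as a `withNonce` callback. The `NonceRpc` interface mirrors viem’s `getTransactionCount`; class and method names are illustrative.

```typescript
type Hex = `0x${string}`;

// Structural match for viem's PublicClient.getTransactionCount.
interface NonceRpc {
  getTransactionCount(args: { address: Hex; blockTag: "pending" }): Promise<number>;
}

class NonceManager {
  private next: number | null = null; // null => re-sync from RPC before next use
  private chain: Promise<unknown> = Promise.resolve();

  constructor(private rpc: NonceRpc, private address: Hex) {}

  // Serialize allocations; lazily re-sync on first use or after a failure.
  withNonce<T>(send: (nonce: number) => Promise<T>): Promise<T> {
    const run = async (): Promise<T> => {
      if (this.next === null) {
        this.next = await this.rpc.getTransactionCount({
          address: this.address,
          blockTag: "pending",
        });
      }
      const nonce = this.next;
      try {
        const result = await send(nonce);
        this.next = nonce + 1; // bump only after a successful send
        return result;
      } catch (err) {
        this.next = null; // force re-sync after a failure (e.g. nonce drift)
        throw err;
      }
    };
    const out = this.chain.then(run, run);
    this.chain = out.catch(() => {});
    return out;
  }

  reset(): void {
    this.next = null;
  }
}
```

Because failures null out the cached nonce, the next send automatically re-syncs from RPC, which covers the “nonce too low/high” recovery paths without extra bookkeeping in components.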
Component: submit a transaction on Arbitrum
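The JSX and state wiring are omitted; the sketch below is just the component’s submit handler. `wallet.sendTransaction` mirrors viem’s `WalletClient.sendTransaction`, and `readPending` stands in for `publicClient.getTransactionCount({ address, blockTag: 'pending' })`; both parameter names are hypothetical.

```typescript
type Addr = `0x${string}`;

interface TxRequest { to: Addr; value: bigint; nonce: number }

// Submit one transaction, then gate further sends until it is visible
// in the pending state (the Arbitrum tip described below).
async function submitTx(
  wallet: { sendTransaction(tx: TxRequest): Promise<Addr> },
  readPending: () => Promise<number>,
  to: Addr,
  value: bigint,
): Promise<Addr> {
  // Allocate from the pending nonce just before sending.
  const nonce = await readPending();
  const hash = await wallet.sendTransaction({ to, value, nonce });
  // Do not allow a follow-up send with nonce + 1 until this one is visible.
  while ((await readPending()) <= nonce) {
    await new Promise((r) => setTimeout(r, 250));
  }
  return hash;
}
```

In a component you would disable the submit button while this promise is in flight, which gives the single-flight behavior for free in simple UIs.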
Arbitrum tip: gate sending of nonce N+1 until N is visible in pending (or confirmed), and avoid incrementing local state until you actually send. If you receive “nonce too high”, immediately re-sync from RPC and pause higher-nonce submissions; ask your provider to minimize the Nitro hold window so errors surface immediately.
Summary
By combining local caching and on-chain syncing, your dApp can reliably manage nonces even in complex transaction flows.