MeshSky Docs

History

Per-aircraft track history on MeshSky — in-memory ring buffers, durable SQLite storage, and multi-gateway fan-in.

Every AirBridge gateway keeps a per-aircraft history of the positions its feeders have observed. The history layer is two-tier — a fast in-memory ring buffer for “recent breadcrumbs”, plus a durable SQLite store for “everything since timestamp T”. A controltower talking to multiple gateways unions all of their slices into one answer, so you get the full picture even though no single gateway sees the whole mesh.

What’s stored

For every accepted aircraft update, AirBridge writes one row containing:

  • icao — the 24-bit ICAO hex address.
  • ts — observation timestamp (ms epoch).
  • lat, lon, alt_baro, alt_geom — position.
  • ground_speed, track, vertical_rate — kinematics.
  • squawk, flight, on_ground — identity & status.
  • position_source — "adsb" or "mlat".
  • source — which ingest path produced the record.
  • source_node_id — the AirBridge that observed it.

Updates without a usable position and without kinematic data are dropped on the way in — we never store empty rows.
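
As a reference for consumers, here is a minimal sketch of one row as a TypeScript type. The field names follow the columns listed above; the nullability and exact types are assumptions, not the actual schema.

// Sketch only: names follow the column list above; types and nullability are assumed.
interface HistoryRow {
  icao: string;                     // 24-bit ICAO hex address
  ts: number;                       // observation timestamp (ms epoch)
  lat: number | null;               // position
  lon: number | null;
  alt_baro: number | null;
  alt_geom: number | null;
  ground_speed: number | null;      // kinematics
  track: number | null;
  vertical_rate: number | null;
  squawk: string | null;            // identity & status
  flight: string | null;
  on_ground: boolean | null;
  position_source: 'adsb' | 'mlat';
  source: string;                   // ingest path that produced the record
  source_node_id: string;           // the AirBridge that observed it
}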

In-memory ring (TrackRing)

In front of the SQLite store sits a per-ICAO ring buffer:

  • Default 200 points per aircraft.
  • Default 30 minutes maximum age.
  • Lazily evicted on read and on every append.

This serves the cheap “give me the last N fixes” query without touching disk.
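
To make the eviction rules concrete, here is a simplified sketch of such a ring; the class name and internals are illustrative assumptions, not the actual TrackRing implementation.

// Sketch only: a bounded, age-limited per-ICAO buffer.
// The defaults mirror the values above (200 points, 30 minutes).
class TrackRingSketch<T extends { ts: number }> {
  private points: T[] = [];

  constructor(private maxPoints = 200, private maxAgeMs = 30 * 60 * 1000) {}

  push(point: T): void {
    this.points.push(point);
    this.evict(point.ts);           // evict on every append
  }

  recent(now = Date.now()): T[] {
    this.evict(now);                // evict lazily on read
    return this.points.slice();     // oldest-first
  }

  private evict(now: number): void {
    const cutoff = now - this.maxAgeMs;
    while (this.points.length > 0 && this.points[0].ts < cutoff) this.points.shift();
    while (this.points.length > this.maxPoints) this.points.shift();
  }
}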

Durable store (SQLite + WAL)

The persistent layer is better-sqlite3 with:

  • journal_mode = WAL — writers don’t block readers.
  • synchronous = NORMAL — durable enough for telemetry, ~10× faster than FULL.
  • busy_timeout = 5000 — forgive transient lock contention.
  • Composite primary key (icao, ts) — natural dedupe on simultaneous writes.

A background pruner runs hourly and trims rows older than keepDays (default 7).
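
Pulled together, the setup looks roughly like the sketch below. The pragmas, the (icao, ts) primary key, and the hourly pruner follow the description above; the table name, the trimmed-down column list, and the wiring are illustrative assumptions.

import Database from 'better-sqlite3';

// Sketch only: table name and column list are abbreviated for illustration.
const db = new Database(process.env.HISTORY_DB_PATH ?? './data/history.sqlite3');
db.pragma('journal_mode = WAL');    // writers don't block readers
db.pragma('synchronous = NORMAL');  // durable enough for telemetry
db.pragma('busy_timeout = 5000');   // wait out transient lock contention

db.exec(`CREATE TABLE IF NOT EXISTS history (
  icao TEXT NOT NULL,
  ts   INTEGER NOT NULL,
  lat REAL, lon REAL, alt_baro INTEGER,
  PRIMARY KEY (icao, ts)            -- natural dedupe on simultaneous writes
)`);

// Hourly pruner: trim rows older than keepDays.
const keepDays = Number(process.env.HISTORY_KEEP_DAYS ?? 7);
const prune = db.prepare('DELETE FROM history WHERE ts < ?');
setInterval(() => {
  prune.run(Date.now() - keepDays * 24 * 60 * 60 * 1000);
}, 3_600_000);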

REST API

Every AirBridge exposes:

GET /global/aircraft/<hex>/track

Returns the in-memory ring for one aircraft (oldest-first). Cheap, served from RAM:

{
  "generatedAt": 1730000000000,
  "hex": "a1b2c3",
  "count": 142,
  "points": [
    { "ts": 1729999970000, "lat": 47.62, "lon": -122.35, "altBaro": 33000,
      "groundSpeed": 462, "track": 285, "positionSource": "adsb",
      "source": "feeder", "sourceNodeId": "ab-pnw-1" }
  ]
}

GET /global/aircraft/<hex>/history?sinceMs=…&untilMs=…&limit=…

Returns durable history from SQLite for one aircraft within a time range. limit defaults to 5,000 and is capped at 50,000 rows.

Both endpoints require an authenticated session bearer token (see Protocol).
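
As an example, a client-side sketch of calling the history endpoint; the base URL, hex, and token are placeholders, and the Authorization: Bearer header format is an assumption (see Protocol for the actual session handshake).

// Sketch only: URL, hex, and token are placeholders; the header format is assumed.
const gatewayUrl = 'https://airbridge.example.net';
const sessionToken = 'session-token-from-protocol-handshake';

const params = new URLSearchParams({
  sinceMs: String(Date.now() - 6 * 60 * 60 * 1000),  // last six hours
  limit: '10000',
});
const res = await fetch(`${gatewayUrl}/global/aircraft/a1b2c3/history?${params}`, {
  headers: { Authorization: `Bearer ${sessionToken}` },
});
const history = await res.json();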

Multi-gateway fan-in

A single controltower may peer with several AirBridges (each sovereign, each holding only the tracks its own feeders have reported). The ControlTowerHistory helper takes an array of AirBridgeClient instances and unions their answers:

  • Issues every per-gateway query in parallel via Promise.allSettled.
  • A dead gateway never blocks the others.
  • Failures surface as partial: true and in the per-gateway sources[] array, rather than silently disappearing.
  • Points are deduplicated by (hex, ts) and sorted oldest-first.
  • The merged response is re-clamped to limit rows.

import { ControlTowerHistory } from 'meshsky-controltower';

const history = new ControlTowerHistory({ clients });
const track = await history.getAircraftTrack('a1b2c3');
const since = await history.getAircraftHistory('a1b2c3', {
  sinceMs: Date.now() - 24 * 60 * 60 * 1000,
  limit: 10_000
});
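
Under the hood, the union step the bullets above describe amounts to something like the following simplified sketch; it assumes each point carries hex and ts fields and is not the actual ControlTowerHistory code.

// Sketch only: dedupe by (hex, ts), sort oldest-first, clamp to `limit`.
interface MergedPoint { hex: string; ts: number }

function mergeGatewayPoints<T extends MergedPoint>(perGateway: T[][], limit: number): T[] {
  const seen = new Map<string, T>();
  for (const points of perGateway) {
    for (const p of points) {
      seen.set(`${p.hex}:${p.ts}`, p);  // duplicates collapse onto the same key
    }
  }
  return Array.from(seen.values())
    .sort((a, b) => a.ts - b.ts)        // oldest-first
    .slice(0, limit);                   // re-clamp to limit rows
}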

Configuration knobs

AirBridge reads these from its environment:

Variable                    Default                   Purpose
HISTORY_ENABLED             true                      Master switch.
HISTORY_DB_PATH             ./data/history.sqlite3    SQLite file (use :memory: for tests).
HISTORY_KEEP_DAYS           7                         Pruner retention.
HISTORY_PRUNE_INTERVAL_MS   3_600_000                 Pruner cadence.
HISTORY_TRACK_MAX_POINTS    200                       In-memory ring depth per ICAO.
HISTORY_TRACK_MAX_AGE_MS    1_800_000                 In-memory ring max age per ICAO.
HISTORY_MIN_INTERVAL_MS     0                         Optional throttle between writes.
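
For illustration, a small sketch of reading those knobs with their documented defaults; the helper and object shape below are assumptions, not AirBridge's actual config loader.

// Sketch only: env names and defaults come from the table above.
const env = (name: string, fallback: string) => process.env[name] ?? fallback;

const historyConfig = {
  enabled: env('HISTORY_ENABLED', 'true') === 'true',
  dbPath: env('HISTORY_DB_PATH', './data/history.sqlite3'),
  keepDays: Number(env('HISTORY_KEEP_DAYS', '7')),
  pruneIntervalMs: Number(env('HISTORY_PRUNE_INTERVAL_MS', '3600000')),
  trackMaxPoints: Number(env('HISTORY_TRACK_MAX_POINTS', '200')),
  trackMaxAgeMs: Number(env('HISTORY_TRACK_MAX_AGE_MS', '1800000')),
  minIntervalMs: Number(env('HISTORY_MIN_INTERVAL_MS', '0')),
};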

Why two tiers

The ring is fast, bounded, and survives only as long as the gateway process. The SQLite store is durable, deeper, and survives restarts. Most clients only ever need the ring; analytics and replay use the durable store. The split keeps hot-path queries from waiting on disk and keeps deep history available when you want to ask for it.