
Capabilities

This page is the public-facing summary of what Transit ships.

All examples below invoke the transit CLI directly.

Capability Map

| Slice | Primary commands | What it proves today | Important boundary |
| --- | --- | --- | --- |
| local engine | transit proof local-engine, transit status | append, replay, branch, merge, crash recovery, and log-state inspection | local proof roots are filesystem-backed |
| segment compression | transit proof compression, transit proof tiered-engine | zstd-compressed immutable segments with transparent replay through local, tiered, and hosted paths | compression applies to sealed segments only, not payloads or hosted envelopes |
| retention lifecycle | transit proof retention, transit status, transit streams list, transit server lineage | explicit per-stream retention policy, retained-frontier status, and bounded replay after whole-segment aging | retention is not Kafka-style compaction or selective erasure |
| hosted operator surface | transit server run, transit streams, transit produce, transit consume, transit server append, transit proof hosted-authority, transit proof hosted-materialization | shared-engine server semantics over a remote boundary, including the Rust client surface, single-stream batch append, hosted materialization checkpoints, explicit transport-timeout tuning, and concurrent producer/consumer handling | current remote operator surface is still single-node |
| published authority model | transit proof object-store-authority, transit proof tiered-engine, transit proof warm-cache-recovery | mutable working-plane files plus immutable published segment and manifest objects discovered through frontiers/latest.json | the active head is not itself a published object-store snapshot |
| storage and recovery | transit proof tiered-engine, transit proof warm-cache-recovery, transit storage probe | tiered publication, cold restore, restart-after-cache-loss from remote authority, and effective-config verification | storage probe verifies the current local guarantee plus hosted bootstrap posture, not remote-tier or cloud-provider durability |
| integrity and derived state | transit proof integrity, transit proof materialization, transit proof hosted-materialization, transit proof reference-projection | manifest roots, checkpoints, hosted checkpoint verification, replay-anchored views, and resume correctness | signed attestation and compact partial proofs are later layers |
| independent reader progress | embedded cursor API on LocalEngine, hosted cursor API on TransitClient, transit proof hosted-materialization | durable per-consumer cursors with monotonic advance, hosted materialization checkpoints, local-durability ack, and restart recovery | hosted cursors track reader progress; they do not change authoritative stream truth or branch semantics |
| failover and distributed durability | transit proof controlled-failover, transit proof chaos-failover | lease handoff, automatic election, quorum acknowledgement, and former-primary fencing | not a multi-primary system |

Shared Engine

Today Transit can:

  • append immutable records to a stream
  • replay ordered history
  • create explicit branches and merges with preserved lineage
  • report local log state with transit status --root <path>

The fastest embedded proof path is still:

transit proof local-engine --root target/transit-docs/local

For a concise inspection of the resulting log, use:

transit status --root target/transit-docs/local
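The append/replay/branch/merge contract above can be sketched with a toy in-memory log. Everything here is illustrative: the class and method names are not the real LocalEngine API, and the real engine is durable rather than in-memory.

```python
# Toy in-memory log sketching append, replay, branch, and merge with
# preserved lineage. Illustrative only; not the LocalEngine API.

class ToyLog:
    def __init__(self):
        self.streams = {"root": []}    # stream id -> ordered records
        self.lineage = {"root": None}  # stream id -> (parent, fork offset)

    def append(self, stream, record):
        self.streams[stream].append(record)
        return len(self.streams[stream]) - 1   # offset of the new record

    def replay(self, stream):
        return list(self.streams[stream])      # ordered history

    def branch(self, parent, child):
        # The child starts with the parent's history and remembers the fork.
        self.streams[child] = list(self.streams[parent])
        self.lineage[child] = (parent, len(self.streams[parent]))

    def merge(self, source, target):
        # Bring over only the records appended after the fork point.
        fork = self.lineage[source][1]
        self.streams[target].extend(self.streams[source][fork:])

log = ToyLog()
log.append("root", "a")
log.branch("root", "side")
log.append("side", "b")
log.merge("side", "root")
assert log.replay("root") == ["a", "b"]
```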

Retention And Bounded Replay

Transit exposes retention as explicit per-stream lifecycle policy, not as hidden compaction:

  • no stream gets a retention policy unless you configure one
  • --retention-max-age-days and --retention-max-bytes age out whole rolled segments
  • the active segment is preserved
  • retained_start_offset tells you where replayable history now begins
  • replay becomes bounded once old history is trimmed away
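The lifecycle above can be sketched as whole-segment aging under a byte budget. This is a minimal model of the documented behavior, not Transit's implementation; the segment tuples and the budget value are illustrative.

```python
# Sketch of whole-segment retention aging: rolled segments age out oldest
# first once the byte budget is exceeded, the active segment is always
# preserved, and retained_start_offset reports where replayable history
# now begins. Illustrative model only.

def age_out(rolled_segments, active_segment, max_bytes):
    # Each segment is (start_offset, size_bytes). Keep the newest rolled
    # segments whose total size, including the active head, fits the budget.
    kept = [active_segment]
    total = active_segment[1]
    for seg in reversed(rolled_segments):   # newest rolled segment first
        if total + seg[1] > max_bytes:
            break                           # this and all older segments age out
        kept.insert(0, seg)
        total += seg[1]
    retained_start_offset = kept[0][0]
    return kept, retained_start_offset

rolled = [(0, 100), (50, 100), (100, 100)]  # (start_offset, size_bytes)
active = (150, 40)
kept, start = age_out(rolled, active, max_bytes=250)
assert start == 50   # offsets before 50 are no longer replayable
```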

The shortest proof for that slice is:

transit proof retention --root target/transit-docs/retention

That proof exercises:

  • retention-aware stream creation
  • hosted streams list output for retention_age and retention_bytes
  • retained-frontier status through transit server lineage and transit status
  • bounded replay after old rolled segments have been dropped

Retention is coarse-grained history aging. It is not Kafka-style latest-value compaction and it is not selective privacy erasure.

Immutable Segment Compression

Transit now makes its storage-level compression contract real:

  • sealed rolled segments default to zstd
  • the active head stays uncompressed
  • replay, restore, and hosted reads decode compressed segments transparently
  • operators can still opt out with compression = "none"
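That contract can be sketched as follows, with zlib standing in for zstd because zstd bindings are not in the Python standard library; the newline-delimited record framing is also a simplification for brevity.

```python
import zlib

# Sketch of the sealed-segment contract: sealed segments are stored
# compressed (zlib as a stand-in for zstd), the active head stays raw,
# and replay decodes transparently and returns ordinary records.

def seal(records):
    raw = b"\n".join(records)          # newline framing, for brevity only
    stored = zlib.compress(raw)
    return {"stored": stored,
            "stored_size": len(stored),
            "uncompressed_size": len(raw)}

def replay(sealed_segments, active_records):
    out = []
    for seg in sealed_segments:        # transparently decode sealed segments
        out.extend(zlib.decompress(seg["stored"]).split(b"\n"))
    out.extend(active_records)         # the active head is uncompressed
    return out

sealed = [seal([b"r0", b"r1"]), seal([b"r2", b"r3"])]
assert replay(sealed, [b"r4"]) == [b"r0", b"r1", b"r2", b"r3", b"r4"]
```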

The shortest proof for that slice is:

transit proof compression --root target/transit-docs/compression

That proof shows:

  • a rolled local segment with explicit stored and uncompressed sizes
  • tiered publication and restore over compressed segment objects
  • hosted replay returning ordinary records through the same shared-engine decode path

This is immutable segment compression only. It does not compress individual record payloads, and it does not compress hosted request or response envelopes.

Server And Operator Surface

Transit exposes a kcat-style operator path on top of the hosted server boundary:

transit streams create --stream-id demo.root --actor docs --reason bootstrap
printf 'hello\nworld\n' | transit produce --stream-id demo.root
transit consume --stream-id demo.root --from-offset 0 --with-offsets
transit streams list
transit streams delete --stream-id demo.root --force

The lower-level transit server ... namespace still exists for protocol-shaped operations like branch, merge, lineage, tail-open, tail-poll, and tail-cancel.

It also exposes single-stream batch append directly:

transit server append \
  --stream-id demo.root \
  --payload-text first \
  --payload-text second \
  --payload-text third

That path sends one hosted append-batch request for one stream, returns one acknowledgement for the whole batch, and still leaves replay and tail behavior record-oriented.
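Those batch semantics, including the explicit record-count and total-byte limits described later on this page, can be sketched as one bounded request producing one acknowledgement. The limit values and the error shape here are illustrative, not Transit's real configuration.

```python
# Sketch of bounded single-stream batch append: one request, one
# acknowledgement for the whole batch, explicit record-count and
# total-byte limits. Limit values and error text are illustrative.

MAX_RECORDS = 1000          # illustrative, not Transit's real limit
MAX_TOTAL_BYTES = 1 << 20   # 1 MiB; also illustrative

def append_batch(log, payloads):
    if not payloads or len(payloads) > MAX_RECORDS:
        raise ValueError("invalid_request: empty or oversized batch")
    if sum(len(p) for p in payloads) > MAX_TOTAL_BYTES:
        raise ValueError("invalid_request: batch exceeds byte budget")
    base = len(log)
    log.extend(payloads)    # records are never split or repacked
    return {"first_offset": base, "count": len(payloads)}  # one ack

log = []
ack = append_batch(log, [b"first", b"second", b"third"])
assert ack == {"first_offset": 0, "count": 3}
```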

Hosted transport tuning is explicit too:

transit server run --connection-io-timeout-ms 5000
transit proof hosted-authority \
  --server-connection-io-timeout-ms 5000 \
  --client-io-timeout-ms 5000

Those knobs are runtime configuration only. They do not change record offsets, acknowledgement meaning, replay shape, or tail behavior.

The hosted daemon now also serves accepted connections concurrently, so an idle or slow socket no longer forces unrelated producer and consumer requests to wait in strict listener-loop order.
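That concurrency model can be sketched as a thread-per-connection accept loop; the echo handler below is a placeholder for real request handling, and none of this is Transit's actual daemon code.

```python
import socket
import threading

# Sketch of concurrent connection handling: every accepted socket is
# served on its own thread, so a slow or idle connection cannot hold up
# unrelated requests in strict listener-loop order. Illustrative only.

def handle(conn):
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)          # placeholder request handling: echo

def serve(listener):
    while True:
        conn, _addr = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))         # ephemeral loopback port
listener.listen()
threading.Thread(target=serve, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
client.sendall(b"ping")
assert client.recv(4096) == b"ping"
```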

For Rust consumers, the upstream transit-client crate includes a generic replay-driven projection-consumer helper. That keeps derived reads on the hosted client boundary without inventing a projection-only server truth path or forcing downstream repos to publish a private wrapper.

The same hosted boundary now includes TransitClient::append_batch(...) for single-stream batching. It keeps the normal RemoteAcknowledged<_> envelope instead of inventing a second client-side batching dialect. The Rust client also exposes TransitClient::with_io_timeout(...) when a hosted workload needs more than the default 1s socket timeout. Hosted clients can now also stay on that boundary for materialization:

  • TransitClient::create_cursor(...), ack_cursor(...), and get_cursor(...) persist external consumer progress without app-owned offset files
  • TransitClient::materialize_checkpoint(...) stores opaque reducer state against a Transit-native lineage anchor
  • TransitClient::materialize_resume(...) replays only records after the checkpoint anchor and fails if the anchor no longer verifies
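The checkpoint-and-resume flow can be sketched with Python stand-ins for those Rust methods. The anchor here is just a hash over the stream prefix; the real lineage anchor format, and every name below, is illustrative.

```python
import hashlib

# Python stand-in for the hosted materialization flow: a checkpoint stores
# opaque reducer state against a lineage anchor (here, a hash over the
# stream prefix), and resume replays only post-anchor records, failing if
# the anchor no longer verifies. Names and shapes are illustrative.

def anchor(stream, upto):
    return hashlib.sha256(b"".join(stream[:upto])).hexdigest()

def materialize_checkpoint(stream, upto, state):
    return {"anchor": anchor(stream, upto), "upto": upto, "state": state}

def materialize_resume(stream, checkpoint):
    if anchor(stream, checkpoint["upto"]) != checkpoint["anchor"]:
        raise RuntimeError("checkpoint anchor no longer verifies")
    # Replay only the records appended after the checkpoint anchor.
    return checkpoint["state"], stream[checkpoint["upto"]:]

stream = [b"a", b"b", b"c"]
ckpt = materialize_checkpoint(stream, 2, state={"count": 2})
stream.append(b"d")                     # new records after the checkpoint
state, tail = materialize_resume(stream, ckpt)
assert tail == [b"c", b"d"]             # post-anchor records only
```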

The shortest public proof for that slice is:

transit proof hosted-materialization \
  --root target/transit-docs/hosted-materialization \
  --server-connection-io-timeout-ms 5000 \
  --client-io-timeout-ms 5000

Server mode is single-node at the protocol surface. The daemon preserves shared-engine semantics instead of introducing a separate storage model, and the common remote commands default to [server].listen_addr when you configure it through transit.toml.

The current hosted implementation keeps batch append bounded with explicit record-count and total-byte limits. Empty or oversized batches fail through the normal hosted invalid_request path rather than silently splitting or repacking payloads.

Storage, Recovery, And Verification

Transit currently ships:

  • tiered publication and cold restore
  • a published authority model with immutable manifest snapshots and one mutable latest-frontier pointer
  • storage verification for the effective local guarantee and hosted bootstrap posture with transit storage probe
  • warm-cache recovery after explicit cache loss from the authoritative remote tier
  • hosted-authority proof coverage for thin-client producers and readers

Useful proof entry points:

transit proof object-store-authority --root target/transit-docs/object-store-authority
transit proof tiered-engine --root target/transit-docs/tiered
transit proof warm-cache-recovery --root target/transit-docs/warm-cache
transit --config target/transit-docs/storage-probe.toml storage probe

transit proof object-store-authority is the shortest proof for the storage authority model itself. It shows:

  • active.segment and state.json as the mutable working plane
  • published/streams/<id>/segments, manifests, and frontiers/latest.json as the local filesystem-backed published namespace
  • the same streams/<id>/segments|manifests|frontiers/latest.json shape under a remote object-store prefix
  • restore from the latest frontier pointer without appending to one mutable remote manifest object
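That layout can be sketched with a plain dict standing in for the object store. The keys mirror the namespace shape above; the object contents and the demo stream id are illustrative.

```python
import json

# Sketch of the published authority layout: immutable segment and manifest
# objects under a stream prefix, plus one mutable frontiers/latest.json
# pointer. Restore reads the pointer once, then only immutable objects.
# A dict stands in for the object store; contents are illustrative.

store = {
    "streams/demo/segments/00000000.seg": b"segment-bytes",
    "streams/demo/manifests/00000001.json": json.dumps(
        {"segments": ["streams/demo/segments/00000000.seg"]}
    ).encode(),
    # The only mutable object: a pointer to the latest manifest snapshot.
    "streams/demo/frontiers/latest.json": json.dumps(
        {"manifest": "streams/demo/manifests/00000001.json"}
    ).encode(),
}

def restore(store, stream):
    frontier = json.loads(store[f"streams/{stream}/frontiers/latest.json"])
    manifest = json.loads(store[frontier["manifest"]])
    return [store[key] for key in manifest["segments"]]  # immutable reads only

assert restore(store, "demo") == [b"segment-bytes"]
```

Publishing a new snapshot under this model means writing fresh immutable segment and manifest objects, then swinging the single latest.json pointer, rather than appending to one mutable remote manifest object.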

transit storage probe is intentionally narrow. It validates the current runtime bootstrap posture and reports the explicit guarantee it can actually prove today. transit proof warm-cache-recovery is the complementary proof that discards the local cache and shows the hosted server can rebuild from the published remote frontier.

Integrity And Derived State

Transit also ships:

  • manifest-root and checkpoint verification
  • tamper-detection coverage through the integrity proof
  • incremental materialization with checkpoint resume
  • reference-projection proof coverage showing replay and resume converge on the same derived view
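The manifest-root idea can be sketched as a digest over per-segment digests, so any byte-level tamper changes the recomputed root. The digest layout below is illustrative and deliberately simpler than Transit's real manifest format.

```python
import hashlib

# Sketch of manifest-root verification: the root commits to the digest of
# every sealed segment, so tampering with any segment byte changes the
# recomputed root. Digest layout is illustrative, not Transit's format.

def segment_digest(data):
    return hashlib.sha256(data).hexdigest()

def manifest_root(segment_digests):
    return hashlib.sha256("".join(segment_digests).encode()).hexdigest()

segments = [b"seg-0", b"seg-1"]
root = manifest_root([segment_digest(s) for s in segments])

tampered = [b"seg-0", b"seg-X"]
assert manifest_root([segment_digest(s) for s in tampered]) != root
```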

Useful proof entry points:

transit proof integrity --root target/transit-docs/integrity
transit proof materialization --root target/transit-docs/materialization
transit proof reference-projection --root target/transit-docs/reference-projection

For concrete examples — real segment digests, manifest roots, a merge manifest, checkpoint verification, and the exact output tamper detection produces — see Cryptographic Proofs.

Independent Reader Progress

Transit ships a first-class cursor primitive on the embedded engine:

  • durable per-consumer cursor records with id, bound stream, position, and lineage metadata
  • create, lookup, list, advance, ack, and delete operations on LocalEngine
  • monotonic advance clamped to the committed stream frontier, with explicit errors for backward moves and frontier overruns
  • local-durability acknowledgement matching the language Transit already uses for appends
  • restart and warm-cache recovery of cursor state alongside stream manifests

Branches remain for lineage forks. Cursors carry per-reader progress so independent consumers can advance on the same stream without maintaining an external offset store or misusing branches.
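The advance rules above can be sketched in a few lines; the class and field names are illustrative stand-ins, not the LocalEngine cursor API.

```python
# Sketch of the documented cursor semantics: advance is monotonic and
# bounded by the committed stream frontier, with explicit errors for
# backward moves and frontier overruns. Names are illustrative.

class Cursor:
    def __init__(self, cursor_id, stream_frontier):
        self.cursor_id = cursor_id
        self.position = 0
        self.frontier = stream_frontier   # committed stream frontier

    def advance(self, to):
        if to < self.position:
            raise ValueError("backward move rejected")
        if to > self.frontier:
            raise ValueError("frontier overrun rejected")
        self.position = to                # monotonic advance
        return self.position

c = Cursor("reader-1", stream_frontier=10)
assert c.advance(4) == 4
```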

The same semantics are now available across the hosted boundary through TransitClient and the hosted materialization proof:

  • hosted cursor create, lookup, ack, and delete operations
  • hosted checkpoints bound to source-stream lineage instead of raw app offsets
  • hosted incremental resume that replays only post-anchor records

That hosted surface is still client-first. It does not require downstream apps to open LocalEngine or mirror remote streams just to checkpoint reducer progress.

Failover And Distributed Durability

The shared engine supports:

  • controlled failover with explicit lease handoff and former-primary fencing
  • automatic failover through ElectionMonitor
  • quorum durability with ClusterMembership-based majority acknowledgement
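The majority-acknowledgement rule can be sketched as a strict-majority check over cluster members; the function and node names are illustrative, not the real ClusterMembership type.

```python
# Sketch of quorum durability: an append counts as durably acknowledged
# only once a strict majority of cluster members have confirmed it.
# Membership handling here is illustrative, not ClusterMembership.

def quorum_acknowledged(acks, membership):
    needed = len(membership) // 2 + 1             # strict majority
    return len(set(acks) & set(membership)) >= needed

members = ["node-a", "node-b", "node-c"]
assert quorum_acknowledged(["node-a", "node-c"], members)
assert not quorum_acknowledged(["node-a"], members)
```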

Useful proof entry points:

transit proof controlled-failover --root target/transit-docs/controlled-failover
transit proof chaos-failover --root target/transit-docs/chaos-failover

Boundaries

Transit does not claim:

  • multi-primary or concurrent multi-writer behavior
  • hidden remote-tier safety behind a local acknowledgement
  • automatic remote object-store reclamation when deleting a published stream
  • stable pre-1.0 on-disk or wire compatibility

Those non-claims are part of the product contract, not temporary wording.
