# Capabilities
This page is the public-facing summary of what Transit ships.
All examples below invoke the transit CLI directly.
## Capability Map
| Slice | Primary commands | What it proves today | Important boundary |
|---|---|---|---|
| local engine | transit proof local-engine, transit status | append, replay, branch, merge, crash recovery, and log-state inspection | local proof roots are filesystem-backed |
| segment compression | transit proof compression, transit proof tiered-engine | zstd-compressed immutable segments with transparent replay through local, tiered, and hosted paths | compression applies to sealed segments only, not payloads or hosted envelopes |
| retention lifecycle | transit proof retention, transit status, transit streams list, transit server lineage | explicit per-stream retention policy, retained-frontier status, and bounded replay after whole-segment aging | retention is not Kafka-style compaction or selective erasure |
| hosted operator surface | transit server run, transit streams, transit produce, transit consume, transit server append, transit proof hosted-authority, transit proof hosted-materialization | shared-engine server semantics over a remote boundary, including the Rust client surface, single-stream batch append, hosted materialization checkpoints, explicit transport-timeout tuning, and concurrent producer/consumer handling | current remote operator surface is still single-node |
| published authority model | transit proof object-store-authority, transit proof tiered-engine, transit proof warm-cache-recovery | mutable working-plane files plus immutable published segment and manifest objects discovered through frontiers/latest.json | the active head is not itself a published object-store snapshot |
| storage and recovery | transit proof tiered-engine, transit proof warm-cache-recovery, transit storage probe | tiered publication, cold restore, restart-after-cache-loss from remote authority, and effective-config verification | storage probe verifies the current local guarantee plus hosted bootstrap posture, not remote-tier or cloud-provider durability |
| integrity and derived state | transit proof integrity, transit proof materialization, transit proof hosted-materialization, transit proof reference-projection | manifest roots, checkpoints, hosted checkpoint verification, replay-anchored views, and resume correctness | signed attestation and compact partial proofs are later layers |
| independent reader progress | embedded cursor API on LocalEngine, hosted cursor API on TransitClient, transit proof hosted-materialization | durable per-consumer cursors with monotonic advance, hosted materialization checkpoints, local-durability ack, and restart recovery | hosted cursors track reader progress; they do not change authoritative stream truth or branch semantics |
| failover and distributed durability | transit proof controlled-failover, transit proof chaos-failover | lease handoff, automatic election, quorum acknowledgement, and former-primary fencing | not a multi-primary system |
## Shared Engine
Today Transit can:
- append immutable records to a stream
- replay ordered history
- create explicit branches and merges with preserved lineage
- report local log state with `transit status --root <path>`
The fastest embedded proof path is still:

```shell
transit proof local-engine --root target/transit-docs/local
```

For a concise inspection of the resulting log, use:

```shell
transit status --root target/transit-docs/local
```
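The append, replay, and branch-with-lineage semantics above can be pictured with a toy in-memory model (merge is omitted for brevity). `Record`, `Stream`, and the lineage tuple are illustrative assumptions, not the real `LocalEngine` API:

```rust
// A toy in-memory model of append / replay / branch semantics.
// All names here are illustrative, not the real LocalEngine API.

#[derive(Clone, Debug, PartialEq)]
struct Record {
    offset: u64,
    payload: String,
}

struct Stream {
    records: Vec<Record>,
    // Explicit lineage: which stream this branch forked from, and where.
    parent: Option<(String, u64)>,
}

impl Stream {
    fn new() -> Self {
        Stream { records: Vec::new(), parent: None }
    }

    // Append is strictly ordered: each record takes the next offset.
    fn append(&mut self, payload: &str) -> u64 {
        let offset = self.records.len() as u64;
        self.records.push(Record { offset, payload: payload.to_string() });
        offset
    }

    // Replay returns the full ordered history.
    fn replay(&self) -> &[Record] {
        &self.records
    }

    // Branching copies history up to `at` and preserves lineage.
    fn branch(&self, parent_id: &str, at: u64) -> Stream {
        Stream {
            records: self.records[..at as usize].to_vec(),
            parent: Some((parent_id.to_string(), at)),
        }
    }
}

fn main() {
    let mut root = Stream::new();
    assert_eq!(root.append("a"), 0);
    assert_eq!(root.append("b"), 1);

    let mut fork = root.branch("demo.root", 1);
    fork.append("c");

    // The branch kept shared history and recorded where it forked.
    assert_eq!(fork.replay()[0].payload, "a");
    assert_eq!(fork.parent, Some(("demo.root".to_string(), 1)));
    // The root stream is untouched by the fork's append.
    assert_eq!(root.replay().len(), 2);
}
```

The point of the sketch is the invariant, not the storage: records are immutable once appended, and a branch is a new stream that remembers its fork point rather than a mutation of the original.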
## Retention And Bounded Replay
Transit exposes retention as explicit per-stream lifecycle policy, not as hidden compaction:
- no stream gets a retention policy unless you configure one
- `--retention-max-age-days` and `--retention-max-bytes` age out whole rolled segments
- the active segment is preserved
- `retained_start_offset` tells you where replayable history now begins
- replay becomes bounded once old history is trimmed away
The shortest proof for that slice is:
```shell
transit proof retention --root target/transit-docs/retention
```
That proof exercises:
- retention-aware stream creation
- hosted `streams list` output for `retention_age` and `retention_bytes`
- retained-frontier status through `transit server lineage` and `transit status`
- bounded replay after old rolled segments have been dropped
Retention is coarse-grained history aging. It is not Kafka-style latest-value compaction and it is not selective privacy erasure.
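As a rough model of that lifecycle, the sketch below ages out whole rolled segments, oldest first, until both limits hold, then reports where replayable history begins. The `Segment` shape and `retained_start_offset` function are assumptions for illustration, not engine internals:

```rust
// Sketch of whole-segment retention aging. Aging is all-or-nothing
// per rolled segment; the active segment is never considered here.
// These types are illustrative, not the real engine's.

struct Segment {
    start_offset: u64,
    end_offset: u64, // exclusive; also the next segment's start
    bytes: u64,
    age_days: u64,
}

// Walk rolled segments oldest-first, dropping each one that violates
// the age limit or keeps the retained set over the byte budget.
fn retained_start_offset(rolled: &[Segment], max_age_days: u64, max_bytes: u64) -> u64 {
    let mut kept_bytes: u64 = rolled.iter().map(|s| s.bytes).sum();
    let mut start = rolled.first().map_or(0, |s| s.start_offset);

    for seg in rolled {
        if seg.age_days > max_age_days || kept_bytes > max_bytes {
            // No partial trims: the whole segment ages out.
            kept_bytes -= seg.bytes;
            start = seg.end_offset;
        } else {
            break;
        }
    }
    start
}

fn main() {
    let rolled = vec![
        Segment { start_offset: 0, end_offset: 100, bytes: 4096, age_days: 30 },
        Segment { start_offset: 100, end_offset: 180, bytes: 4096, age_days: 10 },
    ];
    // A 14-day policy ages out only the first rolled segment.
    assert_eq!(retained_start_offset(&rolled, 14, 1 << 20), 100);
    // A looser policy keeps everything: replay starts at offset 0.
    assert_eq!(retained_start_offset(&rolled, 60, 1 << 20), 0);
}
```

Because segments age out whole, the retained frontier only ever lands on a segment boundary, which is what keeps bounded replay cheap to compute.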
## Immutable Segment Compression
Transit now makes its storage-level compression contract real:
- sealed rolled segments default to `zstd`
- the active head stays uncompressed
- replay, restore, and hosted reads decode compressed segments transparently
- operators can still opt out with `compression = "none"`
The shortest proof for that slice is:
```shell
transit proof compression --root target/transit-docs/compression
```
That proof shows:
- a rolled local segment with explicit stored and uncompressed sizes
- tiered publication and restore over compressed segment objects
- hosted replay returning ordinary records through the same shared-engine decode path
This is immutable segment compression only. It does not compress individual record payloads, and it does not compress hosted request or response envelopes.
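Under those rules, the seal-time codec decision reduces to a tiny function. `Codec` and `codec_for` are illustrative names, assuming the `compression = "none"` config value maps to an explicit no-op codec:

```rust
// Sketch of the seal-time compression decision: only sealed (rolled)
// segments are compressed; the active head always stays plain.
// Names are illustrative, not the engine's actual types.

#[derive(Debug, PartialEq, Clone, Copy)]
enum Codec {
    None,
    Zstd,
}

// The configured codec only applies once a segment is sealed.
fn codec_for(sealed: bool, configured: Codec) -> Codec {
    if sealed { configured } else { Codec::None }
}

fn main() {
    // The active head is never compressed.
    assert_eq!(codec_for(false, Codec::Zstd), Codec::None);
    // Sealed rolled segments default to zstd...
    assert_eq!(codec_for(true, Codec::Zstd), Codec::Zstd);
    // ...unless the operator opts out with compression = "none".
    assert_eq!(codec_for(true, Codec::None), Codec::None);
}
```

Keeping the head uncompressed is what lets appends stay append-only: compression happens exactly once, at seal time, on bytes that will never change again.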
## Server And Operator Surface
Transit exposes a kcat-style operator path on top of the hosted server boundary:
```shell
transit streams create --stream-id demo.root --actor docs --reason bootstrap
printf 'hello\nworld\n' | transit produce --stream-id demo.root
transit consume --stream-id demo.root --from-offset 0 --with-offsets
transit streams list
transit streams delete --stream-id demo.root --force
```
The lower-level `transit server ...` namespace still exists for protocol-shaped operations like `branch`, `merge`, `lineage`, `tail-open`, `tail-poll`, and `tail-cancel`.
It also exposes single-stream batch append directly:
```shell
transit server append \
  --stream-id demo.root \
  --payload-text first \
  --payload-text second \
  --payload-text third
```
That path sends one hosted append-batch request for one stream, returns one acknowledgement for the whole batch, and still leaves replay and tail behavior record-oriented.
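A minimal model of that contract: one batch call, one acknowledgement spanning the whole batch, while each payload still lands at its own offset so replay stays record-oriented. `BatchAck` and `append_batch` are illustrative shapes, not the real wire protocol:

```rust
// Sketch of the append-batch contract: one request, one ack for the
// whole batch, per-record offsets preserved. Illustrative types only.

#[derive(Debug)]
struct BatchAck {
    stream_id: String,
    first_offset: u64,
    last_offset: u64,
}

// The log is modeled as a Vec; offsets are just indices here.
fn append_batch(log: &mut Vec<String>, stream_id: &str, payloads: &[&str]) -> BatchAck {
    let first = log.len() as u64;
    log.extend(payloads.iter().map(|p| p.to_string()));
    BatchAck {
        stream_id: stream_id.to_string(),
        first_offset: first,
        last_offset: log.len() as u64 - 1,
    }
}

fn main() {
    let mut log = Vec::new();
    let ack = append_batch(&mut log, "demo.root", &["first", "second", "third"]);
    // One acknowledgement covers the whole batch...
    assert_eq!((ack.first_offset, ack.last_offset), (0, 2));
    assert_eq!(ack.stream_id, "demo.root");
    // ...but replay remains per-record.
    assert_eq!(log[1], "second");
}
```

The design choice the sketch captures: batching changes request shape and acknowledgement granularity, never offset assignment or replay shape.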
Hosted transport tuning is explicit too:
```shell
transit server run --connection-io-timeout-ms 5000

transit proof hosted-authority \
  --server-connection-io-timeout-ms 5000 \
  --client-io-timeout-ms 5000
```
Those knobs are runtime configuration only. They do not change record offsets, acknowledgement meaning, replay shape, or tail behavior.
The hosted daemon now also serves accepted connections concurrently, so an idle or slow socket no longer forces unrelated producer and consumer requests to wait in strict listener-loop order.
For Rust consumers, the upstream `transit-client` crate includes a generic replay-driven projection-consumer helper. That keeps derived reads on the hosted client boundary without inventing a projection-only server truth path or forcing downstream repos to publish a private wrapper.
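One way to picture such a helper is a pure reducer folded over an ordered replay. The `Projection` trait below is a sketch of the idea, not the actual `transit-client` signature:

```rust
// Sketch of a replay-driven projection consumer: a pure reducer folded
// over replayed records to build a derived client-side view.
// The trait shape is illustrative, not the real transit-client API.

trait Projection {
    type State;
    fn initial(&self) -> Self::State;
    fn apply(&self, state: Self::State, record: &str) -> Self::State;
}

// Example projection: count words across all replayed records.
struct CountWords;

impl Projection for CountWords {
    type State = usize;
    fn initial(&self) -> usize { 0 }
    fn apply(&self, state: usize, record: &str) -> usize {
        state + record.split_whitespace().count()
    }
}

// Drive any projection from an ordered replay; the server never needs
// a projection-only truth path.
fn project<P: Projection>(p: &P, replay: &[&str]) -> P::State {
    let mut state = p.initial();
    for record in replay {
        state = p.apply(state, record);
    }
    state
}

fn main() {
    assert_eq!(project(&CountWords, &["hello world", "again"]), 3);
    assert_eq!(project(&CountWords, &[]), 0);
}
```

Because the reducer is deterministic over ordered history, any two consumers replaying the same stream converge on the same derived view.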
The same hosted boundary now includes `TransitClient::append_batch(...)` for single-stream batching. It keeps the normal `RemoteAcknowledged<_>` envelope instead of inventing a second client-side batching dialect.

The Rust client also exposes `TransitClient::with_io_timeout(...)` when a hosted workload needs more than the default 1s socket timeout.
Hosted clients can now also stay on that boundary for materialization:
- `TransitClient::create_cursor(...)`, `ack_cursor(...)`, and `get_cursor(...)` persist external consumer progress without app-owned offset files
- `TransitClient::materialize_checkpoint(...)` stores opaque reducer state against a Transit-native lineage anchor
- `TransitClient::materialize_resume(...)` replays only records after the checkpoint anchor and fails if the anchor no longer verifies
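A compact model of that checkpoint contract, using a toy digest in place of real lineage verification; every name in this sketch is an assumption for illustration:

```rust
// Sketch of checkpoint-anchored resume: opaque reducer state is stored
// against an anchor (offset + digest), resume replays only post-anchor
// records, and a non-verifying anchor is a hard error.

// Toy stand-in for real lineage verification: sum the bytes.
fn toy_digest(records: &[&str]) -> u64 {
    records.iter().flat_map(|r| r.bytes()).map(u64::from).sum()
}

struct Checkpoint {
    anchor_offset: usize,
    anchor_digest: u64,
    state: u64, // opaque reducer state (here: total bytes seen)
}

fn materialize_resume(log: &[&str], cp: &Checkpoint) -> Result<u64, &'static str> {
    // Fail if the anchor no longer verifies against current history.
    if toy_digest(&log[..cp.anchor_offset]) != cp.anchor_digest {
        return Err("checkpoint anchor no longer verifies");
    }
    // Replay only records after the anchor.
    let mut state = cp.state;
    for r in &log[cp.anchor_offset..] {
        state += r.len() as u64;
    }
    Ok(state)
}

fn main() {
    let log = ["aa", "bbb", "c"];
    let cp = Checkpoint {
        anchor_offset: 2,
        anchor_digest: toy_digest(&log[..2]),
        state: 5, // bytes of "aa" + "bbb"
    };
    // Resume folds only the post-anchor record "c".
    assert_eq!(materialize_resume(&log, &cp), Ok(6));

    // If pre-anchor history differs, the anchor fails verification.
    let rewritten = ["xx", "bbb", "c"];
    assert!(materialize_resume(&rewritten, &cp).is_err());
}
```

Anchoring to lineage rather than a raw app offset is what makes the failure mode explicit: a checkpoint against history that no longer verifies is rejected instead of silently producing a wrong derived view.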
The shortest public proof for that slice is:
```shell
transit proof hosted-materialization \
  --root target/transit-docs/hosted-materialization \
  --server-connection-io-timeout-ms 5000 \
  --client-io-timeout-ms 5000
```
Server mode is single-node at the protocol surface. The daemon preserves shared-engine semantics instead of introducing a separate storage model, and the common remote commands default to `[server].listen_addr` when you configure it through `transit.toml`.
The current hosted implementation keeps batch append bounded with explicit record-count and total-byte limits. Empty or oversized batches fail through the normal hosted `invalid_request` path rather than silently splitting or repacking payloads.
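A sketch of that validation path, with made-up limit values; the real limits and error wire format are whatever the hosted server ships:

```rust
// Sketch of hosted batch bounds: explicit record-count and total-byte
// limits, rejecting empty or oversized batches outright instead of
// splitting or repacking them. Limit values are illustrative.

const MAX_RECORDS: usize = 1024;
const MAX_TOTAL_BYTES: usize = 1 << 20; // 1 MiB, assumed for the sketch

fn validate_batch(payloads: &[&str]) -> Result<(), &'static str> {
    if payloads.is_empty() {
        return Err("invalid_request: empty batch");
    }
    if payloads.len() > MAX_RECORDS {
        return Err("invalid_request: too many records");
    }
    let total_bytes: usize = payloads.iter().map(|p| p.len()).sum();
    if total_bytes > MAX_TOTAL_BYTES {
        return Err("invalid_request: batch too large");
    }
    Ok(())
}

fn main() {
    assert!(validate_batch(&["first", "second", "third"]).is_ok());
    assert!(validate_batch(&[]).is_err());
    let oversized = "x".repeat(MAX_TOTAL_BYTES + 1);
    assert!(validate_batch(&[oversized.as_str()]).is_err());
}
```

Rejecting rather than repacking keeps the contract simple: the batch the client sent is exactly the batch the server appended, or nothing was appended at all.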
## Storage, Recovery, And Verification
Transit currently ships:
- tiered publication and cold restore
- a published authority model with immutable manifest snapshots and one mutable latest-frontier pointer
- storage verification for the effective local guarantee and hosted bootstrap posture with `transit storage probe`
- warm-cache recovery after explicit cache loss from the authoritative remote tier
- hosted-authority proof coverage for thin-client producers and readers
Useful proof entry points:
```shell
transit proof object-store-authority --root target/transit-docs/object-store-authority
transit proof tiered-engine --root target/transit-docs/tiered
transit proof warm-cache-recovery --root target/transit-docs/warm-cache
transit --config target/transit-docs/storage-probe.toml storage probe
```
`transit proof object-store-authority` is the shortest proof for the storage authority model itself. It shows:
- `active.segment` and `state.json` as the mutable working plane
- `published/streams/<id>/segments`, `manifests`, and `frontiers/latest.json` as the local filesystem-backed published namespace
- the same `streams/<id>/segments|manifests|frontiers/latest.json` shape under a remote object-store prefix
- restore from the latest frontier pointer without appending to one mutable remote manifest object
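The write path behind that layout can be modeled with a plain map standing in for the object store: every publish writes a fresh immutable manifest key, and only the `frontiers/latest.json` pointer is ever overwritten. Key shapes follow the listing above; the functions themselves are illustrative:

```rust
// Sketch of the published-authority write path: immutable manifest
// objects plus one mutable latest-frontier pointer. A HashMap stands
// in for the object store; function names are illustrative.

use std::collections::HashMap;

fn publish(store: &mut HashMap<String, String>, stream: &str, seq: u64, manifest: &str) {
    // Immutable: every publish gets a fresh manifest key.
    let key = format!("streams/{}/manifests/{:08}.json", stream, seq);
    store.insert(key.clone(), manifest.to_string());
    // Mutable: only the frontier pointer is overwritten in place.
    store.insert(format!("streams/{}/frontiers/latest.json", stream), key);
}

fn restore<'a>(store: &'a HashMap<String, String>, stream: &str) -> Option<&'a String> {
    // Restore discovers the newest manifest through the pointer alone.
    let pointer = store.get(&format!("streams/{}/frontiers/latest.json", stream))?;
    store.get(pointer)
}

fn main() {
    let mut store = HashMap::new();
    publish(&mut store, "demo", 1, "{\"frontier\":10}");
    publish(&mut store, "demo", 2, "{\"frontier\":20}");

    // The pointer always resolves to the latest published manifest.
    assert_eq!(restore(&store, "demo").unwrap().as_str(), "{\"frontier\":20}");
    // Earlier manifest objects remain untouched.
    assert!(store.contains_key("streams/demo/manifests/00000001.json"));
}
```

This split is why no remote manifest object is ever appended to in place: published objects are write-once, and the only mutation the store ever sees is swapping one small pointer.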
`transit storage probe` is intentionally narrow. It validates the current runtime bootstrap posture and reports the explicit guarantee it can actually prove today. `transit proof warm-cache-recovery` is the complementary proof that discards the local cache and shows the hosted server can rebuild from the published remote frontier.
## Integrity And Derived State
Transit also ships:
- manifest-root and checkpoint verification
- tamper-detection coverage through the integrity proof
- incremental materialization with checkpoint resume
- reference-projection proof coverage showing replay and resume converge on the same derived view
Useful proof entry points:
```shell
transit proof integrity --root target/transit-docs/integrity
transit proof materialization --root target/transit-docs/materialization
transit proof reference-projection --root target/transit-docs/reference-projection
```
For concrete examples — real segment digests, manifest roots, a merge manifest, checkpoint verification, and the exact output tamper detection produces — see Cryptographic Proofs.
## Independent Reader Progress
Transit ships a first-class cursor primitive on the embedded engine:
- durable per-consumer cursor records with id, bound stream, position, and lineage metadata
- create, lookup, list, advance, ack, and delete operations on `LocalEngine`
- monotonic advance clamped to the committed stream frontier, with explicit errors for backward moves and frontier overruns
- local-durability acknowledgement matching the language Transit already uses for appends
- restart and warm-cache recovery of cursor state alongside stream manifests
Branches remain for lineage forks. Cursors carry per-reader progress so independent consumers can advance on the same stream without maintaining an external offset store or misusing branches.
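The advance rule is small enough to state directly in code. This is a sketch of the clamping contract, with illustrative error names rather than the real `LocalEngine` error type:

```rust
// Sketch of monotonic cursor advance: never backward, never past the
// committed frontier, both failures explicit. Names are illustrative.

#[derive(Debug, PartialEq)]
enum CursorError {
    BackwardMove,
    BeyondFrontier,
}

struct Cursor {
    position: u64,
}

impl Cursor {
    fn advance(&mut self, to: u64, committed_frontier: u64) -> Result<u64, CursorError> {
        if to < self.position {
            return Err(CursorError::BackwardMove);
        }
        if to > committed_frontier {
            return Err(CursorError::BeyondFrontier);
        }
        self.position = to;
        Ok(self.position)
    }
}

fn main() {
    let mut c = Cursor { position: 3 };
    assert_eq!(c.advance(5, 10), Ok(5));
    // Backward moves and frontier overruns fail loudly...
    assert_eq!(c.advance(4, 10), Err(CursorError::BackwardMove));
    assert_eq!(c.advance(11, 10), Err(CursorError::BeyondFrontier));
    // ...and a failed advance leaves the cursor where it was.
    assert_eq!(c.position, 5);
}
```

Failing loudly on both edges is what makes the cursor trustworthy as progress metadata: a stored position can only ever describe records that were actually committed.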
The same semantics are now available across the hosted boundary through `TransitClient` and the hosted materialization proof:
- hosted cursor create, lookup, ack, and delete operations
- hosted checkpoints bound to source-stream lineage instead of raw app offsets
- hosted incremental resume that replays only post-anchor records
That hosted surface is still client-first. It does not require downstream apps to open `LocalEngine` or mirror remote streams just to checkpoint reducer progress.
## Failover And Distributed Durability
The shared engine supports:
- controlled failover with explicit lease handoff and former-primary fencing
- automatic failover through `ElectionMonitor`
- quorum durability with `ClusterMembership`-based majority acknowledgement
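The quorum rule itself is simple majority counting. A sketch, assuming `ClusterMembership` ultimately reduces to a member count and an acknowledgement count:

```rust
// Sketch of majority acknowledgement: an append counts as durable
// once a strict majority of cluster members have acknowledged it.
// Function names are illustrative, not the real ClusterMembership API.

fn quorum_size(members: usize) -> usize {
    members / 2 + 1
}

fn is_durable(members: usize, acks: usize) -> bool {
    acks >= quorum_size(members)
}

fn main() {
    // 3-node cluster: 2 acks form a quorum; 5-node cluster: 3 acks.
    assert_eq!(quorum_size(3), 2);
    assert_eq!(quorum_size(5), 3);
    assert!(is_durable(3, 2));
    assert!(!is_durable(5, 2));
}
```

Majority quorums are also what makes former-primary fencing safe: any two majorities overlap, so a fenced primary cannot assemble a quorum that contradicts the new one.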
Useful proof entry points:
```shell
transit proof controlled-failover --root target/transit-docs/controlled-failover
transit proof chaos-failover --root target/transit-docs/chaos-failover
```
## Boundaries
Transit does not claim:
- multi-primary or concurrent multi-writer behavior
- hidden remote-tier safety behind a `local` acknowledgement
- automatic remote object-store reclamation when deleting a published stream
- stable pre-`1.0` on-disk or wire compatibility
Those non-claims are part of the product contract, not temporary wording.