Server First Run
Choose this track when you want to evaluate Transit as a daemon that exposes the shared engine over a network boundary.
What Server Mode Is
Server mode is not a separate database product. It is the same engine wrapped in a daemon and remote protocol surface.
The point of this guide is therefore not just "can it listen on a socket?" but "does it preserve the same append, lineage, and durability semantics across the boundary?"
Fastest Proof Paths
Run the end-to-end server proof:
transit proof networked-server \
--root target/transit-docs/networked \
--server-connection-io-timeout-ms 5000 \
--client-io-timeout-ms 5000
Run the current controlled failover proof:
transit proof controlled-failover --root target/transit-docs/controlled-failover
Exercise the hosted thin-client producer and reader path through the CLI:
transit proof hosted-authority \
--root target/transit-docs/hosted-authority \
--server-connection-io-timeout-ms 5000 \
--client-io-timeout-ms 5000
That proof covers the thin hosted operation surface end to end against a locally started server. It is also the public proof path for hosted timeout tuning when you want to exercise a producer/consumer workload with more than the default 1s transport budget. The timeout flags are operational tuning only: they change transport tolerance, not append, replay, or tail semantics.
Exercise hosted client-only materialization against the same server boundary:
transit proof hosted-materialization \
--root target/transit-docs/hosted-materialization \
--server-connection-io-timeout-ms 5000 \
--client-io-timeout-ms 5000
That proof shows the client-first materialization path:
- hosted cursor progress without app-owned offset files
- hosted checkpoint persistence bound to source-stream lineage
- hosted resume that replays only records after the checkpoint anchor
- tampered checkpoint rejection through the normal hosted error surface
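The checkpoint-and-resume contract above can be modeled in a few lines. This is an illustrative Python sketch of the semantics only, not the transit-client API; the lineage stand-in and all names here are hypothetical.

```python
import hashlib

def lineage_id(stream_records):
    # Hypothetical stand-in for source-stream lineage identity.
    return hashlib.sha256(repr(stream_records[:1]).encode()).hexdigest()

class Checkpoint:
    def __init__(self, stream, offset):
        self.lineage = lineage_id(stream)   # checkpoint is bound to lineage
        self.offset = offset                # last materialized offset

def resume(stream, checkpoint):
    # Tampered or mismatched checkpoints are rejected through the normal
    # error surface, never silently accepted.
    if checkpoint.lineage != lineage_id(stream):
        raise ValueError("invalid_request: checkpoint lineage mismatch")
    # Resume replays only records after the checkpoint anchor.
    return stream[checkpoint.offset + 1:]

stream = ["r0", "r1", "r2", "r3"]
cp = Checkpoint(stream, offset=1)
print(resume(stream, cp))  # ['r2', 'r3']
```

The point of the sketch is that cursor progress lives with the checkpoint, not in an app-owned offset file, and the checkpoint is only valid against the lineage it was taken from.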
Run the retention proof when you want one supported path that demonstrates retention-aware create/list/status behavior and bounded replay:
transit proof retention --root target/transit-docs/retention
That proof creates a stream with explicit retention, trims old rolled segments,
and then shows the retained frontier through both transit server lineage and
transit status.
Run the compression proof when you want one supported path that demonstrates
default zstd storage for sealed rolled segments, tiered restore, and hosted
replay over compressed history:
transit proof compression --root target/transit-docs/compression
That proof is a storage proof, not a transport proof. It shows that sealed segments may be stored compressed while replay and hosted reads still return normal Transit records.
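That storage-versus-transport split can be sketched directly; zlib stands in for zstd here so the example stays dependency-free, and the segment model is hypothetical.

```python
import zlib

def seal_segment(records):
    # Sealed rolled segments may be stored compressed on disk.
    raw = "\n".join(records).encode()
    return zlib.compress(raw)

def replay(sealed_segment):
    # Readers still observe normal records; decompression is a storage
    # detail, not part of the record contract.
    return zlib.decompress(sealed_segment).decode().split("\n")

records = ["hello from transit", "second record"]
assert replay(seal_segment(records)) == records
```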
Run the object-store authority proof when you want one supported path that
demonstrates the mutable working plane, immutable manifest snapshots, and
frontiers/latest.json discovery pointer across both filesystem-backed and
remote object-store namespaces:
transit proof object-store-authority --root target/transit-docs/object-store-authority
That proof shows the split explicitly:
- active.segment and state.json remain mutable working files
- published segment objects and manifest snapshots are immutable once written
- frontiers/latest.json is the latest-discovery pointer for published state
- the same streams/<id>/segments|manifests|frontiers/latest.json shape exists locally and in the remote object-store namespace
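Under stated assumptions about the exact object names (the segment and manifest file names below are hypothetical), the published-namespace shape looks roughly like this on a filesystem-backed root:

```python
import json, tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp()) / "streams" / "demo-main"
root.mkdir(parents=True)

# Mutable working plane: rewritten in place while the stream is live.
(root / "active.segment").write_bytes(b"...")
(root / "state.json").write_text(json.dumps({"open": True}))

# Immutable published plane: written once, never mutated afterwards.
for sub in ("segments", "manifests", "frontiers"):
    (root / sub).mkdir()
(root / "segments" / "00000001.segment").write_bytes(b"sealed")
(root / "manifests" / "00000001.json").write_text(json.dumps({"segments": 1}))

# Latest-discovery pointer: names the newest published manifest snapshot.
(root / "frontiers" / "latest.json").write_text(
    json.dumps({"manifest": "manifests/00000001.json"})
)
```

The same shape is what you should expect to see in a remote object-store namespace after running the proof.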
For the broader shipped surface, read Capabilities.
If you set [server].listen_addr in transit.toml, the streams, produce,
and consume commands will default to that configured server endpoint.
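For example, a minimal transit.toml with that key might look like this (the address value is illustrative):

```toml
[server]
listen_addr = "127.0.0.1:7411"
```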
If you are evaluating Transit as a downstream cutover target, the canonical
hosted runtime surface is transit-server, and the canonical Rust client
surface is transit-client. The cutover posture is to delete private hosted
adapter layers after the upstream proof path is adopted, not keep a permanent
compatibility shim.
Minimal Manual Walkthrough
Start a daemon:
transit server run \
--root target/transit-docs/server \
--connection-io-timeout-ms 5000
In another terminal, create a stream:
transit streams create \
--stream-id demo/main \
--actor docs \
--reason "manual walkthrough"
Produce records:
printf 'hello from transit\nsecond record\n' | transit produce \
--stream-id demo/main
Send one hosted batch append request to one stream:
transit server append \
--stream-id demo/main \
--payload-text "third record" \
--payload-text "fourth record"
That lower-level path is useful when you want one explicit acknowledgement for multiple records on one stream without packing those records into one opaque payload.
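The contract of that path can be modeled directly: one acknowledgement covering the whole batch, individual records on replay, and explicit rejection of empty or oversized batches. This is an illustrative sketch, not the server protocol; the batch cap is hypothetical.

```python
class Stream:
    MAX_BATCH = 1024  # hypothetical batch-size cap

    def __init__(self):
        self.records = []

    def append_batch(self, payloads):
        # Empty and oversized batches fail with the normal error surface
        # instead of being silently accepted or split.
        if not payloads or len(payloads) > self.MAX_BATCH:
            raise ValueError("invalid_request")
        start = len(self.records)
        self.records.extend(payloads)
        # One explicit acknowledgement covering every record in the batch.
        return {"first_offset": start, "last_offset": len(self.records) - 1}

s = Stream()
ack = s.append_batch(["third record", "fourth record"])
print(ack)  # {'first_offset': 0, 'last_offset': 1}
# Replay still yields individual records, not one opaque payload.
print(s.records)
```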
Consume the replay:
transit consume \
--stream-id demo/main \
--from-offset 0 \
--with-offsets
List hosted streams:
transit streams list
Inspect the local log state behind that daemon:
transit status --root target/transit-docs/server
Inspect lineage through the lower-level server namespace:
transit server lineage --stream-id demo/main
If you are evaluating the Rust client boundary instead of the CLI, the matching
hosted calls are TransitClient::append_batch(...) and
TransitClient::with_io_timeout(...).
For hosted client-only materializers, the matching calls are
TransitClient::create_cursor(...), materialize_checkpoint(...), and
materialize_resume(...).
If you want per-stream retention on a hosted stream, configure it explicitly at create time:
transit streams create \
--stream-id demo/main \
--actor docs \
--reason "retention walkthrough" \
--retention-max-age-days 30 \
--retention-max-bytes 10737418240
After old rolled history is trimmed, transit server lineage and
transit status surface retained_start_offset so you can see where replay now
begins. That is bounded replay through history aging, not compaction.
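Bounded replay through history aging can be sketched as arithmetic over rolled segments; note that the --retention-max-bytes value above is 10 GiB (10 * 1024**3 bytes). The segment model below is hypothetical.

```python
def retained_start_offset(segments, max_age_days, now_day):
    # Each rolled segment carries (first_offset, last_offset, sealed_day).
    # Trimming drops whole sealed segments older than the age budget;
    # the retained frontier is the first offset still on disk.
    retained = [s for s in segments if now_day - s["sealed_day"] <= max_age_days]
    return min(s["first_offset"] for s in retained)

segments = [
    {"first_offset": 0,   "last_offset": 99,  "sealed_day": 0},
    {"first_offset": 100, "last_offset": 199, "sealed_day": 20},
    {"first_offset": 200, "last_offset": 299, "sealed_day": 45},
]
# With a 30-day budget on day 50, the oldest segment ages out:
print(retained_start_offset(segments, max_age_days=30, now_day=50))  # 100
```

A consumer that still asks for offset 0 is asking for trimmed history; it should start from the retained frontier instead.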
If you are stress-testing a producer and consumer against the same hosted server, the daemon now serves accepted connections concurrently instead of serializing every socket in the listener loop. That makes timeout tuning an operational control, not a semantic workaround. The same rule applies to compression: sealed segments may be stored compressed, but hosted reads still observe normal Transit records.
What To Look For
You should see that the server surface:
- keeps acknowledgements explicit
- keeps branch and lineage operations available remotely
- exposes a higher-level streams/produce/consume path for common operator work
- exposes a lower-level single-stream batch append path that still replays as individual records
- exposes hosted cursor and materialization primitives for external-daemon consumers that need checkpointed replay progress
- lets operators raise hosted connection I/O timeouts explicitly for proof or runtime workloads
- keeps overlapping producer and consumer traffic on the same hosted protocol surface
- can store rolled history as compressed immutable segments without changing hosted replay behavior
- keeps the mutable local working plane separate from immutable published object-store snapshots
- stays single-node unless a feature explicitly says otherwise
- does not invent a separate storage model for the daemon
You should also expect explicit failures for empty or oversized hosted batches.
The current server contract rejects them with the normal hosted
invalid_request error surface.
If retention is configured, you should also expect replay to become bounded over
time. Once the retained frontier advances, operators need to reason from
retained_start_offset instead of assuming offset 0 is still available.
Separately, the shared engine ships quorum durability and automatic leader election. The minimal walkthrough here stays focused on the single-node remote operator surface; use the failover proofs when you want the distributed slice itself.
Boundaries To Remember
Current server mode is a shared-engine single-node server.
That means these docs should not imply:
- multi-primary behavior
- hidden consensus across every deployment
- a second replicated storage layer behind the API
- automatic remote object reclamation after deleting a published stream
Where To Go Next
- Read Embedded And Server Modes
- Read Durability Modes
- Read Failover
- Read Capabilities
- Read Foundations