Relay console

A unified chat workspace for routed premium AI access: the buyer-facing conversation surface where routing quality, replay continuity, and model-selection intelligence become tangible.

Live conversation

What should the protocol route next?

Claude Sonnet capsule · Codex execution rail · Hybrid auto router
Explain how Buddy balances route quality, provider trust, and margin while keeping premium AI access cheaper than buying fragmented subscriptions directly.

The relay ranks provider capsules by a weighted score: recent success rate, latency percentile, conversation stickiness, abuse risk, and target margin. Premium routes prefer high-reputation providers first, but the system continuously shifts overflow to lower-cost healthy nodes to avoid idle supply.
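The weighted ranking described above can be sketched roughly as follows. All names, weights, and the latency normalization are illustrative assumptions, not the platform's actual schema or tuning:

```python
from dataclasses import dataclass

# Hypothetical capsule health snapshot; field names are illustrative.
@dataclass
class CapsuleStats:
    name: str
    success_rate: float   # recent success rate, 0..1
    latency_p95_s: float  # 95th-percentile latency, seconds
    stickiness: float     # conversation stickiness, 0..1
    abuse_risk: float     # abuse risk, 0..1 (lower is better)
    margin: float         # margin vs. target, 0..1

# Assumed weights; a real relay would tune these operationally.
WEIGHTS = {
    "success_rate": 0.35,
    "latency": 0.20,
    "stickiness": 0.15,
    "abuse_risk": 0.15,
    "margin": 0.15,
}

def score(c: CapsuleStats, max_latency_s: float = 3.0) -> float:
    """Weighted score, higher is better. Latency and abuse risk are
    inverted so every term rewards healthier capsules."""
    latency_term = max(0.0, 1.0 - c.latency_p95_s / max_latency_s)
    return (
        WEIGHTS["success_rate"] * c.success_rate
        + WEIGHTS["latency"] * latency_term
        + WEIGHTS["stickiness"] * c.stickiness
        + WEIGHTS["abuse_risk"] * (1.0 - c.abuse_risk)
        + WEIGHTS["margin"] * c.margin
    )

# Example stats (success/stickiness/risk/margin values are made up;
# latencies mirror the capsule cards shown on this page).
capsules = [
    CapsuleStats("Claude Sonnet capsule", 0.99, 1.1, 0.9, 0.02, 0.60),
    CapsuleStats("Codex execution rail", 0.97, 1.8, 0.7, 0.03, 0.70),
    CapsuleStats("Hybrid auto router", 0.98, 0.9, 0.8, 0.05, 0.65),
]
best = max(capsules, key=score)
```

Because every term is normalized to 0..1 and the weights sum to 1, scores stay comparable across capsules even as individual signals move.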

For multi-turn sessions, sticky routing keeps a conversation pinned unless quality drops below threshold. If a provider fails, replay metadata is passed into the next viable capsule so the session feels continuous. This is where the platform earns its spread: matching quality, replay fidelity, and operational safety.

Claude Sonnet capsule: latency 1.1s, effective price $1.60 / M
Codex execution rail: latency 1.8s, effective price $2.10 / M
Hybrid auto router: latency 0.9s, effective price $1.95 / M