Introduction
aitk is a Rust crate that provides cross-platform, framework-agnostic abstractions for working with AI models. It includes core types, traits, built-in API clients, and optional state management utilities -- everything you need to integrate AI capabilities into any Rust application.
Features
- Cross-platform: works on native desktop (macOS, Windows, Linux), mobile (Android, iOS), and WebAssembly (wasm32-unknown-unknown -- no Emscripten, no WASI required).
- Built-in API clients: streaming chat completions, image generation, speech-to-text, realtime audio, and more to come.
- Unified message format: MessageContent can represent text, images, audio, tool calls, and other modalities in a single structure -- much like a traditional chat app that naturally handles mixed content. Every client speaks this same format through the BotClient trait, so switching or composing clients requires no changes to your application logic.
- Composable: the RouterClient lets you aggregate multiple clients under one interface, transparently routing requests to the right one.
- MCP support: discover and invoke tools from Model Context Protocol servers.
- Flexible integration: use the clients directly in a CLI tool, a GUI app, a web server, or anything else. There is no framework lock-in.
- Optional state management: if you are building a chat application, the ChatController provides business logic, streaming, model loading, and a plugin system -- without coupling to any UI framework.
- Async & streaming: built on standard Rust async patterns with futures streams, compatible with any async runtime.
Feature Flags
aitk uses feature flags to let you include only what you need:
| Flag | Description |
|---|---|
| api-clients | Enables the built-in HTTP clients (OpenAiClient, OpenAiImageClient, OpenAiSttClient, etc.). Pulls in reqwest. |
| realtime-clients | Enables WebSocket-based clients (OpenAiRealtimeClient). Pulls in tokio and tokio-tungstenite. |
| async-rt | Includes tokio (native) and wasm-bindgen-futures (WASM), exposing a unified spawn() function. |
| mcp | Enables MCP tool integration. Implies async-rt and api-clients. |
| full | Enables everything above. |
With no features enabled, you get the core types and traits (BotClient, Message,
MessageContent, etc.) with zero heavy dependencies. This is useful if you want to implement
your own client without pulling in reqwest or tokio.
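For that core-only setup, the dependency entry simply carries no feature list. A sketch (assuming, as described above, that the crate enables no heavy features by default):

```toml
[dependencies]
# Core types and traits only -- no reqwest, no tokio.
aitk = { git = "https://github.com/moly-ai/aitk.git" }
```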
Crate Organization
aitk
├── protocol Core types and traits (BotClient, Message, Bot, Tool, ...)
├── clients Built-in BotClient implementations
├── controllers State management (ChatController)
├── mcp Model Context Protocol integration (feature-gated)
└── utils Cross-platform async primitives, helpers
A prelude module re-exports the most commonly used types:
use aitk::prelude::*;
Quickstart
This guide will get you from zero to a working AI call in a few minutes.
Installation
aitk is not yet published on crates.io. Add it as a Git dependency:
[dependencies]
aitk = { git = "https://github.com/moly-ai/aitk.git", features = ["api-clients"] }
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
futures = "0.3"
The api-clients feature enables the built-in HTTP clients.
Hello World
This example sends a message to an OpenAI-compatible API and prints the streamed response:
use aitk::prelude::*;
use futures::StreamExt;
#[tokio::main]
async fn main() {
// Create and configure a client.
let mut client = OpenAiClient::new("https://api.openai.com/v1".into());
client.set_key("your-api-key").unwrap();
// Build the request.
let bot_id = BotId::new("gpt-4.1-nano");
let messages = vec![Message {
from: EntityId::User,
content: MessageContent {
text: "Hello! What is Rust?".into(),
..Default::default()
},
..Default::default()
}];
// Send and stream the response.
let mut stream = client.send(&bot_id, &messages, &[]);
let mut last_content = MessageContent::default();
while let Some(result) = stream.next().await {
if let Some(content) = result.into_value() {
last_content = content;
}
}
println!("{}", last_content.text);
}
What is happening here?
- OpenAiClient::new(url) creates a client pointing at any OpenAI-compatible endpoint. This works with OpenAI, Azure, Ollama, LM Studio, or any service that speaks the same protocol.
- client.set_key(key) sets the API key, which is sent as a Bearer token.
- BotId::new("gpt-4.1-nano") identifies the model. This is the model ID string your endpoint expects.
- client.send(&bot_id, &messages, &[]) returns a Stream of ClientResult<MessageContent> items. Each item is a cumulative snapshot of the full response built so far. The third argument is the list of tools (empty here).
- result.into_value() extracts the MessageContent from the result.
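The cumulative-snapshot behavior is worth internalizing: each stream item replaces the previous one rather than appending to it. A small pure-Rust sketch of that semantics, with plain strings standing in for MessageContent (an analogy, not aitk's actual types):

```rust
// Each stream item is the full response so far, so keeping only the
// last item yields the complete text.
fn last_snapshot(stream_items: &[&str]) -> String {
    let mut last = String::new();
    for item in stream_items {
        last = item.to_string(); // replace, don't append: items are cumulative
    }
    last
}
```

This is why the Hello World loop above overwrites `last_content` on every iteration instead of concatenating chunks.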
Next Steps
- Learn about the built-in clients in detail.
- See how to generate images or transcribe audio.
- Compose multiple clients with the Router Client.
- Build a full chat application with the ChatController.
Chat Completions
OpenAiClient is the primary client for streaming text completions from any
OpenAI-compatible API. It handles SSE streaming, attachments (images, PDFs, text files),
reasoning extraction, and tool calls.
Feature flag: api-clients
Setup
use aitk::prelude::*;
let mut client = OpenAiClient::new("https://api.openai.com/v1".into());
client.set_key("your-api-key").unwrap();
The URL can point to any OpenAI-compatible endpoint: OpenAI, Azure, Ollama, LM Studio, OpenRouter, and others.
Custom headers
Some services require additional headers. Use set_header for these cases:
client.set_header("x-custom-header", "value").unwrap();
Sending a message
The core interface is send(), which returns a stream of ClientResult<MessageContent> items.
Each item is a cumulative snapshot of the response built so far:
use futures::StreamExt;
let bot_id = BotId::new("gpt-4.1-nano");
let messages = vec![Message {
from: EntityId::User,
content: MessageContent {
text: "Explain ownership in Rust.".into(),
..Default::default()
},
..Default::default()
}];
let mut stream = client.send(&bot_id, &messages, &[]);
let mut last_content = MessageContent::default();
while let Some(result) = stream.next().await {
match result.into_result() {
Ok(content) => last_content = content,
Err(errors) => {
for e in errors {
eprintln!("Error: {}", e);
}
}
}
}
println!("{}", last_content.text);
Multi-turn conversations
send() accepts the full message history. Include previous user and assistant messages
to maintain context:
let messages = vec![
Message {
from: EntityId::User,
content: MessageContent {
text: "What is Rust?".into(),
..Default::default()
},
..Default::default()
},
Message {
from: EntityId::Bot(BotId::new("gpt-4.1-nano")),
content: MessageContent {
text: "Rust is a systems programming language...".into(),
..Default::default()
},
..Default::default()
},
Message {
from: EntityId::User,
content: MessageContent {
text: "How does its borrow checker work?".into(),
..Default::default()
},
..Default::default()
},
];
let mut stream = client.send(&bot_id, &messages, &[]);
Attachments
OpenAiClient automatically handles attachments included in a message. Images are sent as
image_url content parts, PDFs as file uploads, and text-based files (.md, .html, .txt,
etc.) are decoded and inlined.
let attachment = Attachment::from_bytes(
"photo.png".into(),
Some("image/png".into()),
&image_bytes,
);
let messages = vec![Message {
from: EntityId::User,
content: MessageContent {
text: "Describe this image.".into(),
attachments: vec![attachment],
..Default::default()
},
..Default::default()
}];
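The dispatch described above can be pictured as a rule on the attachment's MIME type. A hypothetical simplification (the client's exact mapping may differ):

```rust
// Route an attachment to an OpenAI-style content part by MIME type.
// This is an illustrative sketch, not aitk's actual implementation.
fn content_part_kind(mime: &str) -> &'static str {
    if mime.starts_with("image/") {
        "image_url" // images become image_url content parts
    } else if mime == "application/pdf" {
        "file" // PDFs become file uploads
    } else {
        "inline_text" // text-based files are decoded and inlined
    }
}
```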
Tool calls
To let the model invoke tools, pass a list of Tool definitions as the third argument
to send(). When the model decides to call a tool, the streamed MessageContent will
contain tool_calls instead of (or in addition to) text.
Defining tools
use std::sync::Arc;
let tool = Tool {
name: "get_weather".into(),
description: Some("Get current weather for a location".into()),
input_schema: Arc::new(serde_json::from_str(r#"{
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name"
}
},
"required": ["location"]
}"#).unwrap()),
};
Handling tool calls and sending results
When the model returns tool calls, you execute the tools yourself and send the results back in a follow-up message:
let mut stream = client.send(&bot_id, &messages, &[tool]);
while let Some(result) = stream.next().await {
if let Some(content) = result.into_value() {
if !content.tool_calls.is_empty() {
// The model wants to call tools. Execute them and continue.
for tc in &content.tool_calls {
println!("Tool call: {} with args {:?}", tc.name, tc.arguments);
}
// Build a message with the tool results.
let tool_result_message = Message {
from: EntityId::Tool,
content: MessageContent {
tool_results: vec![ToolResult {
tool_call_id: content.tool_calls[0].id.clone(),
content: r#"{"temp": "22C", "condition": "sunny"}"#.into(),
is_error: false,
}],
..Default::default()
},
..Default::default()
};
// Append to history and send again so the model can
// produce a final answer using the tool output.
}
}
}
Listing available models
All clients implement bots() which fetches the list of available models from the
configured endpoint:
let result = client.bots().await;
if let Some(bots) = result.value() {
for bot in bots {
println!("{}: {}", bot.id, bot.name);
}
}
Reasoning models
OpenAiClient automatically extracts reasoning/thinking content from models that support
it. The reasoning text is placed in content.reasoning and stripped from content.text.
This works with:
- Models that use <think>/</think> tags in the response text.
- Providers that return a reasoning or reasoning_content field in the streaming delta.
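For the tag-based case, the extraction amounts to splitting the tagged span out of the response text. A minimal sketch of that idea (not aitk's actual implementation; it assumes a single think block at the start of the response):

```rust
// Split "<think>...</think>rest" into (reasoning, text).
fn split_reasoning(raw: &str) -> (String, String) {
    if let Some(rest) = raw.strip_prefix("<think>") {
        if let Some((reasoning, text)) = rest.split_once("</think>") {
            return (reasoning.trim().to_string(), text.trim_start().to_string());
        }
    }
    // No tags: everything is ordinary text, reasoning stays empty.
    (String::new(), raw.to_string())
}
```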
Image Generation
OpenAiImageClient generates images using the OpenAI-compatible /images/generations
endpoint. It follows the same BotClient interface as all other clients: you send a
message and receive a MessageContent back -- this time containing an image Attachment.
Feature flag: api-clients
Setup
use aitk::prelude::*;
let mut client = OpenAiImageClient::new("https://api.openai.com/v1".into());
client.set_key("your-api-key").unwrap();
Generating an image
The prompt is taken from the text field of the last message. The response yields a
single MessageContent containing the generated image as an Attachment:
use futures::StreamExt;
let bot_id = BotId::new("dall-e-3");
let messages = vec![Message {
from: EntityId::User,
content: MessageContent {
text: "A dragonfly perched on a blade of grass, watercolor style".into(),
..Default::default()
},
..Default::default()
}];
let mut stream = client.send(&bot_id, &messages, &[]);
while let Some(result) = stream.next().await {
if let Some(content) = result.into_value() {
for attachment in &content.attachments {
println!("Got image: {}", attachment.name);
// Save the attachment to a file.
attachment.save().await;
}
}
}
The client handles both base64 and URL responses transparently. In both cases the image
bytes are available through the Attachment API.
Attachment includes convenience methods like save() that interact directly with the
operating system (e.g. opening a save dialog on desktop, triggering a download on web).
These are provided for pragmatism but may change in a future version.
Speech-to-Text
OpenAiSttClient transcribes audio files using the OpenAI-compatible
/audio/transcriptions endpoint. Like all clients, it implements BotClient and uses
send() as its entry point.
Feature flag: api-clients
Setup
use aitk::prelude::*;
let mut client = OpenAiSttClient::new("https://api.openai.com/v1".into());
client.set_key("your-api-key").unwrap();
Transcribing audio
The audio must be provided as an Attachment on the last message. The response text
contains the transcription:
use futures::StreamExt;
let audio_bytes: Vec<u8> = std::fs::read("recording.mp3").unwrap();
let attachment = Attachment::from_bytes(
"recording.mp3".into(),
Some("audio/mpeg".into()),
&audio_bytes,
);
let bot_id = BotId::new("whisper-1");
let messages = vec![Message {
from: EntityId::User,
content: MessageContent {
attachments: vec![attachment],
..Default::default()
},
..Default::default()
}];
let mut stream = client.send(&bot_id, &messages, &[]);
while let Some(result) = stream.next().await {
if let Some(content) = result.into_value() {
println!("Transcription: {}", content.text);
}
}
The client sends the audio as a multipart form upload with the model ID and the file.
Realtime
OpenAiRealtimeClient provides WebSocket-based communication for real-time audio
interactions using the OpenAI Realtime API. It supports bidirectional audio streaming,
voice activity detection, and function calling during a live session.
Feature flag: realtime-clients
The realtime client differs from the other clients in that send() returns a
MessageContent containing an Upgrade::Realtime with channels for sending commands
and receiving events, rather than streaming text content directly.
Router Client
RouterClient aggregates multiple BotClient implementations into a single client.
It routes requests to the correct sub-client based on a prefix in the BotId, letting
you work with models from different clients through one unified interface.
No additional feature flag is required -- RouterClient is always available.
Creating a router
use aitk::prelude::*;
let router = RouterClient::new();
Adding sub-clients
Each sub-client is registered under a string key. This key becomes the prefix used to route requests:
let mut openai = OpenAiClient::new("https://api.openai.com/v1".into());
openai.set_key("your-openai-key").unwrap();
let mut ollama = OpenAiClient::new("http://localhost:11434/v1".into());
router.insert_client("openai", Box::new(openai));
router.insert_client("ollama", Box::new(ollama));
Bot ID prefixing
When you call bots() on a RouterClient, it fetches bots from all sub-clients and
prefixes each BotId with the sub-client's key and a / separator.
For example, if the "openai" sub-client reports a bot with ID gpt-4.1, the router
will expose it as openai/gpt-4.1.
If you forward bot IDs returned from bots() directly to send(), the routing is
automatic. If you construct BotIds manually, use the helper methods:
// Prefix manually.
let prefixed = RouterClient::prefix("openai", &BotId::new("gpt-4.1"));
assert_eq!(prefixed.as_str(), "openai/gpt-4.1");
// Unprefix to get the key and original ID.
let (key, original) = RouterClient::unprefix(&prefixed).unwrap();
assert_eq!(key, "openai");
assert_eq!(original.as_str(), "gpt-4.1");
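Conceptually, the scheme joins and splits on the first /. A plain-string sketch of the same logic (the real helpers operate on BotId values; splitting on the first separator is an assumption inferred from the examples above):

```rust
// Prefix: "openai" + "gpt-4.1" -> "openai/gpt-4.1".
fn prefix(key: &str, id: &str) -> String {
    format!("{key}/{id}")
}

// Unprefix: split on the FIRST '/', so model IDs that themselves
// contain '/' survive a prefix/unprefix round trip.
fn unprefix(prefixed: &str) -> Option<(&str, &str)> {
    prefixed.split_once('/')
}
```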
Caching
RouterClient caches the result of bots() for each sub-client. The cache is
populated on the first call and reused on subsequent calls, unless the cached result
contains errors (in which case it retries automatically).
To force a refresh:
// Invalidate all sub-clients.
router.invalidate_all_bots_cache();
// Or a specific one.
router.invalidate_bots_cache("openai");
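The retry-on-error rule can be sketched with a plain map -- a simplified model of the behavior, not the router's internals:

```rust
use std::collections::HashMap;

// Cached successes are reused; cache misses and cached errors trigger
// a fresh fetch whose result replaces the cache entry.
fn cached_bots(
    cache: &mut HashMap<String, Result<Vec<String>, String>>,
    key: &str,
    fetch: impl FnOnce() -> Result<Vec<String>, String>,
) -> Result<Vec<String>, String> {
    if let Some(Ok(bots)) = cache.get(key) {
        return Ok(bots.clone()); // cached success: no network call
    }
    let result = fetch(); // miss or cached error: retry
    cache.insert(key.to_string(), result.clone());
    result
}
```

Invalidating the cache is then just removing the entry for a key (or all keys), which forces the next call down the fetch path.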
Accessing sub-clients
You can read or mutate a sub-client after registration:
// Immutable access.
router.read_client("openai", |client| {
// Use `client` here.
});
// Mutable access.
router.write_client("openai", |client| {
// Modify `client` here.
});
// Remove a sub-client entirely.
router.remove_client("ollama");
Full example
use aitk::prelude::*;
use futures::StreamExt;
let router = RouterClient::new();
let mut openai = OpenAiClient::new("https://api.openai.com/v1".into());
openai.set_key("sk-...").unwrap();
router.insert_client("openai", Box::new(openai));
let mut ollama = OpenAiClient::new("http://localhost:11434/v1".into());
router.insert_client("ollama", Box::new(ollama));
// List all models across both clients.
let mut router_clone = router.clone();
let result = router_clone.bots().await;
if let Some(bots) = result.value() {
for bot in bots {
println!("{}: {}", bot.id, bot.name);
// e.g. "openai/gpt-4.1: GPT-4.1"
// e.g. "ollama/llama3: Llama 3"
}
}
// Send to a specific client's model.
let bot_id = BotId::new("openai/gpt-4.1-nano");
let messages = vec![Message {
from: EntityId::User,
content: MessageContent {
text: "Hello!".into(),
..Default::default()
},
..Default::default()
}];
let mut stream = router_clone.send(&bot_id, &messages, &[]);
let mut last_content = MessageContent::default();
while let Some(result) = stream.next().await {
if let Some(content) = result.into_value() {
last_content = content;
}
}
println!("{}", last_content.text);
Simple Usage
ChatController is entirely optional. You can build chat applications using the clients
directly -- the controller is simply a convenience for the common case. It provides
reusable business logic for managing conversation state, streaming responses, and loading
models, without tying you to any UI framework.
This chapter shows how to use it with a single model you already know the ID of.
Required features: api-clients, async-rt
Overview
ChatController is held inside an Arc<Mutex<...>>. You interact with it by:
- Dispatching mutations to change state (select a bot, push a message, etc.).
- Dispatching tasks to trigger async operations (send messages, load models).
- Registering plugins to get notified of state changes and integrate with your UI or other systems.
Minimal example
use aitk::prelude::*;
use std::sync::{Arc, Mutex};
// 1. Create and configure a client.
let mut client = OpenAiClient::new("https://api.openai.com/v1".into());
client.set_key("your-api-key").unwrap();
// 2. Build the controller.
let controller: Arc<Mutex<ChatController>> = ChatController::builder()
.with_client(client)
.with_basic_spawner()
.build_arc();
// 3. Select a model.
controller
.lock()
.unwrap()
.dispatch_mutation(ChatStateMutation::SetBotId(Some(BotId::new("gpt-4.1-nano"))));
// 4. Push a user message and send.
{
let mut c = controller.lock().unwrap();
c.dispatch_mutation(VecMutation::Push(Message {
from: EntityId::User,
content: MessageContent {
text: "Hello!".into(),
..Default::default()
},
..Default::default()
}));
c.dispatch_task(ChatTask::Send);
}
After calling dispatch_task(ChatTask::Send), the controller:
- Pushes an empty bot message (with is_writing: true metadata) into state.
- Calls send() on the configured client with the full message history.
- Updates the last message in state as streaming chunks arrive.
- Clears the streaming flag when done.
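Those steps can be sketched in pure Rust, with plain strings standing in for messages (an analogy of the update-in-place behavior, not aitk's actual types):

```rust
// An empty bot message is pushed, then overwritten with each
// cumulative snapshot as stream chunks arrive.
fn apply_streaming(history: &mut Vec<String>, snapshots: &[&str]) {
    history.push(String::new()); // placeholder bot message (is_writing = true)
    for s in snapshots {
        let last = history.last_mut().expect("placeholder was just pushed");
        *last = s.to_string(); // replace with the cumulative snapshot
    }
    // Streaming done: this is where the is_writing flag would be cleared.
}
```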
The builder
ChatControllerBuilder provides a fluent API for construction:
let controller = ChatController::builder()
.with_client(client) // Set the BotClient
.with_basic_spawner() // Cross-platform spawner (async-rt feature)
.with_plugin_append(plugin) // Register plugins
.build_arc(); // Produces Arc<Mutex<ChatController>>
The controller needs a spawner to run async tasks. with_basic_spawner() uses the
built-in cross-platform spawner from the async-rt feature, but you can provide any
custom spawner that implements the Spawner trait if you prefer a different runtime.
You can also create the controller manually and configure it step by step:
let controller = ChatController::new_arc();
{
let mut c = controller.lock().unwrap();
c.set_client(Some(Box::new(client)));
c.set_basic_spawner();
c.append_plugin(plugin);
}
State
The controller's state is accessible via controller.lock().unwrap().state() and
contains:
| Field | Type | Description |
|---|---|---|
| messages | Vec<Message> | The full conversation history. |
| bots | Vec<Bot> | Models loaded from the client (see Advanced Usage). |
| load_status | Status | Status of the model loading operation. |
| bot_id | Option<BotId> | The model used when dispatching the Send task. |
Mutations
State changes happen through ChatStateMutation:
// Select a bot.
c.dispatch_mutation(ChatStateMutation::SetBotId(Some(bot_id)));
// Push a message.
c.dispatch_mutation(VecMutation::Push(message));
// Clear messages.
c.dispatch_mutation(VecMutation::<Message>::Clear);
VecMutation<Message> and VecMutation<Bot> are automatically converted into
ChatStateMutation via From, so you can pass them directly to dispatch_mutation.
When multiple mutations should be applied as a batch, use dispatch_mutations with a
Vec. Individual on_state_mutation plugin hooks fire for each mutation, but
on_state_ready fires only once at the end of the batch.
Tasks
ChatTask represents async operations:
| Task | Description |
|---|---|
| Send | Sends the current message history to the selected bot and streams the response. |
| Stop | Interrupts the current streaming operation. |
| Load | Fetches the list of available models from the client. |
| Execute(tool_calls, bot_id) | Executes MCP tool calls (requires the mcp feature). |
Plugins
A plugin implements ChatControllerPlugin and is registered via append_plugin or
prepend_plugin. Plugins receive callbacks for state changes and can intercept tasks.
Example: UI repaint trigger
The simplest plugin notifies your UI framework when state changes:
struct RepaintPlugin {
// Your framework's handle for requesting repaints.
ctx: UiContext,
}
impl ChatControllerPlugin for RepaintPlugin {
fn on_state_ready(&mut self, _state: &ChatState, _mutations: &[ChatStateMutation]) {
self.ctx.request_repaint();
}
}
Example: streaming text to stdout
A plugin that forwards streaming updates to the terminal:
use std::sync::mpsc::Sender;
struct CliPlugin {
tx: Sender<String>,
}
impl ChatControllerPlugin for CliPlugin {
fn on_state_mutation(&mut self, mutation: &ChatStateMutation, state: &ChatState) {
let ChatStateMutation::MutateMessages(mutation) = mutation else {
return;
};
for effect in mutation.effects(&state.messages) {
if let VecEffect::Update(index, _old, new) = effect {
if index == state.messages.len() - 1
&& new.from != EntityId::User
{
self.tx.send(new.content.text.clone()).unwrap();
}
}
}
}
}
Plugin hooks
| Hook | When it fires |
|---|---|
| on_state_ready(state, mutations) | After all batched mutations are applied. Use this for UI updates. |
| on_state_mutation(mutation, state) | For each individual mutation, with the pre-mutation state. Useful for fine-grained change tracking. |
| on_task(task) -> ChatControl | Before a task executes. Return ChatControl::Stop to cancel it. |
Reading state in a UI loop
A typical pattern in a GUI framework is to lock the controller, read its state, and render:
let controller = controller.lock().unwrap();
let state = controller.state();
for message in &state.messages {
match &message.from {
EntityId::User => render_user_message(&message.content.text),
EntityId::Bot(_) => render_bot_message(&message.content.text),
_ => {}
}
}
If your UI framework allows it, prefer locking the controller once per render pass to read all the state you need, rather than locking and unlocking repeatedly. This avoids unnecessary contention and ensures a consistent view of the state for the entire frame.
Advanced Usage
This chapter builds on the simple usage guide and covers using
ChatController with a RouterClient for multiple clients and dynamic model loading.
Router + ChatController
Instead of hardcoding a single client, you can use RouterClient to aggregate
multiple clients and let the user pick a model at runtime.
use aitk::prelude::*;
// Set up clients.
let mut openai = OpenAiClient::new("https://api.openai.com/v1".into());
openai.set_key("sk-...").unwrap();
let ollama = OpenAiClient::new("http://localhost:11434/v1".into());
// Compose them into a router.
let router = RouterClient::new();
router.insert_client("openai", Box::new(openai));
router.insert_client("ollama", Box::new(ollama));
// Build the controller with the router as its client.
let controller = ChatController::builder()
.with_client(router)
.with_basic_spawner()
.build_arc();
Loading models dynamically
With a router (or any client that reports models), you can ask the controller to fetch the list at runtime:
controller.lock().unwrap().dispatch_task(ChatTask::Load);
This triggers an async call to bots() on the configured client. When it completes,
the controller updates its state:
- state.bots contains the loaded models.
- state.load_status reflects the outcome (Working, Success, or Error).
- If there were errors, they appear as messages in state.messages.
You can then let the user select a model and set it:
let c = controller.lock().unwrap();
let state = c.state();
if state.load_status.is_success() {
for bot in &state.bots {
println!("{}: {}", bot.id, bot.name);
// e.g. "openai/gpt-4.1: GPT-4.1"
}
}
drop(c); // Release the first lock before locking the controller again.
// After the user picks a model:
controller
.lock()
.unwrap()
.dispatch_mutation(ChatStateMutation::SetBotId(Some(selected_bot_id)));
When using a RouterClient, the BotId values in state.bots are already prefixed
(e.g. openai/gpt-4.1). You can pass them directly to SetBotId without manual
prefixing.
Swapping clients at runtime
You can replace the controller's client after construction. This resets the bot list and load status:
let mut c = controller.lock().unwrap();
c.set_client(Some(Box::new(new_client)));
c.dispatch_task(ChatTask::Load);
Refreshing the model list
If you are using a RouterClient, you can invalidate its cache and re-trigger a load:
{
let c = controller.lock().unwrap();
if let Some(client) = c.bot_client() {
// Downcast or access the router if needed.
}
}
// Or simply set the client again, which resets state and lets you reload.
The router caches bots() results per sub-client and only retries sub-clients whose
previous result contained errors. Call invalidate_all_bots_cache() on the router
directly if you need a full refresh.
Intercepting tasks with plugins
A plugin can prevent a task from executing by returning ChatControl::Stop from
on_task. This is useful for adding confirmation dialogs or validation:
struct ConfirmPlugin;
impl ChatControllerPlugin for ConfirmPlugin {
fn on_task(&mut self, task: &ChatTask) -> ChatControl {
match task {
ChatTask::Send => {
// Add your validation logic here.
// Return ChatControl::Stop to prevent sending.
ChatControl::Continue
}
_ => ChatControl::Continue,
}
}
}
Accessing sub-clients through the router
If you need to reconfigure a specific sub-client (for example, to update an API key), you can access it through the router:
let mut c = controller.lock().unwrap();
if let Some(client) = c.bot_client_mut() {
// If you know the client is a RouterClient, you can downcast and access sub-clients.
// For simpler cases, consider holding a reference to the RouterClient directly.
}
A common pattern is to keep a clone of the RouterClient alongside the controller,
since RouterClient is Clone and internally reference-counted:
let router = RouterClient::new();
// ... insert sub-clients ...
let controller = ChatController::builder()
.with_client(router.clone())
.with_basic_spawner()
.build_arc();
// Later, modify sub-clients through `router` directly.
router.write_client("openai", |client| {
// Reconfigure the client.
});
router.invalidate_bots_cache("openai");
Implementing a Custom Client
This chapter is a placeholder. A detailed guide on implementing custom BotClient
implementations is coming soon.
All built-in clients implement the BotClient trait. You can implement it yourself to
support services that don't follow the OpenAI-compatible API, or to add custom logic
around AI interactions.
The BotClient trait
pub trait BotClient: Send {
fn send(
&mut self,
bot_id: &BotId,
messages: &[Message],
tools: &[Tool],
) -> BoxPlatformSendStream<'static, ClientResult<MessageContent>>;
fn bots(&mut self) -> BoxPlatformSendFuture<'static, ClientResult<Vec<Bot>>>;
fn clone_box(&self) -> Box<dyn BotClient>;
}
- send() returns a stream of ClientResult<MessageContent>. Each yielded item should be a cumulative snapshot of the full response content built so far.
- bots() returns a future resolving to the list of available models.
- clone_box() enables Box<dyn BotClient> to be cloned.
Any custom client you implement will work with RouterClient, ChatController, and
every other abstraction in aitk that accepts a BotClient.
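Until the full guide lands, here is a shape-only sketch of an "echo" client. It uses a simplified stand-in trait with std types instead of aitk's stream and future aliases, purely to show the structure a custom client takes:

```rust
// Simplified stand-in for BotClient, for illustration only: synchronous,
// with a Vec of cumulative snapshots instead of a boxed stream.
trait SimpleBotClient {
    fn send(&mut self, bot_id: &str, messages: &[String]) -> Vec<String>;
}

// An "echo" client that replies with the last user message.
#[derive(Clone)] // a real client also needs clone_box()
struct EchoClient;

impl SimpleBotClient for EchoClient {
    fn send(&mut self, _bot_id: &str, messages: &[String]) -> Vec<String> {
        let text = messages.last().cloned().unwrap_or_default();
        // Real clients stream cumulative snapshots; emulate that here
        // with a partial item followed by the full response.
        vec!["...".to_string(), text]
    }
}
```

A real implementation would return the snapshots through BoxPlatformSendStream and wrap errors in ClientResult, but the send/bots/clone_box division of labor is the same.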
Crate Documentation
The complete Rust API documentation generated by cargo doc is available here.