Streaming and Async Tasks
Up to this point, the Guide has focused on synchronous, local, predictable interaction:
- signal / computed / effect
- focus and routing
- the continuity contract
But many real terminal applications need something else:
- background work
- network requests
- system sampling
- AI / LLM output
- long results that arrive in chunks
In other words, the UI is not only reacting to immediate user input. It also needs to handle content that arrives over time.
This page explains how that should be structured in Ansiq.
The most important rule: do not mutate UI signals directly from background tasks
This is the key idea.
Ansiq’s reactive handles are thread-bound. They should not be captured and mutated directly inside arbitrary tokio::spawn tasks.
The stable pattern is:
- background tasks produce messages
- messages come back through RuntimeHandle::emit(...)
- the app updates its own state inside update(...)
In other words:
async tasks bring new facts back to the app,
and the app decides how those facts become state and UI.
That boundary is what keeps streaming apps predictable.
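The same boundary can be sketched outside Ansiq entirely, using only std threads and an mpsc channel standing in for handle.spawn and emit (a minimal analogy, not the Ansiq API):

```rust
use std::sync::mpsc;
use std::thread;

// Messages are the only thing the background task sends back.
#[derive(Debug)]
enum Message {
    Chunk(String),
    Finish,
}

// The "app" owns its state and is the only place that mutates it.
fn run_app() -> Vec<String> {
    let (tx, rx) = mpsc::channel();

    // Background task: produces facts, never touches app state.
    thread::spawn(move || {
        for part in ["first", "second"] {
            tx.send(Message::Chunk(part.to_string())).unwrap();
        }
        tx.send(Message::Finish).unwrap();
    });

    // The update loop: every state change happens here, on the app's side.
    let mut chunks = Vec::new();
    while let Ok(msg) = rx.recv() {
        match msg {
            Message::Chunk(c) => chunks.push(c),
            Message::Finish => break,
        }
    }
    chunks
}

fn main() {
    assert_eq!(run_app(), vec!["first".to_string(), "second".to_string()]);
}
```

The channel is doing the same job emit does in Ansiq: it turns "a thread produced a fact" into "the app received a message".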
Continue evolving the NotesApp
Earlier, NotesApp was still just an input-and-list application.
Now we push it one step further: after submitting a note, instead of producing one complete result immediately, a background task streams back a generated summary in several chunks.
First define the messages:
#[derive(Clone, Debug)]
enum Message {
    Submit(String),
    Chunk(String),
    Finish,
}

Then give the app a piece of state for the active streaming draft:
#[derive(Default)]
struct NotesApp {
    notes: Vec<String>,
    draft: String,
    streaming: bool,
}

Where async work starts
There are two common entry points in Ansiq:
- start a long-lived background task in mount(...)
- start a one-shot task in update(...) as a reaction to a user action
For streaming output, the second one is often the right starting point: a submit action kicks off a task that emits several messages over time.
fn update(&mut self, message: Message, handle: &RuntimeHandle<Message>) {
    match message {
        Message::Submit(input) => {
            self.streaming = true;
            self.draft.clear();

            let handle = handle.clone();
            handle.spawn(async move {
                for chunk in [
                    format!("Planning the change for {}", input),
                    "Inspecting the workspace...".to_string(),
                    "Writing the first failing test...".to_string(),
                ] {
                    let _ = handle.emit(Message::Chunk(chunk));
                    tokio::time::sleep(std::time::Duration::from_millis(120)).await;
                }
                let _ = handle.emit(Message::Finish);
            });
        }
        Message::Chunk(chunk) => {
            if !self.draft.is_empty() {
                self.draft.push('\n');
            }
            self.draft.push_str(&chunk);
        }
        Message::Finish => {
            self.streaming = false;
        }
    }
}

The important parts are:
- the background task does not hold a signal
- the background task only emits Message
- real state mutation still happens inside update(...)
Real streams usually look like this
The for chunk in [...] example above is only there to make the pattern obvious first.
In real AI, agent, or network-driven applications, the more typical shape is consuming an actual stream:
use tokio_stream::StreamExt;

fn update(&mut self, message: Message, handle: &RuntimeHandle<Message>) {
    match message {
        Message::Submit(input) => {
            self.streaming = true;
            self.draft.clear();

            let handle = handle.clone();
            handle.spawn(async move {
                let mut stream = open_llm_stream(input).await;
                while let Some(chunk) = stream.next().await {
                    let _ = handle.emit(Message::Chunk(chunk));
                }
                let _ = handle.emit(Message::Finish);
            });
        }
        Message::Chunk(chunk) => {
            if !self.draft.is_empty() {
                self.draft.push('\n');
            }
            self.draft.push_str(&chunk);
        }
        Message::Finish => {
            self.streaming = false;
        }
    }
}

This is the common AI-agent path:
- consume a real stream inside tokio::spawn
- emit Message::Chunk(...) for every piece
- emit one final Message::Finish
The key thing is not which SDK provides open_llm_stream(...). The key thing is the boundary:
stream.next().await -> emit(Message) -> update(...) -> state
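The update half of that boundary is ordinary, testable Rust. The sketch below replays the Chunk/Finish arms as a plain function over a recorded message sequence (no Ansiq types involved):

```rust
#[derive(Debug)]
enum Message {
    Chunk(String),
    Finish,
}

// Minimal stand-in for the app's streaming state.
struct Draft {
    text: String,
    streaming: bool,
}

// Mirrors the Chunk/Finish arms of update(...): join chunks with
// newlines, and flip the streaming flag off when the stream ends.
fn apply(draft: &mut Draft, message: Message) {
    match message {
        Message::Chunk(chunk) => {
            if !draft.text.is_empty() {
                draft.text.push('\n');
            }
            draft.text.push_str(&chunk);
        }
        Message::Finish => draft.streaming = false,
    }
}

fn main() {
    let mut draft = Draft { text: String::new(), streaming: true };
    for msg in [
        Message::Chunk("Planning".to_string()),
        Message::Chunk("Inspecting".to_string()),
        Message::Finish,
    ] {
        apply(&mut draft, msg);
    }
    assert_eq!(draft.text, "Planning\nInspecting");
    assert!(!draft.streaming);
}
```

Because the state transition is a pure function of (state, message), you can exercise it without a terminal, a runtime, or a real stream.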
How the UI shows active streaming output
At render time, draft is just ordinary state:
fn render(&mut self, cx: &mut ViewCtx<'_, Message>) -> Element<Message> {
    let input = cx.signal(|| String::new());
    let current = input.get();

    view! {
        <Paragraph text={"Write a note and press Enter"} />
        <Input
            value={current.clone()}
            on_change={{
                let input = input.clone();
                move |next| input.set_if_changed(next)
            }}
            on_submit={|next| Some(Message::Submit(next))}
        />
        <Paragraph
            text={
                if self.streaming {
                    format!("Streaming...\n\n{}", self.draft)
                } else if self.draft.is_empty() {
                    "No active draft".to_string()
                } else {
                    self.draft.clone()
                }
            }
        />
    }
}

At this point the streaming model is already clear:
- a user action starts a task
- the task emits Chunk and Finish
- the app accumulates those chunks into draft
- the runtime performs localized screen updates
Why this boundary matters
Streaming systems often become unstable for one simple reason: background work starts mutating UI state directly.
Ansiq avoids that by keeping the pipeline explicit. As a flow:

async task
  -> Message::Chunk / Message::Finish
  -> app.update(...)
  -> state changes
  -> runtime performs partial updates
Once you keep that boundary intact, streaming becomes much easier to reason about.
What effect is good for in streaming scenarios
effect is not the main transport for streaming output, but it is still useful around the edges:
- logging when streaming flips from false to true
- reacting when a draft becomes empty again
- syncing non-UI side effects
So the rule is:
- messages are the main streaming channel
- effect is a reactive side-effect hook
When history enters the picture
At this stage, draft is still only live content.
Once streaming finishes, you usually need to decide:
- should it remain in the live viewport
- should it enter a history list
- should it be committed into terminal scrollback
That is why streaming eventually meets viewport/history semantics.
In a small example, keeping the result in current UI state is enough. In transcript- or agent-style applications, the next step is usually to introduce:
- completed turns
- history blocks
- scrollback commits
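As a minimal sketch of that transition (a hypothetical Transcript type for illustration, not an Ansiq API): when streaming finishes, the live draft moves out of the viewport state and into a history list:

```rust
// Hypothetical state shape: one live draft plus a list of completed turns.
struct Transcript {
    draft: String,
    history: Vec<String>,
}

impl Transcript {
    // On Finish: commit the draft as a completed turn and clear it.
    fn finish(&mut self) {
        if !self.draft.is_empty() {
            self.history.push(std::mem::take(&mut self.draft));
        }
    }
}

fn main() {
    let mut t = Transcript {
        draft: "summary".to_string(),
        history: Vec::new(),
    };
    t.finish();
    assert!(t.draft.is_empty());
    assert_eq!(t.history, vec!["summary".to_string()]);
}
```

Whether history then stays in the live UI or gets committed to terminal scrollback is a separate rendering decision; the state split itself is this simple.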
When mount(...) is a better fit
Not every async source is a one-shot task started by user input.
If your data source is long-lived, for example:
- system sampling
- file watching
- long-lived connections
- event streams
then mount(...) is usually the better place to start the task.
So:
- update(...) is a good place for action-triggered async work
- mount(...) is a good place for ongoing background work
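A long-lived source of this kind can be sketched with std threads and a channel standing in for mount(...) and emit (an analogy, not the Ansiq API): the task runs until the app side goes away, emitting one message per sample.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// A long-lived sampler, like one you'd start from mount(...): it loops
// until the receiving side is dropped, emitting one message per tick.
fn start_sampler(tx: mpsc::Sender<u64>) {
    thread::spawn(move || {
        let mut tick = 0u64;
        loop {
            tick += 1;
            if tx.send(tick).is_err() {
                break; // the app shut down; stop the background task
            }
            thread::sleep(Duration::from_millis(10));
        }
    });
}

fn main() {
    let (tx, rx) = mpsc::channel();
    start_sampler(tx);

    // The app side consumes samples as ordinary messages.
    let first: Vec<u64> = rx.iter().take(3).collect();
    assert_eq!(first, vec![1, 2, 3]);
    // Dropping rx makes the sampler's next send fail, ending the loop.
}
```

The shutdown path matters here: a long-lived task started in mount(...) should stop itself when the app is gone, which is why the sketch treats a failed send as the exit signal.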
The real conclusion of this page
In Ansiq, streaming is not a widget trick. It is an explicit data flow:
- async tasks emit messages
- the app consumes messages and updates state
- the runtime turns that into localized terminal updates
If you are building AI or agent-style terminal software, that is the main pattern to internalize:
- user action enters update(...)
- update(...) starts background work
- background work consumes a real stream
- every chunk returns via handle.emit(...)
- the app renders it from ordinary state
- the runtime performs localized refresh
If you keep that boundary intact, you can build:
- chunked loading
- realtime sampling
- LLM output
- incrementally updated transcripts
all without collapsing the runtime model.
Next: Layout and Rendering.
The next page returns to runtime behavior and explains why these updates do not automatically degenerate into full-tree redraws.