Get started in 30 seconds.
Three steps: load the bundle, mount the tag, and feed it Streaming UI Script text. Everything else (parsing, rendering, state, theming) is handled inside the shadow DOM.
1. Add the script tag
Drop a single ES module into any HTML page. The bundle is self-contained — it includes the parser, the runtime, the built-in components, and the styles, so no extra CSS file is required.
<script type="module" src="https://asfand-dev.github.io/streaming-ui-script/dist/streaming-ui-script.js"></script>
For environments without module support (older bundlers, Webflow, etc.), use the IIFE build:
<script src="https://asfand-dev.github.io/streaming-ui-script/dist/streaming-ui-script.iife.js" defer></script>
2. Place the tag
<streaming-ui-script id="response" theme="light"></streaming-ui-script>
The element renders empty by default. Set its content with one of:
- el.setResponse(text) — replace the program (one-shot rendering).
- el.appendChunk(text) — append a streaming chunk and re-render.
- el.response = "..." — equivalent to setResponse.
- Place the source text inside the tag as a fallback (used on connect).
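For example, a minimal sketch (programText and the chunk variables stand in for whatever your backend produces):
const el = document.getElementById("response");

// One-shot: hand the element a complete program.
el.setResponse(programText); // or: el.response = programText;

// Streaming: feed chunks as they arrive; the element re-renders each time.
el.appendChunk(firstChunk);
el.appendChunk(nextChunk);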
3. Wire up your LLM
Send the system prompt with every request so the model knows the language and the available components. You can either fetch the static system_prompt.txt from the CDN or generate it programmatically.
// Fetch the canned prompt
const systemPrompt = await fetch("https://asfand-dev.github.io/streaming-ui-script/dist/system_prompt.txt").then(r => r.text());
// or build a prompt from the live library (e.g. with extra rules and tool descriptions)
const el = document.querySelector("streaming-ui-script");
const prompt = el.getSystemPrompt({
  preamble: "You are an analytics assistant.",
  additionalRules: ["Always end with a FollowUpBlock of 2 prompts."],
  tools: [{ name: "list_orders", description: "Return recent orders.", argsExample: { limit: 10 } }],
});
Then stream the assistant response into the element:
const response = await fetch("/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ system: systemPrompt, messages }),
});
const reader = response.body.getReader();
const decoder = new TextDecoder();
el.streaming = true;
el.clear();
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  el.appendChunk(decoder.decode(value, { stream: true }));
}
el.streaming = false;
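If your endpoint returns the whole reply at once rather than a stream, skip the reader loop and render in one shot (the { text } payload shape here is an assumption about your API):
// Non-streaming alternative to the loop above.
const data = await response.json(); // assumes the API returns { text: "..." }
el.setResponse(data.text);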
4. (Optional) Provide tools
Streaming UI Script can call Query("tool_name", args) and Mutation("tool_name", args). Register tools as plain functions on the element so it can fetch fresh data on demand.
el.setTools({
  list_orders: async ({ limit }) => {
    const res = await fetch(`/api/orders?limit=${limit}`);
    return res.json();
  },
  update_order: async ({ id, status }) => {
    await fetch(`/api/orders/${id}`, {
      method: "PATCH",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ status }),
    });
    return { ok: true };
  },
});
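With those tools registered, programs generated by the model can invoke them by name. A line like this in the streamed output (arguments illustrative, mirroring the argsExample above) would call your list_orders function:
Query("list_orders", { limit: 10 })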
5. (Optional) Listen for events
Buttons that call @ToAssistant("...") dispatch an assistant-message event. Wire that to your chat input to keep the conversation flowing.
el.addEventListener("assistant-message", (event) => {
  sendMessageToLLM(event.detail.message);
});
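Here sendMessageToLLM is your own code. A minimal sketch, assuming the messages array from step 3 is in scope:
// Hypothetical helper: extend the conversation and re-run the step 3 request.
async function sendMessageToLLM(message) {
  messages.push({ role: "user", content: message });
  // ...then repeat the fetch + appendChunk loop from step 3 to stream the reply.
}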
6. (Optional) Debug parse errors
Parse errors are silenced in the rendered UI by default, so users see clean output even when the LLM emits a partial or invalid line. Add showerrors="true" while iterating on prompts to display them inline:
<streaming-ui-script showerrors="true" theme="light"></streaming-ui-script>
The error event fires regardless of this attribute, so production apps can keep the banner hidden and still log parse failures programmatically.
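For example (the exact shape of event.detail is an assumption; log whatever your build exposes):
el.addEventListener("error", (event) => {
  // Ship parse failures to your logging pipeline instead of the UI.
  console.warn("streaming-ui-script parse error:", event.detail);
});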
That's it
You now have an LLM-driven UI runtime that streams components, supports two-way state binding, refreshes data on a schedule, and works in any framework.