
Add openai_agents streaming sample#301

Draft
jssmith wants to merge 2 commits into temporalio:main from jssmith:openai-agents-streaming-sample

Conversation

@jssmith
Contributor

@jssmith jssmith commented Apr 30, 2026

Summary

Adds an openai_agents/streaming/ sample demonstrating buffered token streaming for OpenAI Agents-backed workflows via temporalio.contrib.workflow_streams (experimental, contrib/pubsub branch of sdk-python).

The OpenAI Agents plugin's ModelActivityParameters carries a streaming_event_topic; the model activity publishes raw TResponseStreamEvents to that topic, batched over streaming_event_batch_interval (default 100ms). The workflow emits a sentinel on a done topic when Runner.run_streamed finishes; subscribers iterate (events, done) and break on the sentinel. race_with_workflow handles the case where the workflow fails before publishing the sentinel.
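The batch-then-sentinel pattern described above can be sketched with plain asyncio primitives — this is an illustrative toy, not the `temporalio.contrib.workflow_streams` API itself; the queue stands in for a topic, `SENTINEL` for the done-topic message, and the batch size and 100ms flush interval are placeholders:

```python
import asyncio

SENTINEL = object()  # stands in for the message published on the "done" topic


async def publish(queue: asyncio.Queue, events: list, batch_interval: float = 0.1, batch_size: int = 3) -> None:
    """Toy publisher: flush raw events in batches on an interval, then signal completion."""
    for i in range(0, len(events), batch_size):
        await queue.put(events[i : i + batch_size])  # one batch per flush
        await asyncio.sleep(batch_interval)
    await queue.put(SENTINEL)  # the run finished; tell subscribers to stop


async def consume(queue: asyncio.Queue) -> list:
    """Toy subscriber: drain batches until the sentinel arrives, then break."""
    received = []
    while True:
        item = await queue.get()
        if item is SENTINEL:
            break
        received.extend(item)
    return received


async def main() -> list:
    queue = asyncio.Queue()
    events = [f"delta-{i}" for i in range(7)]
    _, received = await asyncio.gather(publish(queue, events), consume(queue))
    return received


if __name__ == "__main__":
    print(asyncio.run(main()))
```

Batching trades a little latency (up to one flush interval) for far fewer topic publishes, which is why deltas arrive as small bursts rather than one event per token.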

Two scenarios:

  • stream_text — text-delta events from a simple haiku agent
  • stream_items — agent-update / handoff / tool-call events across a multi-agent workflow with a joke-rating activity
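The race_with_workflow behavior mentioned above can also be sketched generically. This is not the contrib helper's actual signature — just a self-contained asyncio model of the failure case it guards against: if the workflow errors out before the sentinel is ever published, the subscriber's error surfaces instead of the stream hanging forever:

```python
import asyncio


async def race_with_workflow(stream_task: asyncio.Task, workflow_task: asyncio.Task):
    """Toy race: finish with the stream, unless the workflow fails first."""
    done, pending = await asyncio.wait(
        {stream_task, workflow_task}, return_when=asyncio.FIRST_COMPLETED
    )
    if workflow_task in done and workflow_task.exception() is not None:
        for task in pending:
            task.cancel()  # the sentinel will never arrive; stop waiting for it
        raise workflow_task.exception()
    return await stream_task


async def main() -> str:
    async def never_ending_stream():
        await asyncio.sleep(60)  # simulates a subscriber whose sentinel never comes

    async def failing_workflow():
        await asyncio.sleep(0.01)
        raise RuntimeError("workflow failed before sentinel")

    stream = asyncio.create_task(never_ending_stream())
    workflow = asyncio.create_task(failing_workflow())
    try:
        await race_with_workflow(stream, workflow)
    except RuntimeError as err:
        return str(err)
    return "stream completed"


if __name__ == "__main__":
    print(asyncio.run(main()))
```

Without this race, a workflow failure between the last event batch and the sentinel would leave every subscriber blocked on a message that never arrives.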

This is the second half of #299. Independent of the workflow_streams basics (separate PR), though both target the same sdk-python contrib branch.

Test plan

  • Build sdk-python from contrib/pubsub and install into the samples uv environment
  • OPENAI_API_KEY=... uv run openai_agents/streaming/run_worker.py
  • uv run openai_agents/streaming/run_stream_text_workflow.py — verify text deltas print as small bursts
  • uv run openai_agents/streaming/run_stream_items_workflow.py — verify agent-update / tool-call / message-output events render in order

@jssmith jssmith requested a review from a team as a code owner April 30, 2026 03:34
@jssmith jssmith mentioned this pull request Apr 30, 2026
@jssmith jssmith marked this pull request as draft April 30, 2026 03:37
run_stream_items_workflow: print the workflow's final result after
the streamed events render — matches run_stream_text_workflow and
makes streamed-vs-final parity visible.
