Add openai_agents streaming sample #301
Draft
jssmith wants to merge 2 commits into temporalio:main from
Conversation
Demonstrates buffered token streaming for OpenAI Agents-backed workflows via `temporalio.contrib.workflow_streams` (experimental, `contrib/pubsub` branch of sdk-python). The OpenAI Agents plugin's `ModelActivityParameters` carries a `streaming_event_topic`; the model activity publishes raw stream events to that topic with a configurable flush interval (default 100 ms), and the workflow emits a sentinel on a "done" topic when `Runner.run_streamed` finishes. Subscribers iterate `(events, done)` and break on the sentinel; `race_with_workflow` handles the case where the workflow fails before publishing the sentinel.

Two scenarios:

- `stream_text`: text-delta events from a simple haiku agent
- `stream_items`: agent-update / handoff / tool-call events across a multi-agent workflow with a joke-rating activity
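The subscribe-and-break-on-sentinel loop described above can be sketched with plain asyncio primitives. This is a simulation of the pattern only, not the experimental `workflow_streams` API: the queue, the `SENTINEL` object, and all function names below are illustrative stand-ins.

```python
import asyncio

# Stand-in for the marker the workflow publishes on the "done" topic.
SENTINEL = object()


async def publish_events(queue: asyncio.Queue) -> None:
    # Stand-in for the model activity publishing raw stream events.
    for token in ["Frost ", "on ", "the ", "pane"]:
        await queue.put(token)
    # Stand-in for the workflow emitting the sentinel once
    # Runner.run_streamed has finished.
    await queue.put(SENTINEL)


async def consume_events(queue: asyncio.Queue) -> list:
    # Subscribers iterate events and break when the sentinel arrives.
    received = []
    while True:
        event = await queue.get()
        if event is SENTINEL:
            break
        received.append(event)
    return received


async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    producer = asyncio.ensure_future(publish_events(queue))
    tokens = await consume_events(queue)
    await producer  # surface any publisher-side failure
    return tokens


tokens = asyncio.run(main())
```

A real subscriber additionally races this loop against workflow completion (the role `race_with_workflow` plays), so a workflow that fails before publishing the sentinel does not leave the consumer waiting forever.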
`run_stream_items_workflow`: print the workflow's final result after the streamed events render; this matches `run_stream_text_workflow` and makes streamed-vs-final parity visible.
Summary
Adds an `openai_agents/streaming/` sample demonstrating buffered token streaming for OpenAI Agents-backed workflows via `temporalio.contrib.workflow_streams` (experimental, `contrib/pubsub` branch of sdk-python).

The OpenAI Agents plugin's `ModelActivityParameters` carries a `streaming_event_topic`; the model activity publishes raw `TResponseStreamEvent`s to that topic, batched over `streaming_event_batch_interval` (default 100 ms). The workflow emits a sentinel on a done topic when `Runner.run_streamed` finishes; subscribers iterate `(events, done)` and break on the sentinel. `race_with_workflow` handles the case where the workflow fails before publishing the sentinel.

Two scenarios:

- `stream_text`: text-delta events from a simple haiku agent
- `stream_items`: agent-update / handoff / tool-call events across a multi-agent workflow with a joke-rating activity

This is the second half of #299. Independent of the workflow_streams basics (separate PR), though both target the same sdk-python contrib branch.
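The activity-side batching can be illustrated with a small self-contained sketch: buffer incoming stream events and flush one batch per interval, so subscribers see small bursts rather than one message per token. The real implementation lives on the `contrib/pubsub` branch; `FLUSH_INTERVAL`, `batched_publish`, and the helpers here are hypothetical names for illustration only.

```python
import asyncio
import time

FLUSH_INTERVAL = 0.1  # stand-in for streaming_event_batch_interval (100 ms)


async def batched_publish(events, publish) -> None:
    # Buffer raw stream events and flush them as one batch per interval,
    # plus a final partial batch at end of stream.
    buffer = []
    deadline = time.monotonic() + FLUSH_INTERVAL
    async for event in events:
        buffer.append(event)
        if time.monotonic() >= deadline:
            await publish(list(buffer))
            buffer.clear()
            deadline = time.monotonic() + FLUSH_INTERVAL
    if buffer:
        await publish(list(buffer))


async def fake_stream(n: int):
    # Simulated token stream: one event every 10 ms.
    for i in range(n):
        await asyncio.sleep(0.01)
        yield i


batches: list[list[int]] = []


async def record(batch: list[int]) -> None:
    batches.append(batch)


asyncio.run(batched_publish(fake_stream(20), record))
```

Whatever the batch boundaries end up being on a given run, every event is delivered exactly once and in order, which is the property the sample's renderers rely on.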
Test plan
- Check out the sdk-python `contrib/pubsub` branch and install into the samples uv environment
- `OPENAI_API_KEY=... uv run openai_agents/streaming/run_worker.py`
- `uv run openai_agents/streaming/run_stream_text_workflow.py`: verify text deltas print as small bursts
- `uv run openai_agents/streaming/run_stream_items_workflow.py`: verify agent-update / tool-call / message-output events render in order