V2.0 beta - Contents lost between turns within a dynamic workflow #5461

@leothirdopinion

Description

Describe the Bug:

When an LlmAgent is run as an inner node via await ctx.run_node(agent, node_input) inside a dynamic Workflow, the first LLM call receives node_input as expected. After the agent invokes a tool (e.g., a skill load) and the run continues to a second LLM call within the same run_node invocation, the model request no longer includes the original node_input text in LlmRequest.contents (LiteLLM/Bedrock path). The subsequent user-side content is dominated by tool results and synthetic “For context:” segments produced by ADK’s cross-author presentation (contents.py), so the model loses the initial structured input even though the agent uses include_contents="default".
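To make the expected invariant concrete, here is a minimal, self-contained toy model of the multi-round request assembly. This is not ADK code and makes no ADK imports; the roles and the `assemble_contents` helper are hypothetical stand-ins that only illustrate what include_contents="default" should guarantee: the initial user content from run_node survives into the request built after a tool round.

```python
# Toy model of content assembly across tool rounds within one inner-agent
# run. NOT ADK code -- assemble_contents and the event dicts are hypothetical
# stand-ins illustrating the invariant described in this report.

def assemble_contents(history):
    """Build the request contents from the full event history.

    With include_contents='default', every prior event -- including the
    initial user turn from run_node -- should be included."""
    return [e for e in history if e["role"] in ("user", "assistant", "tool")]

# Round 1: the initial node_input reaches the model, as observed.
history = [{"role": "user", "text": "<structured node_input>"}]
round1 = assemble_contents(history)
assert any(e["text"] == "<structured node_input>" for e in round1)

# The agent calls a tool; the call and its result are appended, and a
# second request is assembled within the same run_node invocation.
history += [
    {"role": "assistant", "text": "<tool call: skill load>"},
    {"role": "tool", "text": "<tool result>"},
]
round2 = assemble_contents(history)

# Expected: the original node_input is still present in round 2.
# Observed in v2.0.0b1 (LiteLLM/Bedrock path): it is absent, displaced by
# tool results and synthetic "For context:" segments.
assert any(e["text"] == "<structured node_input>" for e in round2)
```

The toy assembler keeps the invariant; the bug is that ADK's actual assembly in contents.py does not, once the second round begins.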

Expected Behavior:

Across tool rounds within the same inner LlmAgent run, the conversation sent to the model should still include the initial user Content from run_node (or equivalent), consistent with include_contents="default".

Observed Behavior:

After the first tool response, the original node_input is absent from the assembled request contents (100% repro).

Environment Details:

  • ADK Library Version (pip show google-adk): v2.0.0b1
  • Desktop OS: Windows
  • Python Version (python -V): 3.14

Model Information:

  • Are you using LiteLLM: Yes
  • Which model is being used: Bedrock Sonnet 4.6 / Haiku 4.6

How often has this issue occurred?:

  • Always (100%)

Metadata

Labels

v2 (Affects only 2.0 version), workflow ([Component] This issue is related to ADK workflow)
