fix(models): forward sampling params and populate finish_reason in AnthropicLlm #5441
Open
SuperMarioYL wants to merge 1 commit into google:main from
Conversation
fix(models): forward sampling params and populate finish_reason in AnthropicLlm

- Forward `temperature`, `top_p`, `top_k`, `stop_sequences` from `LlmRequest.config` to both the non-streaming and streaming Anthropic `messages.create` calls. Previously these parameters were silently ignored, making it impossible to control generation from the ADK config interface.
- Populate `finish_reason` on `LlmResponse` for both non-streaming (uncomment the existing helper call) and streaming paths (capture `stop_reason` from the `message_delta` event).
- Extend `to_google_genai_finish_reason()` with the two missing mappings: `pause_turn` → `STOP` (extended thinking turn pause) and `refusal` → `SAFETY` (model declined the request).

Fixes google#5393, google#5394
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Collaborator
Response from ADK Triaging Agent: Hello @SuperMarioYL, thank you for your contribution! It looks like the Contributor License Agreement (CLA) check is failing. Before we can merge this PR, you'll need to sign the CLA. Please check the details of the failing check. Thanks!
Please ensure you have read the contribution guide before creating a pull request.
Link to Issue or Description of Change
1. Link to existing issues:
Testing Plan
Unit Tests:
Manual End-to-End (E2E) Tests:
These fixes address configuration parameters that are forwarded to the Anthropic SDK. The behavior is fully covered by unit tests using mocked `messages.create` calls. To manually verify:

Summary of Changes
Issue #5393 — sampling params silently dropped:

- Read `temperature`, `top_p`, `top_k`, `stop_sequences` from `llm_request.config` before both streaming and non-streaming paths
- Pass them (defaulting to `NOT_GIVEN` when absent/None) to both `messages.create` calls
- `LiteLlm` already handles these correctly; `AnthropicLlm` now matches that behavior

Issue #5394 — finish_reason never populated:

- Uncomment the `# TODO: Deal with these later.` comment block and call `to_google_genai_finish_reason(message.stop_reason)`
- Add `stop_reason: Optional[str] = None`, capture it from `event.delta.stop_reason` in the `message_delta` handler, and pass it to the final `LlmResponse`
- Add the missing `stop_reason` mappings: `pause_turn` → `STOP` (extended-thinking turn pause) and `refusal` → `SAFETY`

Checklist
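The `stop_reason` mapping extended by this PR can be sketched as below. This is an assumed shape, not the actual ADK code: the real `to_google_genai_finish_reason()` returns `google.genai` `FinishReason` enum members rather than the plain strings used here, and its existing mapping entries may differ; the sketch only illustrates the two additions (`pause_turn` and `refusal`) alongside the common Anthropic stop reasons.

```python
from typing import Optional

# Anthropic stop_reason -> google.genai FinishReason name (sketch).
# The pause_turn and refusal entries are the two mappings this PR adds.
_STOP_REASON_TO_FINISH_REASON = {
    "end_turn": "STOP",
    "stop_sequence": "STOP",
    "max_tokens": "MAX_TOKENS",
    "tool_use": "STOP",
    "pause_turn": "STOP",  # extended-thinking turn pause
    "refusal": "SAFETY",   # model declined the request
}


def to_google_genai_finish_reason(stop_reason: Optional[str]) -> str:
  """Return the FinishReason name for an Anthropic stop_reason,
  falling back to FINISH_REASON_UNSPECIFIED for None or unknown values."""
  if stop_reason is None:
    return "FINISH_REASON_UNSPECIFIED"
  return _STOP_REASON_TO_FINISH_REASON.get(
      stop_reason, "FINISH_REASON_UNSPECIFIED"
  )
```

In the streaming path, the captured `event.delta.stop_reason` from the `message_delta` handler would be passed through this same helper when building the final `LlmResponse`, so both paths report a consistent `finish_reason`.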
# TODO: Deal with these later.comment block and callto_google_genai_finish_reason(message.stop_reason)stop_reason: Optional[str] = None, capture it fromevent.delta.stop_reasonin themessage_deltahandler, and pass to the finalLlmResponsestop_reasonmappings:pause_turn→STOP(extended-thinking turn pause) andrefusal→SAFETYChecklist