
Conversation

@ckl117 (Collaborator) commented Dec 8, 2025

Motivation

Compute logprobs each step based on the number actually requested by the batch, rather than always computing the maximum of 20; this improves end-to-end performance by 10%.
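The idea can be pictured with a rough sketch. This is illustrative only; the class and function names below are hypothetical stand-ins, not FastDeploy's actual API:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SamplingParams:   # hypothetical stand-in, not FastDeploy's class
    logprobs: Optional[int] = None


@dataclass
class Request:          # hypothetical stand-in
    sampling_params: SamplingParams


MAX_LOGPROBS = 20  # the previous fixed per-step cap


def batch_num_logprobs(requests: list[Request]) -> int:
    """Largest logprob count actually requested by any request in the batch."""
    return max((r.sampling_params.logprobs or 0 for r in requests), default=0)


# Before: every step computed top-MAX_LOGPROBS logprobs for the whole batch.
# After: only compute as many as this batch really needs.
batch = [Request(SamplingParams(logprobs=5)), Request(SamplingParams())]
print(min(batch_num_logprobs(batch), MAX_LOGPROBS))  # -> 5, not 20
```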

Modifications

No changes.

Usage or Command

No changes.

Accuracy Tests

Already covered by existing tests.

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [FDConfig], [APIServer], [Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests. If there are no unit tests, please explain the reason in this PR.
  • Provide accuracy results.
  • If the current PR targets a release branch, make sure it has been submitted to the develop branch first, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.


paddle-bot bot commented Dec 8, 2025

Thanks for your contribution!

```python
self.top_p_normalized_logprobs = True
self.prompt_logprobs_reqs: dict[str, Request] = {}
self.in_progress_prompt_logprobs: dict[str, LogprobsTensors] = {}
self.forward_batch_reqs_list: list[Request] = [None for _ in range(self.scheduler_config.max_num_seqs)]
```
Collaborator

Please also clean these up in clear_requests.
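A minimal sketch of the requested cleanup, assuming clear_requests is the method that resets per-batch state (the field names follow the diff above; the method body is illustrative, not the actual implementation):

```python
# Illustrative sketch only: a clear_requests that also resets the new
# logprobs bookkeeping added in this diff.
def clear_requests(self):
    self.prompt_logprobs_reqs.clear()
    self.in_progress_prompt_logprobs.clear()
    # Re-create the slot list instead of clearing it, so it keeps a fixed
    # length of max_num_seqs with every slot emptied.
    self.forward_batch_reqs_list = [None] * self.scheduler_config.max_num_seqs
```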

Comment on lines +183 to +188
```python
logprobs = d.get("logprobs", None)
if logprobs is not None:
    if logprobs is True:
        sampling_params.logprobs = d.get("top_logprobs", None)
    elif logprobs is False:
        sampling_params.logprobs = None
```
Collaborator

Could this be simplified a bit?

Suggested change

```diff
-logprobs = d.get("logprobs", None)
-if logprobs is not None:
-    if logprobs is True:
-        sampling_params.logprobs = d.get("top_logprobs", None)
-    elif logprobs is False:
-        sampling_params.logprobs = None
+logprobs = d.get("logprobs", None)
+if logprobs:
+    sampling_params.logprobs = d.get("top_logprobs", None)
+else:
+    sampling_params.logprobs = None
```

Collaborator Author

logprobs may be true, false, or an int value in [-1, 0, 1, 2, ...]; the chat endpoint needs to map the bool type to a number or to None, so the two bool branches can't be collapsed into a single truthiness check.
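A standalone illustration of the distinction (hypothetical values, not FastDeploy code):

```python
# Why the explicit `is True` / `is False` checks are kept: logprobs may be
# a bool or an int, and plain truthiness conflates the two cases.
for logprobs in (True, False, 0, 1, -1, None):
    if logprobs is True:        # bool True: map to the top_logprobs count
        action = "use top_logprobs"
    elif logprobs is False:     # bool False: disable logprobs
        action = "set to None"
    else:                       # int (or absent): leave the value as-is
        action = "pass through unchanged"
    print(f"logprobs={logprobs!r}: {action}")

# A bare `if logprobs:` would wrongly send 0 (a valid int) down the
# disable path and overwrite 1 or -1 with the top_logprobs value.
```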

@codecov-commenter

Codecov Report

❌ Patch coverage is 44.00000% with 14 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@d1bd40d). Learn more about missing BASE report.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| fastdeploy/worker/gpu_model_runner.py | 53.33% | 5 Missing and 2 partials ⚠️ |
| fastdeploy/engine/request.py | 16.66% | 4 Missing and 1 partial ⚠️ |
| fastdeploy/model_executor/layers/sample/sampler.py | 0.00% | 0 Missing and 2 partials ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #5430   +/-   ##
==========================================
  Coverage           ?   59.59%           
==========================================
  Files              ?      327           
  Lines              ?    40666           
  Branches           ?     6175           
==========================================
  Hits               ?    24233           
  Misses             ?    14555           
  Partials           ?     1878           
| Flag | Coverage Δ |
|---|---|
| GPU | 59.59% <44.00%> (?) |

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
