Conversation

@faaany faaany commented Dec 4, 2025

Purpose

Add initial support for the Intel XPU backend in vLLM-Omni so the project can run on Intel hardware.

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link the existing issues this PR will resolve)".
  • The test plan, such as the command(s) used to run the tests.
  • The test results, such as a before/after comparison or e2e results.
  • (Optional) Any necessary documentation updates, such as updating supported_models.md and the examples for a new model.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft.


@faaany changed the title from "[Feat] support XPU Backend in vLLM-Omni" to "[Feat] Support XPU Backend in vLLM-Omni" on Dec 4, 2025
@congw729 (Contributor) commented Dec 4, 2025

Please sign off your commits using git commit -s.
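
For reference, a minimal sketch of the sign-off workflow; the commit message and the commit count below are illustrative:

```sh
# Sign off a new commit (adds a "Signed-off-by:" trailer from your git identity)
git commit -s -m "Support XPU backend in vLLM-Omni"

# Add a sign-off to the most recent commit without changing its message
git commit --amend --signoff --no-edit

# Retroactively sign off the last N commits (here, the last 5);
# this rewrites history, so a force-push is needed afterwards
git rebase --signoff HEAD~5
```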

ywang96 and others added 5 commits December 4, 2025 01:39
@ywang96 (Member) commented Dec 4, 2025

@faaany Hey, thanks for your contribution! I just want to emphasize that the project is still in a fast-iterating development phase, and IMO it's not yet in a shape where other hardware backends can start integrating. Happy to discuss offline with the Intel folks on this!

@jikunshang commented

> @faaany Hey, thanks for your contribution! I just want to emphasize that the project is still in a fast-iterating development phase, and IMO it's not yet in a shape where other hardware backends can start integrating. Happy to discuss offline with the Intel folks on this!

hi @ywang96, understood; we're aware of the current situation. We'll mark this PR as a draft in case anyone wants to give it a try on Intel hardware. Intel will keep monitoring vllm-omni and will also help with the refactoring for multi-hardware backend support in both vllm and vllm-omni.
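
For anyone who wants to try the draft on Intel hardware, a quick sanity check that PyTorch can see the XPU device before running anything; this assumes a PyTorch build with XPU support (the torch.xpu API, available in recent releases):

```sh
# Check that PyTorch's XPU backend is available and count the visible devices
python -c "import torch; print('XPU available:', torch.xpu.is_available()); print('Device count:', torch.xpu.device_count())"
```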
