
Conversation

@yizhuoz004 (Collaborator)

No description provided.

//===----------------------------------------------------------------------===//

bool tensorrt::AttentionOp::isValidForTensorRTVersion(
    int64_t trtMajorVersion) {
@yizhuoz004 (Collaborator, Author):

We should also check the minor version here.

@yizhuoz004 (Collaborator, Author):

@christopherbate Could you advise if there's a convenient way to do this? Thanks!

@yizhuoz004 force-pushed the mlir-trt-attention branch 4 times, most recently from cffa3a2 to dcfc0c7 on November 14, 2025 00:33
@copy-pr-bot (bot) commented on Dec 8, 2025:

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Fix Attention addLayer, make cmake work with TRT 10.14


3 participants