Fix I32x4 min/max working incorrectly for large values#589
Open
Shnatsel wants to merge 1 commit into servo:main from
Conversation
Member
this looks related to #583 and I suspect we can also revert test changes.
But the PR/code was not done by AI, right?
Author
The regression test/PoC is AI-written. I always have models write a PoC that fails miri before looking into the issue. The production code change is human. Feel free to drop the test if you have a policy against including AI contributions.
The ARM I32x4::{min,max} implementations must compare integer lanes directly. Converting through f32 loses precision above 2^24, so values such as 16_777_217 round to neighboring integers and can produce a numerically wrong vector.
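The precision loss can be demonstrated in a few lines of standalone Rust. This is a minimal sketch of the failure mode described above, not the actual servo code: it shows that `2^24 + 1` and `2^24` collapse to the same `f32` value, so a max computed through a float round-trip returns the wrong integer, while a direct integer comparison is exact.

```rust
fn main() {
    // 2^24 + 1 is the smallest positive integer that f32 cannot represent
    // exactly (f32 has a 24-bit significand).
    let a: i32 = 16_777_217; // 2^24 + 1
    let b: i32 = 16_777_216; // 2^24

    // Converting to f32 rounds both values to the same float...
    assert_eq!(a as f32, b as f32);

    // ...so taking the max through a float round-trip loses the true maximum:
    let float_max = (a as f32).max(b as f32) as i32;
    assert_eq!(float_max, 16_777_216); // wrong: a (16_777_217) was larger

    // Comparing the integers directly gives the correct answer:
    assert_eq!(a.max(b), 16_777_217);
}
```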
This PR fixes the issue and adds a regression test. The issue was spotted by GPT-5.5 xhigh.