Fix #8802: read nda_croppeds from input dict when pre-computed (#8809)
williams145 wants to merge 7 commits into Project-MONAI:dev
Conversation
ImageStats.__call__ crashed with UnboundLocalError when a caller pre-populated "nda_croppeds" in the input dict. The variable was only assigned in the if-branch but referenced unconditionally on both paths. Added the missing else-branch to read the pre-computed value from the dict, and wrapped the method body in try/finally to guarantee grad state is always restored on exit. Fixes Project-MONAI#8802
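The shape of that fix can be sketched as follows. This is a minimal reconstruction for illustration only: `image_stats_call` and the `get_foreground_image` stub are hypothetical stand-ins, not the actual MONAI source.

```python
import torch

def get_foreground_image(nda):
    """Stand-in for MONAI's foreground-cropping helper (assumption, not the real code)."""
    mask = nda > 0
    return nda[mask] if bool(mask.any()) else nda

def image_stats_call(d, image_key="image"):
    """Sketch of the corrected ImageStats.__call__ control flow (hypothetical names)."""
    restore_grad_state = torch.is_grad_enabled()
    try:
        torch.set_grad_enabled(False)  # stats computation needs no autograd
        ndas = [d[image_key][i] for i in range(d[image_key].shape[0])]
        if "nda_croppeds" not in d:
            # original path: compute the per-channel foreground crops
            nda_croppeds = [get_foreground_image(nda) for nda in ndas]
        else:
            # the previously missing else-branch: reuse the caller's value
            nda_croppeds = d["nda_croppeds"]
        # ... per-channel statistics would be computed from nda_croppeds here ...
        return nda_croppeds
    finally:
        # runs on every exit path, so the caller's grad mode is always restored
        torch.set_grad_enabled(restore_grad_state)
```

Without the else-branch, the `nda_croppeds` name is unbound whenever the key is already present, which is exactly the UnboundLocalError reported in #8802; the try/finally makes the grad-state restore unconditional.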
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@monai/auto3dseg/analyzer.py`:
- Around line 260-264: The code uses caller-supplied d["nda_croppeds"] without
validation which can yield incorrect per-channel stats; in the block handling
nda_croppeds (the if/else around "nda_croppeds" and get_foreground_image),
verify that d["nda_croppeds"] is a sequence with the same length as ndas and
that each element is a valid array-like (e.g., numpy ndarray) with expected
shape/dtype; if the check fails, fall back to recomputing nda_croppeds =
[get_foreground_image(nda) for nda in ndas]; update the branch that assigns
nda_croppeds so it validates d["nda_croppeds"] before using it and documents the
fallback behavior.
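The validate-then-fall-back behaviour this comment asks for might look like the sketch below. Both `resolve_nda_croppeds` and the `get_foreground_image` stub are hypothetical names for illustration; the real analyzer may validate differently (the PR ultimately chose to raise ValueError instead of falling back).

```python
import numpy as np

def get_foreground_image(nda):
    """Hypothetical stand-in for MONAI's per-channel foreground crop."""
    return nda[nda > 0]

def resolve_nda_croppeds(d, ndas):
    """Validate a caller-supplied d["nda_croppeds"], recomputing on failure (sketch)."""
    pre = d.get("nda_croppeds")
    valid = (
        isinstance(pre, (list, tuple))
        and len(pre) == len(ndas)
        and all(hasattr(x, "shape") and hasattr(x, "dtype") for x in pre)  # array-like
    )
    if valid:
        return list(pre)
    # absent or invalid: recompute per channel instead of failing downstream
    return [get_foreground_image(nda) for nda in ndas]
```

A mismatched channel count or a non-sequence value silently triggers the recompute path rather than producing incorrect per-channel stats.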
In `@tests/apps/test_auto3dseg.py`:
- Around line 554-570: The test test_analyzer_grad_state_restored_after_call
currently mutates global torch grad mode without guaranteeing restoration on
exceptions; wrap the two analyzer(data) calls in a try/finally: capture the
original state with orig = torch.is_grad_enabled(), set the required state for
each subcase with torch.set_grad_enabled(True/False), call analyzer(data),
assert the state, and in the finally restore torch.set_grad_enabled(orig) so the
global grad mode is always returned (update/remove the trailing
torch.set_grad_enabled(True) in favor of the finally restore).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 967bb49c-5419-4036-be0a-b25f2e9d6a47
📒 Files selected for processing (2)
- monai/auto3dseg/analyzer.py
- tests/apps/test_auto3dseg.py
Outdated
```python
def test_analyzer_grad_state_restored_after_call(self):
    # Verify that ImageStats.__call__ always restores the grad-enabled state it found
    # on entry, regardless of which state that was.
    analyzer = ImageStats(image_key="image")
    image = torch.rand(1, 10, 10, 10)
    data = {"image": MetaTensor(image)}

    # grad enabled before call → must still be enabled after
    torch.set_grad_enabled(True)
    analyzer(data)
    assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"

    # grad disabled before call → must still be disabled after
    torch.set_grad_enabled(False)
    analyzer(data)
    assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
    torch.set_grad_enabled(True)  # restore for subsequent tests
```
Make grad-state cleanup exception-safe in the test.
The test mutates global grad mode; restore the original state in finally to prevent cross-test leakage on failures.
Proposed fix
```diff
 def test_analyzer_grad_state_restored_after_call(self):
     # Verify that ImageStats.__call__ always restores the grad-enabled state it found
     # on entry, regardless of which state that was.
     analyzer = ImageStats(image_key="image")
     image = torch.rand(1, 10, 10, 10)
     data = {"image": MetaTensor(image)}
-
-    # grad enabled before call → must still be enabled after
-    torch.set_grad_enabled(True)
-    analyzer(data)
-    assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
-
-    # grad disabled before call → must still be disabled after
-    torch.set_grad_enabled(False)
-    analyzer(data)
-    assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
-    torch.set_grad_enabled(True)  # restore for subsequent tests
+    original_grad_state = torch.is_grad_enabled()
+    try:
+        # grad enabled before call → must still be enabled after
+        torch.set_grad_enabled(True)
+        analyzer(data)
+        assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
+
+        # grad disabled before call → must still be disabled after
+        torch.set_grad_enabled(False)
+        analyzer(data)
+        assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
+    finally:
+        torch.set_grad_enabled(original_grad_state)
```
atharvajoshi01
left a comment
Two fixes in one: reading precomputed nda_croppeds from the dict (the actual bug) and wrapping the whole block in try/finally to restore grad state on exception. Both are correct. The else branch on line ~261 was the missing piece from the original code.
I, UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>, hereby add my Signed-off-by to this commit: 2774923

Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>
…and add test docstrings

- Validate pre-computed nda_croppeds is a list with one entry per channel, raising ValueError with a clear message if not
- Convert inline comments to docstrings on test methods to satisfy docstring coverage CI check
- Wrap grad-disabled test leg in try/finally so global state is always restored

Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>
The quick-py3 (macOS-latest) failure is a pre-existing infrastructure issue: pytype 2024.4.11 fails to install on the macOS runner due to a missing pybind11 dependency, which is unrelated to the changes in this PR. The same failure appears on other open PRs against dev for the same reason.
Note: PR #8803 by @bluehyena covers overlapping ground on issue #8802. The key difference in this PR is explicit validation of the pre-computed nda_croppeds value: a wrong type or mismatched channel count raises a clear ValueError rather than propagating silently to a confusing downstream error.
…t in ImageStats.__call__

The docstring was positioned after input validation code that was added in a prior commit, preventing Python from recognising it as a docstring and causing the docstring coverage check to fail at 40%. Moves the docstring to the first statement of __call__ and removes the out-of-scope input validation block. The core fix (else branch reading nda_croppeds from the input dict with type/length validation, and the try/finally grad-state guard) is unchanged.

Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@monai/auto3dseg/analyzer.py`:
- Around line 248-253: Update the function/method docstring that validates
nda_croppeds/ndas to include ValueError in the Raises section: the code now
raises ValueError when nda_croppeds is not a list/tuple matching the length of
ndas (see variables nda_croppeds and ndas in the validation block), so add a
short entry documenting this ValueError alongside the existing RuntimeError
entry in the Google-style "Raises" section.
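The requested docstring shape might look like the following. The function name, signature, and wording are illustrative assumptions, not the MONAI source; only the ValueError/RuntimeError entries come from the review above.

```python
def example_call(data):
    """Compute per-channel image statistics and merge them into ``data``.

    Args:
        data: input dict carrying the image; may optionally carry a
            pre-computed "nda_croppeds" list with one entry per channel.

    Returns:
        dict: the input dict updated with the computed statistics.

    Raises:
        RuntimeError: if statistics cannot be computed from the input.
        ValueError: if a pre-computed ``nda_croppeds`` is not a list or tuple
            whose length matches the number of channels.
    """
    return data
```

Keeping the Raises section in sync with the validation code is what the docstring coverage check and this review comment are both after.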
…DataAnalyzer methods

Docstring coverage was at 50% because the existing test methods in TestDataAnalyzer had no docstrings. Added one-line docstrings to all 20 undocumented methods and the class itself to satisfy the 80% coverage threshold.

Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>
♻️ Duplicate comments (1)

tests/apps/test_auto3dseg.py (1)

577-599: ⚠️ Potential issue | 🟡 Minor — Restore grad mode to the original entry state in one outer finally.

Line 587 and Line 593 mutate global grad mode, but cleanup only restores to True and only for the second subcase. Use one outer try/finally and restore the original state.

Proposed fix

```diff
 def test_analyzer_grad_state_restored_after_call(self):
     """Verify ImageStats restores torch grad-enabled state on both normal and disabled entry.

     Checks that the try/finally guard correctly restores the state regardless of
     whether grad was enabled or disabled before the call.
     """
     analyzer = ImageStats(image_key="image")
     image = torch.rand(1, 10, 10, 10)
     data = {"image": MetaTensor(image)}

-    # grad enabled before call → must still be enabled after
-    torch.set_grad_enabled(True)
-    analyzer(data)
-    assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
-
-    # grad disabled before call → must still be disabled after
-    torch.set_grad_enabled(False)
+    original_grad_state = torch.is_grad_enabled()
     try:
+        # grad enabled before call → must still be enabled after
+        torch.set_grad_enabled(True)
         analyzer(data)
-        assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
+        assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
+
+        # grad disabled before call → must still be disabled after
+        torch.set_grad_enabled(False)
+        analyzer(data)
+        assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
     finally:
-        torch.set_grad_enabled(True)  # always restore for subsequent tests
+        torch.set_grad_enabled(original_grad_state)
```
🧹 Nitpick comments (1)

tests/apps/test_auto3dseg.py (1)

181-185: Docstrings are not in Google-style sections.

Most new docstrings are one-line summaries only. For modified definitions, use Google-style sections (Args, Returns, Raises) where applicable.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

Also applies to: 194-194, 211-211, 229-229, 245-245, 260-260, 276-276, 289-289, 301-301, 316-316, 338-338, 361-361, 386-386, 431-431, 445-445, 460-460, 484-484, 510-510, 536-536, 564-568, 578-582, 601-601
Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>
… ImageStats.__call__ docstring

The Raises section was missing the ValueError raised when a pre-computed nda_croppeds value fails type or length validation. Added the entry to keep the docstring consistent with the actual exception behaviour.

Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>
Problem

ImageStats.__call__ crashes with UnboundLocalError: local variable 'nda_croppeds' referenced before assignment when the caller pre-populates "nda_croppeds" in the input dict.

Root Cause

nda_croppeds was only assigned inside the if "nda_croppeds" not in d: branch, but used unconditionally at lines 267 and 279 regardless of which branch was taken.

Fix

Added the missing else branch to read the pre-computed value from the dict. Also wrapped the method body in try/finally to guarantee torch.set_grad_enabled is always restored on exit, even if an exception is raised mid-computation.

Testing

Types of changes