
Fix #8802: read nda_croppeds from input dict when pre-computed#8809

Open
williams145 wants to merge 7 commits into Project-MONAI:dev from williams145:fix/issue-8802-image-stats-unbound-nda-croppeds

Conversation


@williams145 williams145 commented Apr 8, 2026

Problem

ImageStats.__call__ crashes with UnboundLocalError: local variable 'nda_croppeds' referenced before assignment when the caller pre-populates "nda_croppeds" in the input dict.

Root Cause

nda_croppeds was only assigned inside the if "nda_croppeds" not in d: branch, but used unconditionally at lines 267 and 279 regardless of which branch was taken.

Fix

Added the missing else branch to read the pre-computed value from the dict. Also wrapped the method body in try/finally to guarantee torch.set_grad_enabled is always restored on exit, even if an exception is raised mid-computation.
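The shape of the fix can be sketched as follows. This is a hypothetical simplification, not the actual `ImageStats.__call__` body: `get_foreground_image` is stubbed and the per-channel stats computation is a placeholder.

```python
import torch

def get_foreground_image(nda):
    # stand-in for monai.auto3dseg.utils.get_foreground_image
    return nda[nda > 0]

def image_stats_call(d, ndas):
    # capture the entry state so the finally block can restore it exactly
    prev_grad = torch.is_grad_enabled()
    torch.set_grad_enabled(False)
    try:
        if "nda_croppeds" not in d:
            nda_croppeds = [get_foreground_image(nda) for nda in ndas]
        else:
            # the missing else branch: reuse the caller's pre-computed value
            nda_croppeds = d["nda_croppeds"]
        # placeholder for the real per-channel stats computation
        d["stats"] = [int(c.numel()) for c in nda_croppeds]
        return d
    finally:
        # runs on both normal return and exception, so grad mode never leaks
        torch.set_grad_enabled(prev_grad)
```

With this shape, `nda_croppeds` is bound on every path before it is used, and the grad-mode restore cannot be skipped by an exception raised mid-computation.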

Testing

import numpy as np
import torch
from monai.auto3dseg import ImageStats

analyzer = ImageStats(image_key="image")
data = {"image": torch.rand(1, 10, 10, 10), "nda_croppeds": [np.random.rand(8, 8, 8)]}
result = analyzer(data)  # previously raised UnboundLocalError

Types of changes

  • Non-breaking change (bug fix or new feature that would not break existing functionality)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New tests added
  • Integration tests run locally
  • Quick tests run locally
  • Docstrings updated
  • Documentation updated

…mputed

ImageStats.__call__ crashed with UnboundLocalError when a caller
pre-populated "nda_croppeds" in the input dict. The variable was only
assigned in the if-branch but referenced unconditionally on both paths.

Added the missing else-branch to read the pre-computed value from the
dict, and wrapped the method body in try/finally to guarantee grad state
is always restored on exit.

Fixes Project-MONAI#8802

coderabbitai bot commented Apr 8, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f72a1385-682d-416d-81f6-4061f69ded29

📥 Commits

Reviewing files that changed from the base of the PR and between 0e3f0c4 and e19bdaf.

📒 Files selected for processing (1)
  • monai/auto3dseg/analyzer.py

📝 Walkthrough

Walkthrough

ImageStats.__call__ was refactored to remove upfront inline validation and instead perform channel extraction, optional reuse or per-channel computation of nda_croppeds, report construction/validation, and assignment to d[self.stats_name] inside a try block, with a finally that restores PyTorch's prior grad-enabled state. If nda_croppeds is provided, it must be a list/tuple whose length matches the image channel count, or a ValueError is raised. Two tests were added: one for precomputed nda_croppeds and one ensuring the grad-enabled state is restored after the call.
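The validation described in the walkthrough can be sketched as a small helper. The name and message wording here are illustrative, not the merged code:

```python
def validate_nda_croppeds(nda_croppeds, ndas):
    """Accept a pre-computed nda_croppeds only if it is a list/tuple with one entry per channel."""
    if not isinstance(nda_croppeds, (list, tuple)):
        raise ValueError(
            f"nda_croppeds must be a list or tuple, got {type(nda_croppeds).__name__}"
        )
    if len(nda_croppeds) != len(ndas):
        raise ValueError(
            f"nda_croppeds has {len(nda_croppeds)} entries "
            f"but the image has {len(ndas)} channels"
        )
    return nda_croppeds
```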

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check — ✅ Passed: The title directly references the issue being fixed (#8802) and clearly describes the main change: reading nda_croppeds from the input dict when pre-computed.
  • Description check — ✅ Passed: The description includes Problem, Root Cause, Fix, and Testing sections with a code example. It covers the core issue and solution, though the Types of changes section format deviates slightly from the template structure.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 95.83%, which is sufficient. The required threshold is 80.00%.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/auto3dseg/analyzer.py`:
- Around line 260-264: The code uses caller-supplied d["nda_croppeds"] without
validation which can yield incorrect per-channel stats; in the block handling
nda_croppeds (the if/else around "nda_croppeds" and get_foreground_image),
verify that d["nda_croppeds"] is a sequence with the same length as ndas and
that each element is a valid array-like (e.g., numpy ndarray) with expected
shape/dtype; if the check fails, fall back to recomputing nda_croppeds =
[get_foreground_image(nda) for nda in ndas]; update the branch that assigns
nda_croppeds so it validates d["nda_croppeds"] before using it and documents the
fallback behavior.

In `@tests/apps/test_auto3dseg.py`:
- Around line 554-570: The test test_analyzer_grad_state_restored_after_call
currently mutates global torch grad mode without guaranteeing restoration on
exceptions; wrap the two analyzer(data) calls in a try/finally: capture the
original state with orig = torch.is_grad_enabled(), set the required state for
each subcase with torch.set_grad_enabled(True/False), call analyzer(data),
assert the state, and in the finally restore torch.set_grad_enabled(orig) so the
global grad mode is always returned (update/remove the trailing
torch.set_grad_enabled(True) in favor of the finally restore).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 967bb49c-5419-4036-be0a-b25f2e9d6a47

📥 Commits

Reviewing files that changed from the base of the PR and between 8d39519 and 2774923.

📒 Files selected for processing (2)
  • monai/auto3dseg/analyzer.py
  • tests/apps/test_auto3dseg.py

Comment on lines +554 to +570
def test_analyzer_grad_state_restored_after_call(self):
# Verify that ImageStats.__call__ always restores the grad-enabled state it found
# on entry, regardless of which state that was.
analyzer = ImageStats(image_key="image")
image = torch.rand(1, 10, 10, 10)
data = {"image": MetaTensor(image)}

# grad enabled before call → must still be enabled after
torch.set_grad_enabled(True)
analyzer(data)
assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"

# grad disabled before call → must still be disabled after
torch.set_grad_enabled(False)
analyzer(data)
assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
torch.set_grad_enabled(True) # restore for subsequent tests


⚠️ Potential issue | 🟡 Minor

Make grad-state cleanup exception-safe in the test.

The test mutates global grad mode; restore the original state in finally to prevent cross-test leakage on failures.

Proposed fix
     def test_analyzer_grad_state_restored_after_call(self):
         # Verify that ImageStats.__call__ always restores the grad-enabled state it found
         # on entry, regardless of which state that was.
         analyzer = ImageStats(image_key="image")
         image = torch.rand(1, 10, 10, 10)
         data = {"image": MetaTensor(image)}
-
-        # grad enabled before call → must still be enabled after
-        torch.set_grad_enabled(True)
-        analyzer(data)
-        assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
-
-        # grad disabled before call → must still be disabled after
-        torch.set_grad_enabled(False)
-        analyzer(data)
-        assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
-        torch.set_grad_enabled(True)  # restore for subsequent tests
+        original_grad_state = torch.is_grad_enabled()
+        try:
+            # grad enabled before call → must still be enabled after
+            torch.set_grad_enabled(True)
+            analyzer(data)
+            assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
+
+            # grad disabled before call → must still be disabled after
+            torch.set_grad_enabled(False)
+            analyzer(data)
+            assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
+        finally:
+            torch.set_grad_enabled(original_grad_state)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/apps/test_auto3dseg.py` around lines 554 - 570, The test
test_analyzer_grad_state_restored_after_call currently mutates global torch grad
mode without guaranteeing restoration on exceptions; wrap the two analyzer(data)
calls in a try/finally: capture the original state with orig =
torch.is_grad_enabled(), set the required state for each subcase with
torch.set_grad_enabled(True/False), call analyzer(data), assert the state, and
in the finally restore torch.set_grad_enabled(orig) so the global grad mode is
always returned (update/remove the trailing torch.set_grad_enabled(True) in
favor of the finally restore).


@atharvajoshi01 atharvajoshi01 left a comment


Two fixes in one: reading precomputed nda_croppeds from the dict (the actual bug) and wrapping the whole block in try/finally to restore grad state on exception. Both are correct. The else branch on line ~261 was the missing piece from the original code.

…h@gmail.com>

I, UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>, hereby add my Signed-off-by to this commit: 2774923

Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>
…and add test docstrings

- Validate pre-computed nda_croppeds is a list with one entry per channel,
  raising ValueError with a clear message if not
- Convert inline comments to docstrings on test methods to satisfy docstring
  coverage CI check
- Wrap grad-disabled test leg in try/finally so global state is always restored

Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>
@williams145
Contributor Author

The quick-py3 (macOS-latest) failure is a pre-existing infrastructure issue: pytype 2024.4.11 fails to install on the macOS runner due to a missing pybind11 dependency, which is unrelated to the changes in this PR. The same failure appears on other open PRs against dev for the same reason.

@williams145
Contributor Author

Note: PR #8803 by @bluehyena covers overlapping ground on issue #8802. The key difference in this PR is explicit validation of the pre-computed nda_croppeds value: a wrong type or mismatched channel count raises a clear ValueError rather than propagating silently to a confusing downstream error.

…t in ImageStats.__call__

The docstring was positioned after input validation code that was added
in a prior commit, preventing Python from recognising it as a docstring
and causing the docstring coverage check to fail at 40%.

Moves the docstring to the first statement of __call__ and removes the
out-of-scope input validation block. The core fix (else branch reading
nda_croppeds from the input dict with type/length validation, and the
try/finally grad-state guard) is unchanged.

Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/auto3dseg/analyzer.py`:
- Around line 248-253: Update the function/method docstring that validates
nda_croppeds/ndas to include ValueError in the Raises section: the code now
raises ValueError when nda_croppeds is not a list/tuple matching the length of
ndas (see variables nda_croppeds and ndas in the validation block), so add a
short entry documenting this ValueError alongside the existing RuntimeError
entry in the Google-style "Raises" section.
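A Google-style docstring carrying both entries in its Raises section might look like the following. The wording is illustrative, not the merged docstring:

```python
def example_call(d):
    """Compute per-channel image statistics.

    Args:
        d: input dict; may optionally carry a pre-computed "nda_croppeds" entry.

    Returns:
        The input dict with the stats report added under the stats key.

    Raises:
        RuntimeError: if the generated statistics report fails validation.
        ValueError: if a pre-computed "nda_croppeds" is not a list/tuple with
            one entry per image channel.
    """
    return d
```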

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 61824ad3-8244-4f80-b20f-a13365f70ad9

📥 Commits

Reviewing files that changed from the base of the PR and between e80119c and 8a568c9.

📒 Files selected for processing (1)
  • monai/auto3dseg/analyzer.py

…DataAnalyzer methods

Docstring coverage was at 50% because the existing test methods in
TestDataAnalyzer had no docstrings. Added one-line docstrings to all
20 undocumented methods and the class itself to satisfy the 80%
coverage threshold.

Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
tests/apps/test_auto3dseg.py (1)

577-599: ⚠️ Potential issue | 🟡 Minor

Restore grad mode to the original entry state in one outer finally.

Line 587 and Line 593 mutate global grad mode, but cleanup only restores to True and only for the second subcase. Use one outer try/finally and restore the original state.

Proposed fix
     def test_analyzer_grad_state_restored_after_call(self):
         """Verify ImageStats restores torch grad-enabled state on both normal and disabled entry.

         Checks that the try/finally guard correctly restores the state regardless of
         whether grad was enabled or disabled before the call.
         """
         analyzer = ImageStats(image_key="image")
         image = torch.rand(1, 10, 10, 10)
         data = {"image": MetaTensor(image)}

-        # grad enabled before call → must still be enabled after
-        torch.set_grad_enabled(True)
-        analyzer(data)
-        assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
-
-        # grad disabled before call → must still be disabled after
-        torch.set_grad_enabled(False)
+        original_grad_state = torch.is_grad_enabled()
         try:
+            # grad enabled before call → must still be enabled after
+            torch.set_grad_enabled(True)
             analyzer(data)
-            assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
+            assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
+
+            # grad disabled before call → must still be disabled after
+            torch.set_grad_enabled(False)
+            analyzer(data)
+            assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
         finally:
-            torch.set_grad_enabled(True)  # always restore for subsequent tests
+            torch.set_grad_enabled(original_grad_state)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/apps/test_auto3dseg.py` around lines 577 - 599, The test
test_analyzer_grad_state_restored_after_call mutates global grad mode twice and
only restores to True in the inner finally; instead capture the original grad
state at the start (orig = torch.is_grad_enabled()), wrap both subcases in a
single try/finally, run the analyzer calls (analyzer(data)) with appropriate
torch.set_grad_enabled(True/False) for each subcase, and always restore
torch.set_grad_enabled(orig) in the outer finally so the global grad state is
returned to its original value; locate the test and modify its logic around
ImageStats analyzer invocation and torch.set_grad_enabled/is_grad_enabled usage.
🧹 Nitpick comments (1)
tests/apps/test_auto3dseg.py (1)

181-185: Docstrings are not in Google-style sections.

Most new docstrings are one-line summaries only. For modified definitions, use Google-style sections (Args, Returns, Raises) where applicable.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

Also applies to: 194-194, 211-211, 229-229, 245-245, 260-260, 276-276, 289-289, 301-301, 316-316, 338-338, 361-361, 386-386, 431-431, 445-445, 460-460, 484-484, 510-510, 536-536, 564-568, 578-582, 601-601

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/apps/test_auto3dseg.py` around lines 181 - 185, Several test methods
(starting with setUp in tests/apps/test_auto3dseg.py and the other modified
definitions noted) have one-line docstrings; update each to Google-style
docstrings with proper sections (Args, Returns, Raises) as applicable. For setUp
and other test helper methods (e.g., methods around lines 194, 211, 229, etc.),
add an Args section describing parameters (if any), a Returns section only if
the function returns a value, and a Raises section for expected exceptions; keep
the initial one-line summary and then include the detailed sections following
Google docstring conventions to satisfy the coding guidelines. Ensure each
docstring references the function name (setUp and the other listed test
functions) and documents variables, return values, and possible exceptions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@tests/apps/test_auto3dseg.py`:
- Around line 577-599: The test test_analyzer_grad_state_restored_after_call
mutates global grad mode twice and only restores to True in the inner finally;
instead capture the original grad state at the start (orig =
torch.is_grad_enabled()), wrap both subcases in a single try/finally, run the
analyzer calls (analyzer(data)) with appropriate
torch.set_grad_enabled(True/False) for each subcase, and always restore
torch.set_grad_enabled(orig) in the outer finally so the global grad state is
returned to its original value; locate the test and modify its logic around
ImageStats analyzer invocation and torch.set_grad_enabled/is_grad_enabled usage.

---

Nitpick comments:
In `@tests/apps/test_auto3dseg.py`:
- Around line 181-185: Several test methods (starting with setUp in
tests/apps/test_auto3dseg.py and the other modified definitions noted) have
one-line docstrings; update each to Google-style docstrings with proper sections
(Args, Returns, Raises) as applicable. For setUp and other test helper methods
(e.g., methods around lines 194, 211, 229, etc.), add an Args section describing
parameters (if any), a Returns section only if the function returns a value, and
a Raises section for expected exceptions; keep the initial one-line summary and
then include the detailed sections following Google docstring conventions to
satisfy the coding guidelines. Ensure each docstring references the function
name (setUp and the other listed test functions) and documents variables, return
values, and possible exceptions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 61cade9e-9b1f-4191-9cfe-f8ad2e4d88bf

📥 Commits

Reviewing files that changed from the base of the PR and between 8a568c9 and 0e3f0c4.

📒 Files selected for processing (1)
  • tests/apps/test_auto3dseg.py

Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>
… ImageStats.__call__ docstring

The Raises section was missing the ValueError raised when a pre-computed
nda_croppeds value fails type or length validation. Added the entry to
keep the docstring consistent with the actual exception behaviour.

Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>