
Fix continue for sphere #513

Open

dodu94 wants to merge 5 commits into `developing` from `fix-continue-for-Sphere`

Conversation

@dodu94
Member

@dodu94 dodu94 commented Apr 29, 2026

Fix #507

The continue option for JADE now also works for the Sphere-like benchmarks. Moreover, the capability to run a continue with a global job (previously broken for all benchmark types) has been added.

Suitable CI tests have been added, and a manual test using the GLOBAL_JOB option has also been performed.
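For context, each continue command is paired with the folder it must run in. Below is a minimal sketch of that tuple shape; the `build_continue_command` helper and the exact MCNP tokens (`c`, `i=`, `r=`) are illustrative assumptions, not JADE's actual strings:

```python
from pathlib import Path

def build_continue_command(run_folder: Path) -> tuple[list[str], Path]:
    # A continue run reuses the runtpe file left in the run folder; the tuple
    # pairs the command tokens with the working directory to execute them in.
    name = run_folder.name
    return (["mcnp6", "c", f"i={name}.i", f"r={name}r"], run_folder)

cmd, cwd = build_continue_command(Path("Sphere_m101"))
print(cmd, cwd)
```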

Summary by CodeRabbit

  • New Features

    • Continued-run processing can now launch aggregated global job submissions when configured.
  • Improvements

    • Aggregates continuation commands for multi-step benchmarks for more reliable execution.
    • Better handling of global job mode and test-mode command generation.
    • Added mock input/testing support to allow end-to-end continuation tests without real input files.
    • Expanded tests and sample benchmark inputs for Sphere continuation scenarios.

Davide Laghi and others added 3 commits April 29, 2026 10:02
Co-authored-by: Copilot <copilot@github.com>
@coderabbitai
Contributor

coderabbitai Bot commented Apr 29, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 33683ff1-b6a6-4588-83a9-191c79e80ae8

📥 Commits

Reviewing files that changed from the base of the PR and between 1f5be40 and 245e825.

📒 Files selected for processing (3)
  • src/jade/run/benchmark.py
  • tests/dummy_structure/simulations/_mcnp_-_FENDL 3.2c_/Dummy_continue/Dummy_continue2/Dummy2.i
  • tests/run/test_benchmark.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/jade/run/benchmark.py

Walkthrough

Refactors continue-run to produce aggregated, structured command tuples (command list + working folder), using mock inputs to avoid template I/O. JadeApp now aggregates those tuples and, when in RunMode.GLOBAL_JOB, calls launch_global_jobs(commands, env_vars, test=testing) and returns the submission result.

Changes

App Continue-Run Handler

| Layer / File(s) | Summary |
| --- | --- |
| Continue-run aggregator & launcher<br>`src/jade/app/app.py` | `JadeApp.continue_run` now treats each `benchmark.continue_run(...)` result as an iterable and uses `commands.extend(...)`. If `run_mode == RunMode.GLOBAL_JOB` it calls `launch_global_jobs(commands, self.run_cfg.env_vars, test=testing)` and returns that submission result; otherwise it returns the aggregated commands list. |
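The aggregation and dispatch described above can be sketched as follows. `RunMode`, `launch_global_jobs`, and the benchmark objects below are simplified stand-ins based on this summary, not JADE's real classes:

```python
from enum import Enum, auto
from pathlib import Path

class RunMode(Enum):
    SERIAL = auto()
    GLOBAL_JOB = auto()

def launch_global_jobs(commands, env_vars, test=False):
    # Stand-in: in test mode, just report the aggregated submission payload.
    return {"submitted": len(commands), "test": test}

def continue_run(benchmarks, run_mode, env_vars, testing=False):
    # Each benchmark yields an iterable of (command list, working folder)
    # tuples; extend() flattens them into one aggregated list.
    commands: list[tuple[list[str], Path]] = []
    for bench in benchmarks:
        commands.extend(bench.continue_run(testing=testing))
    if run_mode == RunMode.GLOBAL_JOB:
        # One aggregated submission instead of one job per benchmark.
        return launch_global_jobs(commands, env_vars, test=testing)
    return commands

class FakeBenchmark:
    def __init__(self, name):
        self.name = name
    def continue_run(self, testing=False):
        return [(["mcnp6", f"i={self.name}.i"], Path(self.name))]

result = continue_run(
    [FakeBenchmark("Sphere_m101"), FakeBenchmark("Sphere_dummy1")],
    RunMode.GLOBAL_JOB, env_vars={}, testing=True,
)
print(result)
```

The key design point is that the aggregator returns either the raw command list or the global-job submission result, so callers and tests can inspect whichever form applies.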

Benchmark core, SingleRun factory, and tests

| Layer / File(s) | Summary |
| --- | --- |
| Continue-run command shape & generation<br>`src/jade/run/benchmark.py` | `BenchmarkRun.continue_run` and `_get_continue_run_command` were refactored to return structured per-run tuples (command list, folder). They now create `SingleRun` instances with `mock_input=True` and pass `testing` through, so per-run outputs are produced without reading template folders. |
| `SingleRunFactory` & `MockInput`<br>`src/jade/run/benchmark.py` | `SingleRunFactory.create(..., mock_input: bool = False)` added; when `mock_input=True` a new `MockInput` is used (infers the `.i` name or folder basename) with no-op `translate`/`set_nps` to avoid filesystem access. |
| Job submission behavior in `SingleRun.run`<br>`src/jade/run/benchmark.py` | `SingleRun.run` in `JOB_SUBMISSION` mode now branches on `test`: testing runs call `_submit_job(..., test=test)` and return the generated job script/command; non-testing submissions call `_submit_job` without `test` and do not return the script. |
| Tests & fixtures updated<br>`tests/app/test_app.py`, `tests/run/test_benchmark.py`, `tests/dummy_structure/.../Sphere/*` | App tests updated to expect successful `continue_run(testing=True)` and to validate SBATCH/script content; new Sphere dummy inputs/metadata added; benchmark tests updated to assert the new tuple-shaped command outputs (accessing nested command elements). |
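A minimal sketch of a `MockInput` along these lines, with the name inferred from an existing `.i` file (falling back to the folder basename) and no-op `translate`/`set_nps`; the class body is an assumption based on the summary above, not JADE's code:

```python
import tempfile
from pathlib import Path

class MockInput:
    def __init__(self, template_folder):
        folder = Path(template_folder)
        self.template_folder = folder
        inputs = sorted(folder.glob("*.i"))
        self.inp = inputs[0] if inputs else None
        # Prefer the real input stem so continue commands target an existing file.
        self.name = self.inp.stem if self.inp else folder.name

    def translate(self, *args, **kwargs):
        pass  # no-op: nothing to translate for a continue run

    def set_nps(self, *args, **kwargs):
        pass  # no-op: no input rewriting, so no filesystem access is needed

with tempfile.TemporaryDirectory() as tmp:
    run_dir = Path(tmp) / "Sphere_m101"
    run_dir.mkdir()
    (run_dir / "Sphere_M101_.i").write_text("dummy input")
    mock = MockInput(run_dir)
    print(mock.name)  # stem of the actual input file, not the folder name
```

This matters for Sphere-style generated cases, where the run folder name (`Sphere_m101`) does not match the input file stem (`Sphere_M101_`).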

Sequence Diagram

```mermaid
sequenceDiagram
    participant App as JadeApp
    participant BRun as BenchmarkRun
    participant Factory as SingleRunFactory
    participant MockInp as MockInput
    participant SRun as SingleRun
    participant JobMgr as JobLauncher

    App->>BRun: continue_run(testing)
    activate BRun
    BRun->>BRun: _get_continue_run_command(..., testing)
    BRun->>Factory: create(..., mock_input=True)
    activate Factory
    Factory->>MockInp: instantiate (no file I/O)
    Factory-->>BRun: SingleRun
    deactivate Factory
    BRun->>SRun: run(env_vars, folder, continue_run=True, test=testing)
    activate SRun
    SRun-->>BRun: list[tuple[list[str], Path]]
    deactivate SRun
    BRun-->>App: aggregated commands (list of tuples)
    deactivate BRun

    App->>App: commands.extend(each_result)
    alt RunMode == GLOBAL_JOB
        App->>JobMgr: launch_global_jobs(commands, env_vars, test=testing)
        activate JobMgr
        JobMgr-->>App: submission result
        deactivate JobMgr
    else
        App-->>App: return aggregated commands
    end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Possibly related PRs

  • JADE-V-V/JADE#462: Related refactor of src/jade/run/benchmark.py and job-launching mechanics overlapping continue-run and global-job handling.
  • JADE-V-V/JADE#458: Related changes to continue-run aggregation and GLOBAL_JOB submission invocation.

Suggested reviewers

  • alexvalentine94

Poem

🐰 I hop through mocks and stubbed delight,
No template folders in my sight.
Commands now gather, neat and bright,
GLOBAL jobs launched into the night. 🚀

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 46.67%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Title check | ❓ Inconclusive | The title 'Fix continue for sphere' is vague and generic; it references the bug area but lacks specificity about what was changed or how it was fixed. | Consider a more descriptive title that explains the fix, such as 'Refactor continue_run to use MockInput for template-independent behavior' or 'Fix continue_run to avoid accessing benchmark_templates directory'. |
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Linked Issues check | ✅ Passed | The PR successfully addresses #507 by refactoring continue_run to use MockInput, eliminating the need to access benchmark_templates and allowing continues to work without real template files. |
| Out of Scope Changes check | ✅ Passed | All changes directly support the core fix: MockInput enables template-independent continues, refactored methods properly return structured data for caller flexibility, and tests verify the fix works for both standard and GLOBAL_JOB modes. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@codecov

codecov Bot commented Apr 29, 2026

Codecov Report

❌ Patch coverage is 81.39535% with 8 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/jade/run/benchmark.py | 80.00% | 8 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
| --- | --- |
| src/jade/app/app.py | 82.69% <100.00%> (+0.16%) ⬆️ |
| src/jade/run/benchmark.py | 86.29% <80.00%> (-0.71%) ⬇️ |

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/jade/run/benchmark.py (1)

482-483: ⚠️ Potential issue | 🟡 Minor

Stabilize continue-run ordering.

os.listdir() is filesystem-order dependent, so the returned command list and aggregated global-job script order are nondeterministic. That makes the new Sphere tests flaky and the generated submission scripts harder to reason about.

🛠️ Suggested fix

```diff
-        for single_run_folder in os.listdir(benchmark_root):
+        for single_run_folder in sorted(os.listdir(benchmark_root)):
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/jade/run/benchmark.py` around lines 482 - 483, The loop over
os.listdir(benchmark_root) yields non-deterministic ordering; change the
iteration to use a stable, deterministic order (e.g., iterate over
sorted(os.listdir(benchmark_root)) or otherwise sort the entries) so that
single_run_folder and derived single_run_root are processed consistently and the
aggregated global-job script order is stable; update the for loop that currently
reads "for single_run_folder in os.listdir(benchmark_root):" to iterate over a
sorted list instead.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/jade/run/benchmark.py`:
- Around line 181-199: The JOB_SUBMISSION branch is incorrectly guarded by
continue_run so normal BenchmarkRun runs never call _submit_job; remove the
continue_run check so that when env_vars.run_mode == RunMode.JOB_SUBMISSION you
always call self._submit_job(...) for the regular path, and preserve the
existing test behavior (when test is True call _submit_job(..., test=test) and
return the command). Update the conditional logic around env_vars.run_mode,
_submit_job, continue_run, and test (used by SingleRun.run) to ensure job
submission occurs for non-test runs as well.
- Around line 415-418: The constructor currently sets self.name from the
template folder basename which is wrong for MCNP continues; instead locate the
actual MCNP input file in template_folder (e.g., search for a file with
extension ".i" or case-insensitive variants), set self.inp to that input Path,
and derive self.name from self.inp.stem so generated continue commands target
the real input file; update the __init__ logic that uses self.template_folder,
self.inp, and self.name accordingly.

In `@tests/dummy_structure/simulations/_mcnp_-_continue`
sphere_/Sphere/Sphere_m101/metadata.json:
- Around line 2-3: The metadata field benchmark_name in the Sphere fixture is
incorrect (currently "Oktavian"); update the benchmark_name value in the
Sphere_m101 fixture's metadata.json so it correctly identifies the Sphere
benchmark (replace "Oktavian" with the expected "Sphere" or the project's
canonical benchmark name), leaving other fields like benchmark_version untouched
to ensure the fixture matches its directory/intent.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 661af19f-e152-43a8-90c1-fb3d5070dbeb

📥 Commits

Reviewing files that changed from the base of the PR and between ba664bd and 4960d76.

📒 Files selected for processing (8)
  • src/jade/app/app.py
  • src/jade/run/benchmark.py
  • tests/app/test_app.py
  • tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_dummy1/Sphere_dummy1.i
  • tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_dummy1/metadata.json
  • tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_m101/Sphere_M101_.i
  • tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_m101/metadata.json
  • tests/run/test_benchmark.py

Comment thread: src/jade/run/benchmark.py
Comment thread: src/jade/run/benchmark.py (Outdated)
Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
src/jade/run/benchmark.py (1)

412-416: ⚠️ Potential issue | 🟠 Major

Derive MockInput.name from the real input file, not the folder.

This still assumes the run directory name matches the MCNP/D1S input stem. For Sphere-style generated cases that is not guaranteed, so continue can still emit i=<folder>.i for a file that does not exist.

🛠️ Suggested fix

```diff
 class MockInput:
@@
     def __init__(self, template_folder: PathLike, lib: Library):
-        self.template_folder = template_folder
-        self.inp = None
-        self.name = os.path.basename(template_folder)
+        folder = Path(template_folder)
+        self.template_folder = folder
+        input_files = sorted([*folder.glob("*.i"), *folder.glob("*.I")])
+        self.inp = input_files[0] if input_files else None
+        self.name = self.inp.stem if self.inp is not None else folder.name
         self.lib = lib
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/jade/run/benchmark.py` around lines 412 - 416, The constructor for
MockInput sets self.name to the template folder basename which can be wrong for
Sphere-style cases; change MockInput.__init__ to derive self.name from the
actual input file stem (use Path(self.inp).stem or locate the MCNP/D1S input
file inside template_folder and use its stem) instead of
os.path.basename(template_folder), and ensure self.inp is set/checked before
deriving the name so continue emits the correct i=<file>.i value.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4d331242-a678-4c8d-a930-77c1b794d8f3

📥 Commits

Reviewing files that changed from the base of the PR and between 4960d76 and 1f5be40.

📒 Files selected for processing (1)
  • src/jade/run/benchmark.py

@dodu94 dodu94 requested a review from alexvalentine94 April 29, 2026 15:05
@dodu94
Member Author

dodu94 commented Apr 29, 2026

The missing coverage is related to edge cases that are not worrisome.


Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[BUG] - OpenMC Sphere continue run issue

1 participant