
feat: add bench-ai CLI for natural language bench command help#1710

Open
younis-ali wants to merge 3 commits into frappe:develop from younis-ali:feat/bench-ai

Conversation


@younis-ali younis-ali commented Apr 21, 2026

Title

feat: add bench-ai CLI for natural-language bench command help


Description

Summary

Adds an optional bench-ai console entry point that turns plain-English questions into structured guidance and suggested bench commands. It uses the existing requests dependency to call an OpenAI-compatible Chat Completions HTTP API. No new runtime dependencies.
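A minimal sketch of what the HTTP side might look like, assuming the env-variable names from the Configuration section below; the real function names, prompt structure, and key-resolution order in bench/commands/ai.py may differ:

```python
import os

import requests  # already a bench dependency; nothing new is pulled in


def ask_model(question: str, context: str) -> str:
    """Call an OpenAI-compatible Chat Completions endpoint (illustrative)."""
    base_url = os.environ.get("BENCH_AI_BASE_URL", "https://api.openai.com/v1")
    api_key = os.environ.get("OPENAI_API_KEY") or os.environ.get("BENCH_AI_API_KEY")
    resp = requests.post(
        f"{base_url}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": os.environ.get("BENCH_AI_MODEL", "gpt-4o-mini"),
            "messages": [
                {"role": "system", "content": context},
                {"role": "user", "content": question},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

Because the base URL is configurable, the same code path works against any Chat Completions-compatible server, not just api.openai.com.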

Context is built from this machine’s bench Click CLI (full native subcommand paths plus the bench --help text) and, when run inside a Frappe bench, the same framework command appendix that bench --help shows (via get_frappe_help / get_env_frappe_commands). When not in a bench, a short static framework crib plus a validation fallback still allows common commands (e.g. doctor) to be suggested conservatively.
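Collecting the full native subcommand paths from a Click group can be sketched roughly like this (the helper name is illustrative, not the one in bench/commands/ai.py):

```python
import click


def collect_command_paths(group: click.Group, prefix: str = "bench") -> list[str]:
    """Recursively walk a Click group and return full subcommand paths."""
    paths = []
    for name in sorted(group.commands):
        cmd = group.commands[name]
        path = f"{prefix} {name}"
        paths.append(path)
        if isinstance(cmd, click.Group):
            # nested groups (e.g. "bench setup nginx") get their own entries
            paths.extend(collect_command_paths(cmd, path))
    return paths
```

The resulting path list doubles as the allow-list that suggestion validation checks against.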

Suggested lines are checked against the local CLI where possible; --run refuses invalid or placeholder commands. Default behavior is print-only.
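The validation gate for --run could look something like the following sketch; the placeholder pattern and prefix matching here are assumptions about the approach, not the exact implementation:

```python
import re


def is_runnable(line: str, known_paths: set[str]) -> bool:
    """Refuse placeholder tokens and commands the local CLI does not know."""
    # reject obvious placeholders such as "<site>" or trailing "..."
    if re.search(r"<[^>]+>|\.\.\.", line):
        return False
    words = line.split()
    # accept when some prefix of the line is a known native command path,
    # so flags and arguments after the command name are allowed through
    return any(" ".join(words[:n]) in known_paths for n in range(len(words), 0, -1))
```

Anything that fails this check is still printed for the user, just never executed.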

What changed

  • New module: bench/commands/ai.py with the CLI, context builder, API client, parsing, and validation.
  • New console script in pyproject.toml: bench-ai = "bench.commands.ai:main".
  • Tests: bench/tests/test_ai_command.py (mocked HTTP, parsing, validation, Click path collection).
  • Docs: README section “bench-ai (optional)” under Basic Usage.

Configuration

Environment variables

  1. OPENAI_API_KEY or BENCH_AI_API_KEY: the OpenAI API key
  2. BENCH_AI_BASE_URL: Default https://api.openai.com/v1
  3. BENCH_AI_MODEL: Default gpt-4o-mini

How to test

pip install -e .
python -m unittest bench.tests.test_ai_command -v
export OPENAI_API_KEY=...   # optional for live test
bench-ai "back up all sites" --dry-run
bench-ai "what does bench doctor do?"

Dependencies

None added; uses the existing requests dependency (~2.32.x per pyproject.toml).

Documentation

README updated with install/setup, flags, and security note (review before --run).

Notes for reviewers

  • Guidelines mention keeping PRs very small; this feature is intentionally one cohesive tool. I’m happy to split (e.g. core CLI + README in one PR, follow-up for crib/validation refinements) if you prefer cascading PRs.
  • User-facing strings are plain-English CLI output; I did not wrap them in _() since the bench CLI generally doesn’t i18n user messages the same way Frappe apps do. Happy to adjust if project policy requires it.

Checklist

  • Tests added / updated
  • README updated
  • CI considerations: unittest module (no pytest required)

Add bench-ai console script (OpenAI-compatible API), local context from
bench --help and optional Frappe help, suggestion validation, and tests.

Signed-off-by: younis-ali <lone.younis1993@gmail.com>
Split build_context and bench_ai into helpers; BENCH_SHELL_PREFIX/BENCH_BINARY;
mock get_api_key in test_missing_api_key so CliRunner env does not leak keys.

Signed-off-by: younis-ali <lone.younis1993@gmail.com>
Use split-based extraction and a max reply length instead of DOTALL (.*?).

Signed-off-by: younis-ali <lone.younis1993@gmail.com>