This repository contains the code and data for reproducing the results in:
**CodeStruct: Code Agents over Structured Action Spaces**
Myeongsoo Kim, Joe Hsu, Dingmin Wang, Shweta Garg, Varun Kumar, Murali Krishna Ramanathan
ACL 2026 (Main Conference)
This code is released solely for academic and scientific reproducibility, in support of the methods and findings described in the associated publication. Pull requests are not accepted, so that the code remains exactly as it was used in the paper.
CodeStruct reframes the codebase as a structured action space where LLM-based agents operate on named AST entities rather than text spans. It provides two structure-aware primitives:
- **readCode**: retrieves complete syntactic units (functions, classes) via AST selectors, with fuzzy matching and adaptive file summarization
- **editCode**: applies syntax-validated transformations (insert, replace, remove) to named program entities, with automatic indentation and syntax checking
Evaluated on SWE-Bench Verified (500 tasks) and CodeAssistBench (135 tasks) across six LLMs, CodeStruct improves Pass@1 by 1.2–5.0 percentage points while reducing token consumption by 12–38% for most models.
```
CodeStruct/
├── setup.sh                  # One-command setup (creates venv, installs deps, runs tests)
├── requirements.txt          # Pinned dependency versions
├── REPRODUCE.md              # Detailed reproduction instructions
├── LICENSE                   # CC-BY-NC-4.0 license
├── NOTICE                    # Third-party attribution
├── SWE-agent/                # SWE-agent with CodeStruct tools
│   ├── config/               # Experiment configurations
│   │   ├── default_with_readcode_batch.yaml  # CodeStruct config
│   │   └── default.yaml      # Baseline config
│   ├── tools/
│   │   ├── ast_read/         # readCode — AST-based code reader
│   │   ├── ast_edit/         # editCode — AST-based code writer
│   │   └── str_replace/      # String replacement (supporting tool)
│   ├── sweagent/             # SWE-agent core package
│   └── pyproject.toml        # Package setup
├── codeassistbench/          # CodeAssistBench evaluation framework
│   ├── cab_evaluation/       # Agents, prompts, workflows
│   │   └── utils/openhands_tools.py  # OpenHands integration of AST tools
│   ├── dataset/              # 135 benchmark tasks
│   └── run_experiments.sh    # Experiment runner
├── results/                  # Pre-computed experiment results
│   ├── SWE-Bench/            # 12 prediction files (6 models × 2 configs)
│   └── CodeAssistBench/      # 12 result files (6 models × 2 configs)
└── paper/                    # Paper source (main.tex)
```
```bash
# Clone the repository
git clone <repo-url>
cd CodeStruct

# Run automated setup (creates Python 3.11 venv, installs everything, tests tools)
chmod +x setup.sh
./setup.sh

# Activate the environment
source venv/bin/activate
```

The setup script will:
- Find Python >= 3.11 on your system
- Create a virtual environment at `venv/`
- Install tree-sitter with pinned versions (0.21.3 + tree-sitter-languages 1.10.2)
- Install SWE-agent v1.1.0 and swebench
- Verify all components, including AST tool tests
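As a quick sanity check after setup, the pinned versions can be re-verified with a small stdlib-only snippet. This is not part of `setup.sh`; the `check_pins` helper is illustrative, and the package names follow the PyPI pins listed above:

```python
from importlib import metadata

# Pinned versions from the setup script above (assumed PyPI distribution names)
EXPECTED = {"tree-sitter": "0.21.3", "tree-sitter-languages": "1.10.2"}

def check_pins(expected: dict[str, str]) -> list[str]:
    """Return one status line per pinned package; tolerate missing installs."""
    report = []
    for pkg, want in expected.items():
        try:
            have = metadata.version(pkg)
            status = "OK" if have == want else f"MISMATCH (found {have})"
        except metadata.PackageNotFoundError:
            status = "not installed"
        report.append(f"{pkg}: expected {want} -> {status}")
    return report

for line in check_pins(EXPECTED):
    print(line)
```

Any `MISMATCH` line suggests the environment has drifted from `requirements.txt` and results may not reproduce exactly.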
```bash
# Set API key
export OPENAI_API_KEY=sk-your-key-here

# Run on 5 SWE-Bench instances (quick test, ~10 min)
cd SWE-agent
python -m sweagent run-batch \
    --instances.type swe_bench \
    --instances.subset verified \
    --instances.split test \
    --config config/default_with_readcode_batch.yaml \
    --agent.model.name gpt-5-mini \
    --agent.model.per_instance_cost_limit 3.00 \
    --agent.model.top_p null \
    --agent.model.temperature 1 \
    --instances.slice :5 \
    --num_workers 5
```

Then evaluate the generated predictions with the SWE-bench harness:

```bash
python -m swebench.harness.run_evaluation \
    --predictions_path trajectories/$USER/*/preds.json \
    -d princeton-nlp/SWE-bench_Verified \
    -s test \
    --run_id test_run \
    --max_workers 15
```

For full reproduction instructions across all models and benchmarks, see REPRODUCE.md.
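Before launching the evaluation harness, it can help to sanity-check `preds.json`. The sketch below assumes the standard SWE-bench predictions layout (a JSON object keyed by `instance_id`, with `model_name_or_path` and `model_patch` fields); the sample entries are illustrative, not real outputs:

```python
import json

# Illustrative predictions in the assumed SWE-bench format
# (normally loaded from trajectories/<user>/<run>/preds.json).
preds = {
    "astropy__astropy-12907": {
        "instance_id": "astropy__astropy-12907",
        "model_name_or_path": "gpt-5-mini",
        "model_patch": "diff --git a/astropy/modeling/separable.py ...",
    },
    "django__django-11099": {
        "instance_id": "django__django-11099",
        "model_name_or_path": "gpt-5-mini",
        "model_patch": "",  # empty patch: the agent produced no edit
    },
}

# Instances with an empty model_patch cannot resolve their task
nonempty = [i for i, p in preds.items() if p.get("model_patch")]
print(f"{len(nonempty)}/{len(preds)} instances have a non-empty patch")
```

Instances with an empty patch will count against Pass@1, so a low non-empty ratio on the 5-instance smoke test is worth investigating before a full run.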
All results reported in the paper are included in the results/ directory.
**SWE-Bench Verified (Pass@1):**

| Model | Baseline Pass@1 | CodeStruct Pass@1 | Δ (pp) |
|---|---|---|---|
| GPT-5 | 66.0% | 67.2% | +1.2 |
| GPT-5-mini | 60.4% | 62.0% | +1.6 |
| GPT-5-nano | 19.6% | 40.4% | +20.8 |
| Qwen3-Coder-480B | 61.2% | 66.2% | +5.0 |
| Qwen3-32B | 14.8% | 16.0% | +1.2 |
| Qwen3-8B | 13.2% | 13.0% | -0.2 |
**CodeAssistBench (accuracy):**

| Model | Baseline Accuracy | CodeStruct Accuracy | Δ (pp) |
|---|---|---|---|
| GPT-5 | 53.3% | 54.1% | +0.8 |
| GPT-5-mini | 51.1% | 51.9% | +0.8 |
| GPT-5-nano | 46.7% | 48.1% | +1.4 |
| Qwen3-Coder-480B | 31.1% | 31.9% | +0.8 |
| Qwen3-32B | 15.6% | 20.0% | +4.4 |
| Qwen3-8B | 13.3% | 14.1% | +0.8 |
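The Δ column is simply the difference between the two accuracy columns, in percentage points. As a quick worked check over the CodeAssistBench rows above (the `deltas` helper is just for this check):

```python
# (model, baseline accuracy %, CodeStruct accuracy %) from the table above
rows = [
    ("GPT-5", 53.3, 54.1),
    ("GPT-5-mini", 51.1, 51.9),
    ("GPT-5-nano", 46.7, 48.1),
    ("Qwen3-Coder-480B", 31.1, 31.9),
    ("Qwen3-32B", 15.6, 20.0),
    ("Qwen3-8B", 13.3, 14.1),
]

def deltas(rows):
    """Difference in percentage points, rounded to one decimal."""
    return [round(ours - base, 1) for _, base, ours in rows]

for (model, _, _), d in zip(rows, deltas(rows)):
    print(f"{model}: {d:+.1f} pp")
```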
- Python >= 3.11
- Docker (for SWE-Bench evaluation)
- 16GB+ RAM recommended
- API keys (OpenAI or AWS Bedrock)
- No GPU required
If you find this work useful, please cite:
```bibtex
@misc{kim2026codestructcodeagentsstructured,
  title={CODESTRUCT: Code Agents over Structured Action Spaces},
  author={Myeongsoo Kim and Joe Hsu and Dingmin Wang and Shweta Garg and Varun Kumar and Murali Krishna Ramanathan},
  year={2026},
  eprint={2604.05407},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2604.05407},
}
```

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC-BY-NC-4.0).
See CONTRIBUTING.md for more information.