Two test modes are provided.
| Mode | VMs | Speed | Use case |
|---|---|---|---|
| Single-VM (default) | 1 | fast | quick smoke test, local iteration |
| Multi-VM (control / gateway / agent) | 3 | slower | realistic deployment, cross-host WireGuard, failover experiments |
- Windows 11 + Hyper-V enabled
- Vagrant (`winget install Hashicorp.Vagrant`)
- For multi-VM: `bash`, `curl`, `jq`, `openssl` on the host (Git Bash or WSL works)
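A quick preflight sketch for those host tools. The tool list mirrors the prerequisites above; the loop itself is illustrative and not part of the repo's scripts:

```shell
# Preflight sketch: report which of the multi-VM host tools are on PATH.
# The tool list comes from the prerequisites above; everything else here
# is illustrative, not part of the test scripts.
for tool in bash curl jq openssl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```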
One Ubuntu 24.04 VM runs Control + Gateway + Agent + origin all together.
```
cd C:\Users\neo\GitHub\kakuremichi-org\test
vagrant up --provider=hyperv default
vagrant ssh default
```

Then inside the VM:

```
cd /kakuremichi/test && sudo bash run-full-test.sh
```

12 Phases:
| # | What |
|---|---|
| 1 | Go build + -race unit tests (gateway & agent) |
| 2 | Control server (dev:custom, Next.js + WS on :3000) |
| 3 | Admin bootstrap via env or /setup → login → mint admin PAT → register Gateway / Agent / Tunnel with Bearer auth |
| 4 | Create wg0 kernel interface + self-signed TLS cert |
| 5 | Start Agent |
| 6 | Start Gateway (with TLS enabled from the start) |
| 7 | WireGuard tunnel verification (peer listing, IP, ping) |
| 8 | Inbound HTTP proxy (Host: test.local → origin) |
| 9 | Exit Node HTTP + SOCKS5 proxy |
| 10 | Inbound HTTPS via the same cert |
| 11 | Auth/authz (401/403/409 matrix, logout) |
| 12 | DB integrity (FK cascade, concurrent tunnel creation, subnet UNIQUE) |
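Phase 3's bootstrap sequence boils down to a chain of host-side curl calls. The sketch below is hedged: the endpoint paths, JSON field names, and credentials are all stand-ins, not the real Control API. `DRY_RUN=1` (the default here) prints each request instead of sending it, so the shape of the flow can be inspected without a running Control server:

```shell
#!/usr/bin/env bash
# Hedged sketch of the Phase 3 flow: login -> mint admin PAT -> register
# Gateway / Agent / Tunnel with Bearer auth. All endpoint paths and JSON
# fields are ASSUMED placeholders, not the real Control API.
set -euo pipefail
CONTROL_URL=${CONTROL_URL:-http://127.0.0.1:3000}
DRY_RUN=${DRY_RUN:-1}
PAT=${PAT:-}

api() { # usage: api METHOD PATH [JSON_BODY]
  if [[ "$DRY_RUN" == 1 ]]; then
    # Dry run: show the request instead of sending it.
    echo "curl -X $1 $CONTROL_URL$2${3:+ -d $3}"
  else
    curl -fsS -X "$1" -H "Authorization: Bearer $PAT" \
         -H 'Content-Type: application/json' ${3:+-d "$3"} "$CONTROL_URL$2"
  fi
}

api POST /api/auth/login '{"email":"admin@example.com"}'
api POST /api/tokens     '{"name":"test-pat"}'
api POST /api/gateways   '{"name":"gw1","publicIp":"172.28.112.5"}'
api POST /api/agents     '{"name":"agent1"}'
api POST /api/tunnels    '{"gateway":"gw1","agent":"agent1"}'
```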
Three separate VMs, host-side orchestration. Gateway and Agent genuinely cross the network boundary over WireGuard.
One command from an elevated (Administrator) PowerShell:
cd C:\Users\neo\GitHub\kakuremichi-org\test
.\Run-MultiVmTest.ps1 -Up-Up does vagrant up first (skip it on subsequent runs). Per-VM logs
(control / gateway / agent / origin) are dumped into test\logs\ at the
end so you can inspect without another vagrant ssh.
Other flags:
- `-SkipBuild`: skip Go build + unit tests (for fast iteration)
Underneath, the script drives the exact same flow as the bash orchestrator (`multi-vm/run-multi-vm-test.sh`) but via PowerShell + `curl.exe` only; no WSL / Git Bash needed.
```
host ──HTTP/HTTPS──┐
                   ▼
┌──────────────────────┐       ┌──────────────────────┐
│ gateway VM           │──WG──▶│ agent VM             │
│ wg0 kernel           │       │ netstack WG          │
│ HTTP :8880/:8443     │       │ origin :9000         │
│ ExitNode :8080/:1080 │       │ exit proxy 127.0.0.1 │
└──────────┬───────────┘       └──────────┬───────────┘
           │ WebSocket                    │ WebSocket
           └──────────────┬───────────────┘
                          ▼
              ┌──────────────────────┐
              │ control VM           │
              │ Next.js + WS :3000   │
              └──────────────────────┘
```
```
cd C:\Users\neo\GitHub\kakuremichi-org\test
vagrant up --provider=hyperv control gateway agent
```

This boots three Ubuntu 24.04 VMs. Each takes ~2 minutes. When prompted, use the same Windows credentials for the SMB share on every VM.
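Before kicking off the orchestrator it can be worth confirming each VM actually reports `running`. `vagrant status --machine-readable` emits CSV rows of the form `timestamp,target,type,data`; the awk below extracts the `state` row. A captured sample stands in for the live command so the parsing can be shown without Vagrant installed:

```shell
# Sketch: extract a VM's state from `vagrant status --machine-readable`.
# A captured sample stands in for the live command; in practice you would
# pipe `vagrant status control --machine-readable` into the same awk.
sample='1700000000,control,metadata,provider,hyperv
1700000001,control,state,running'
state=$(awk -F, '$3=="state"{print $4}' <<<"$sample")
echo "control: $state"   # prints: control: running
```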
```
bash multi-vm/run-multi-vm-test.sh
```

Orchestration steps, executed by the host against each VM via `vagrant ssh`:

- Detect `hostname -I` for each VM (CONTROL_IP / GATEWAY_IP / AGENT_IP)
- `build-unit-test.sh` on gateway + agent in parallel (`-race`)
- `control-start.sh` boots Control with `ADMIN_EMAIL` env bootstrap
- Host-side curl: login → mint admin PAT → register Gateway (with the detected public IP) / Agent / Tunnel
- Ship a fresh self-signed TLS cert to the gateway VM via `vagrant upload`
- `agent-start.sh`, then `gateway-start.sh`
- `wg show wg0` + cross-VM ping on the gateway
- Host-side HTTP + HTTPS through `http://${GATEWAY_IP}:8880/` (and :8443)
- `curl --proxy` from the agent VM → Exit Node HTTP + SOCKS5
- Auth/authz matrix + FK cascade test (API only)
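The host-side HTTP step boils down to a curl with an explicit Host header against the gateway's inbound port. A minimal stand-in sketch, assuming only that the gateway routes on `Host: test.local` as described above: a throwaway local Python server plays gateway+origin so the command shape can be exercised anywhere (port 18880 is arbitrary, not the real :8880):

```shell
# Stand-in for the host-side inbound check: a throwaway local HTTP server
# plays gateway+origin; the curl shape (explicit Host header, status-code
# check) matches what the orchestrator does against http://${GATEWAY_IP}:8880/.
python3 -m http.server 18880 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
# Wait briefly for the listener to come up.
for _ in 1 2 3 4 5; do
  curl -fsS -o /dev/null http://127.0.0.1:18880/ 2>/dev/null && break
  sleep 1
done
curl -fsS -o /dev/null -w '%{http_code}\n' -H 'Host: test.local' http://127.0.0.1:18880/
kill "$srv" 2>/dev/null
```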
Per-VM logs stay on each VM:

```
vagrant ssh control -c 'tail -50 /tmp/control.log'
vagrant ssh gateway -c 'tail -50 /tmp/gateway.log'
vagrant ssh agent -c 'tail -50 /tmp/agent.log'
```

`control-start.sh` / `gateway-start.sh` / `agent-start.sh` kill leftover processes up front, so just re-run `bash multi-vm/run-multi-vm-test.sh`.
```
vagrant destroy -f
```

```
test/
├── Vagrantfile               # single + multi-VM definitions
├── run-full-test.sh          # single-VM 12-phase test
├── README.md
└── multi-vm/
    ├── run-multi-vm-test.sh  # HOST-side orchestrator
    ├── control-start.sh      # runs on control VM
    ├── gateway-start.sh      # runs on gateway VM
    ├── agent-start.sh        # runs on agent VM
    └── build-unit-test.sh    # runs on gateway/agent VM (Go build + -race tests)
```
- Hyper-V Default Switch is used for all VMs. VM-to-VM traffic goes over the same switch; IPs are assigned via DHCP and detected at test time.
- Only one `gateway` VM is provisioned today. To test multi-gateway failover you can add a `gateway2` block in `Vagrantfile` mirroring `gateway`, wire it up in the orchestrator, and the agent will already fail over between gateway addresses (see `agent/internal/exitnode/*` + the change in `agent/cmd/agent/main.go`).
- The SMB mount is read/write. Binaries are built inside each VM under `/tmp` so SMB latency does not affect the actual run paths.
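The failover note can be made concrete with a hypothetical `gateway2` definition. This is a sketch only: the hostname, memory, and provider settings are assumptions; in practice, duplicate whatever the real `gateway` block in `Vagrantfile` uses.

```ruby
# Hypothetical second gateway, mirroring the existing "gateway" definition.
# All values here are placeholders -- copy the real gateway block instead.
config.vm.define "gateway2" do |gw|
  gw.vm.hostname = "gateway2"
  gw.vm.provider "hyperv" do |h|
    h.memory = 2048
  end
end
```

Wire `gateway2` into `multi-vm/run-multi-vm-test.sh` alongside `gateway` and register it with the Control server; per the note above, the agent's existing logic then fails over between the two gateway addresses.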