Make SF direct command surface baseline

Mikael Hugo 2026-05-08 01:34:07 +02:00
parent 6fc054e7c3
commit b5893d1c28
202 changed files with 1623 additions and 1307 deletions


@@ -49,7 +49,7 @@ One command. Walk away. Come back to a built project with clean git history.
### Auto-Mode Resilience
- **Credential cooldown recovery** — autonomous mode survives transient 429 rate-limit responses with structured cooldown errors and a bounded retry budget.
- **Fire-and-forget auto start** — auto start is detached from active turns to prevent blocking.
- **Fire-and-forget autonomous start** — autonomous startup is detached from active turns to prevent blocking.
- **Scoped forensics** — stuck-loop forensics are now scoped to auto sessions only, preventing false positives in interactive use.
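The credential-cooldown bullet above can be sketched as a small decision function. This is an illustrative sketch only; `next_action`, its parameters, and the retry-budget shape are hypothetical, not SF's actual API:

```python
# Illustrative sketch of cooldown recovery with a bounded retry budget.
# Function and parameter names are hypothetical, not SF's real API.
def next_action(status: int, retry_after_s: float,
                retries_used: int, retry_budget: int):
    """Decide how autonomous mode reacts to a provider response."""
    if status != 429:
        return ("proceed", 0.0)          # not a rate limit; handled elsewhere
    if retries_used >= retry_budget:
        return ("stop", 0.0)             # bounded retry budget exhausted
    return ("cooldown", retry_after_s)   # structured cooldown, then retry
```

The point of the budget is the bound: a transient 429 burst is survivable, but the loop cannot retry forever.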
### TUI Improvements
@@ -121,7 +121,7 @@ Full documentation is in the [`docs/`](./docs/) directory:
### User Guides
- **[Getting Started](./docs/user-docs/getting-started.md)** — install, first run, basic usage
- **[Autonomous Mode](./docs/user-docs/autonomous-mode.md)** — autonomous execution deep-dive
- **[Autonomous Mode](./docs/user-docs/autonomous-mode.md)** — autonomous execution deep-dive
- **[Configuration](./docs/user-docs/configuration.md)** — all preferences, models, git, and hooks
- **[Custom Models](./docs/user-docs/custom-models.md)** — add custom providers (Ollama, vLLM, LM Studio, proxies)
- **[Token Optimization](./docs/user-docs/token-optimization.md)** — profiles, context compression, complexity routing
@@ -247,7 +247,7 @@ Autonomous mode is governed by the Unified Operation Kernel (UOK), not by the LLM.
4. **Crash recovery** — A lock file tracks the current unit. If the session dies, the next `/sf autonomous` reads the surviving session file, synthesizes a recovery briefing from every tool call that made it to disk, and resumes with full context. Parallel orchestrator state is persisted to disk with PID liveness detection, so multi-worker sessions survive crashes too. Through the machine surface, crashes trigger automatic restart with exponential backoff (default 3 attempts).
5. **Provider error recovery** — Transient provider errors (rate limits, 500/503 server errors, overloaded) auto-resume after a delay. Permanent errors (auth, billing) pause for manual review. The model fallback chain retries transient network errors before switching models.
5. **Provider error recovery** — Transient provider errors (rate limits, 500/503 server errors, overloaded) resume automatically after a delay. Permanent errors (auth, billing) pause for manual review. The model fallback chain retries transient network errors before switching models.
6. **Stuck detection** — A sliding-window detector identifies repeated dispatch patterns (including multi-unit cycles). On detection, it retries once with a deep diagnostic. If that also fails, autonomous mode stops and reports the exact file it expected.
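The sliding-window detector in step 6 can be sketched roughly as follows. Window size, threshold, and the `make_stuck_detector` name are illustrative, and this sketch only catches straight repeats, not the multi-unit cycles the real detector also handles:

```python
from collections import deque

# Illustrative sliding-window stuck detector; window and threshold values
# are made up for the sketch and do not reflect SF's real tuning.
def make_stuck_detector(window: int = 6, threshold: int = 3):
    recent = deque(maxlen=window)

    def observe(dispatch_key: str) -> bool:
        """Record a dispatch; return True when it repeats enough to look stuck."""
        recent.append(dispatch_key)
        return recent.count(dispatch_key) >= threshold

    return observe
```

Because the window slides, old dispatches age out and legitimate occasional retries do not accumulate toward the threshold.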

copilot-thoughts.md (new file, +115 lines)

@@ -0,0 +1,115 @@
# Copilot CLI Autopilot Notes For SF
Sources checked 2026-05-08:
- GitHub Docs, "Allowing GitHub Copilot CLI to work autonomously"
<https://docs.github.com/en/copilot/concepts/agents/copilot-cli/autopilot>
- GitHub Docs, "GitHub Copilot CLI command reference"
<https://docs.github.com/en/copilot/reference/copilot-cli-reference/cli-command-reference>
- GitHub Changelog, "GitHub Copilot CLI is now generally available"
<https://github.blog/changelog/2026-02-25-github-copilot-cli-is-now-generally-available/>
- GitHub Copilot CLI product page
<https://github.com/features/copilot/cli>
- GitHub Changelog, "Copilot CLI now supports BYOK and local models"
<https://github.blog/changelog/2026-04-07-copilot-cli-now-supports-byok-and-local-models/>
## Useful Pattern
Copilot CLI keeps three concepts separate:
- `--autopilot` controls whether the agent keeps continuing through multiple
model/tool turns until completion, a blocker, interruption, or a continuation
limit.
- `--allow-all` / `--yolo` expands permission to use tools, paths, and URLs.
- `--no-ask-user` suppresses clarifying questions, but does not itself create a
multi-turn continuation loop.
That separation is the part SF should copy.
GitHub's documented programmatic autopilot example is:
```bash
copilot --autopilot --yolo --max-autopilot-continues 10 -p "YOUR PROMPT HERE"
```
So the important shape is not the word "autopilot"; it is the explicit split
between continuation (`--autopilot`), permission expansion (`--yolo` /
`--allow-all`), and a runaway-loop limiter (`--max-autopilot-continues`).
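That three-way split can be modeled as a single continuation decision that deliberately ignores permissions. A concept sketch, not Copilot's or SF's code:

```python
# Concept sketch of the continuation/permission split. Permission expansion
# (--yolo / --allow-all) and question suppression (--no-ask-user) are separate
# axes and are deliberately absent from the continuation decision.
def should_continue(autopilot: bool, continues_used: int,
                    max_continues: int, blocked: bool, done: bool) -> bool:
    if not autopilot or done or blocked:
        return False
    return continues_used < max_continues  # runaway-loop limiter
```

Keeping permissions out of this function is the design point: expanding tool access must never silently extend how long the loop runs.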
GitHub also presents a strong interactive handoff: plan first, then accept the
plan and build on autopilot. SF's equivalent should be "accept plan and run
autonomously", backed by UOK state rather than a separate mode.
## Copilot CLI Capabilities Worth Tracking
- Plan mode can transition directly into autopilot.
- `/fleet` runs parallel subagents.
- `/remote` supports steering from another device.
- `/tasks` exposes background tasks.
- `/session` exposes session info, checkpoints, files, plans, cleanup, and
pruning.
- `/skills`, `/plugin`, `/mcp`, and `/agent` customize behavior and tool access.
- BYOK/local-model/offline mode exists; built-in subagents inherit the selected
provider configuration.
## SF Competitive Read
Copilot CLI's public autopilot story is polished: plan, approve, continue
without step-by-step approval, cap continuation, steer remotely, inspect tasks
and sessions. SF already has deeper autonomous machinery: UOK policy gates,
DB-backed state, recovery, verification, scheduling, captures, forensics,
projections, and self-reporting.
The gap to close is presentation and control surface clarity, not core
autonomous capability.
## SF Names
SF should not import Copilot's `autopilot` product name. In SF, run control is:
- `manual`
- `assisted`
- `autonomous`
SF's permission profiles are separate:
- `restricted`
- `normal`
- `trusted`
- `unrestricted`
SF surfaces and encodings are also separate:
- Surface: TUI, CLI, web, editor, machine surface.
- Output format: `text`, `json`, `stream-json`.
- Protocol: RPC, stdio JSON-RPC, ACP, HTTP/RPC, wire.
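One illustrative way to keep these axes orthogonal is to model each as its own field rather than folding them into one mode flag. A sketch, not SF's actual schema:

```python
from dataclasses import dataclass
from typing import Literal

# Illustrative types only; not SF's actual configuration schema.
RunControl = Literal["manual", "assisted", "autonomous"]
PermissionProfile = Literal["restricted", "normal", "trusted", "unrestricted"]
OutputFormat = Literal["text", "json", "stream-json"]

@dataclass(frozen=True)
class RunConfig:
    """Each axis is an independent field; no combined 'autopilot' mode."""
    run_control: RunControl
    permission_profile: PermissionProfile
    output_format: OutputFormat
```

Any combination is expressible, which is exactly what a single merged mode flag would forbid.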
## Decisions
- Use `/autonomous` for continuous run control.
- Use `sf headless` only for the machine surface command name.
- Use `--autonomous` to chain milestone creation into autonomous mode.
- Reject `--auto`, `--full`, and `--auto-dispatch` instead of silently mapping
them.
- Do not use "autopilot" as SF product copy. Keep it as competitor context
only; the SF product term is autonomous mode.
- Keep question behavior driven by run control and policy gates, not by the word
`headless`.
- Keep permission expansion driven by permission profile, not by autonomous run
control.
## Implementation Pull-Through
- UOK lifecycle records carry `runControl`.
- UOK lifecycle records and execution-policy decisions carry
`permissionProfile`.
- Schedule command state uses `autonomous_dispatch`.
- Human docs describe docs/specs as exports from `.sf`/SQLite working state.
- User-facing planning should offer an obvious "accept plan and run
autonomously" route.
- Status surfaces should make autonomous background work as inspectable as
Copilot's `/tasks` and `/session` surfaces.
- Continuation limits should be explicit in autonomous settings and status.
The target model is simple: same flow, different surfaces; same run-control
names, different permission profiles; same output, different encodings.


@@ -72,11 +72,10 @@ Run control describes how far SF continues through the flow before stopping for
`auto` is not a run-control mode. Use **autonomous** for continuous run control; use **assisted** for bounded human-guided progression.
> Implementation note: if product language uses "autopilot", it maps to
> autonomous run control. It must not introduce a separate flow, protocol,
> output format, or compatibility alias. Autopilot means the UOK-governed
> autonomous controller keeps moving until one of its explicit stop conditions
> fires.
> Competitor note: Copilot CLI calls continuous run control autopilot.
> SF does not use that product name. The SF term is autonomous mode,
> and it stays separate from permission profiles, surfaces, protocols,
> and output formats.
UOK kernel records carry `runControl` as a first-class lifecycle field. Workflow phases such as planning, building, verification, and finalization are separate execution stages, not run-control modes.
@@ -114,18 +113,13 @@ Markdown under `.sf/` has two roles:
Markdown under `docs/specs/` is a human export for review, navigation, and git history. Generated docs can change; Git records that human-facing history. If SF needs its own operational history, it should store that in `.sf`/DB-backed state. Plans should record any surface, protocol, output-format, run-control, or permission-profile impact explicitly when a milestone changes integration behavior.
Reflection notes, capture files, and session thoughts are input material, not a
parallel backlog. Autonomous mode may triage them, but durable outcomes must
graduate into DB-backed requirements, decisions, knowledge, roadmap rows,
tests, or tracked documentation.
## Source Placement
SF source placement follows the same axis model. New code should extend the owning axis instead of creating parallel trees.
### Core Flow
- `src/resources/extensions/sf/` owns the SF workflow extension: planning tools, UOK/runtime state, `/sf` commands, prompts, templates, doctors, schedule, and DB-backed state.
- `src/resources/extensions/sf/` owns the SF workflow extension: planning tools, UOK/runtime state, `/next` commands, prompts, templates, doctors, schedule, and DB-backed state.
- `src/resources/extensions/` owns bundled extension packages loaded into the runtime.
- `src/resources/agents/`, `src/resources/skills/`, and `src/resources/workflows/` own bundled runtime resources, not independent product flows.


@@ -101,7 +101,7 @@ Legacy `schedule.jsonl` files are import-only compatibility inputs. Rows without
## CLI Reference
All commands are invoked as `/sf schedule <subcommand>` in the TUI or `sf schedule <subcommand>` from the shell.
All commands are invoked as `/schedule <subcommand>` in the TUI or `sf schedule <subcommand>` from the shell.
### `sf schedule add`
@@ -207,7 +207,7 @@ sf schedule run 01ARZ3ND
On every SF startup, `loader.ts` calls `findDue()` for both project and global scopes. If any items are due, it prints:
```
[forge] N scheduled item(s) due now. Manage: /sf schedule list
[forge] N scheduled item(s) due now. Manage: /schedule list
```
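That startup check can be sketched as follows; the item shape (`due_at`) and the function names are assumptions, not the real `loader.ts` interface:

```python
from datetime import datetime

# Illustrative sketch: 'due_at' and these function names are assumptions,
# not the real loader.ts interface.
def find_due(items, now):
    return [item for item in items if item["due_at"] <= now]

def startup_notice(project_items, global_items, now):
    """Combine due items from both scopes into the startup message."""
    due = find_due(project_items, now) + find_due(global_items, now)
    if not due:
        return None
    return f"[forge] {len(due)} scheduled item(s) due now. Manage: /schedule list"
```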
### Machine Snapshot (`sf headless query`)


@@ -1,11 +1,11 @@
# Autonomous Mode
Autonomous mode is SF's product-development execution engine for the purpose-to-software compiler. It advances only from structured state: bounded intent, PDD fields, research assumptions, tests or executable evidence, implementation, verification, and recorded outcomes. Run `/sf autonomous`, walk away, come back to built software with clean git history.
Autonomous mode is SF's product-development execution engine for the purpose-to-software compiler. It advances only from structured state: bounded intent, PDD fields, research assumptions, tests or executable evidence, implementation, verification, and recorded outcomes. Run `/autonomous`, walk away, come back to built software with clean git history.
> Terminology: "autopilot" is user-facing shorthand for autonomous mode. It is
> not a second mode, not a looser `auto` compatibility path, and not a permission
> bypass. The same UOK policy, evidence, budget, blocker, and completion gates
> decide how far it continues.
> Terminology: SF uses **autonomous mode** for continuous run control. It is
> not `auto`, not `autopilot`, not `headless`, and not a permission bypass. The
> same UOK policy, evidence, budget, blocker, and completion gates decide how
> far it continues.
## How It Works
@@ -64,13 +64,13 @@ When your project has independent milestones, you can run them simultaneously. E
### Crash Recovery
A lock file tracks the current unit. If the session dies, the next `/sf autonomous` reads the surviving session file, synthesizes a recovery briefing from every tool call that made it to disk, and resumes with full context.
A lock file tracks the current unit. If the session dies, the next `/autonomous` reads the surviving session file, synthesizes a recovery briefing from every tool call that made it to disk, and resumes with full context.
**Machine-surface auto-restart (v2.26):** When running `sf headless autonomous`, crashes trigger automatic restart with exponential backoff (5s → 10s → 30s cap, default 3 attempts). Configure with `--max-restarts N`. SIGINT/SIGTERM bypasses restart. Combined with crash recovery, this enables true overnight "run until done" execution. `headless` selects the non-interactive surface; `autonomous` selects run control.
**Machine-surface automatic restart (v2.26):** When running `sf headless autonomous`, crashes trigger automatic restart with exponential backoff (5s → 10s → 30s cap, default 3 attempts). Configure with `--max-restarts N`. SIGINT/SIGTERM bypasses restart. Combined with crash recovery, this enables true overnight "run until done" execution. `headless` selects the non-interactive surface; `autonomous` selects run control.
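The documented backoff schedule (5s → 10s → 30s cap, one delay per restart attempt) can be sketched as:

```python
# Sketch of the documented restart backoff: 5s, then 10s, then a 30s cap.
# The function name and parameterization are illustrative.
def restart_delays(max_restarts: int = 3, schedule=(5, 10, 30)):
    """One delay per restart attempt, clamped at the final (cap) value."""
    return [schedule[min(i, len(schedule) - 1)] for i in range(max_restarts)]
```

Raising `--max-restarts` beyond 3 would, under this reading, keep retrying at the 30s cap.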
### Provider Error Recovery
SF classifies provider errors and auto-resumes when safe:
SF classifies provider errors and resumes automatically when safe:
| Error type | Examples | Action |
|-----------|----------|--------|
@@ -106,16 +106,16 @@ The sliding-window approach reduces false positives on legitimate retries (e.g.,
### Post-Mortem Investigation (v2.40)
`/sf forensics` is a full-access SF debugger for post-mortem analysis of autonomous mode failures. It provides:
`/forensics` is a full-access SF debugger for post-mortem analysis of autonomous mode failures. It provides:
- **Anomaly detection** — structured identification of stuck loops, cost spikes, timeouts, missing artifacts, and crashes with severity levels
- **Unit traces** — last 10 unit executions with error details and execution times
- **Metrics analysis** — cost, token counts, and execution time breakdowns
- **Doctor integration** — includes structural health issues from `/sf doctor`
- **Doctor integration** — includes structural health issues from `/doctor`
- **LLM-guided investigation** — an agent session with full tool access to investigate root causes
```
/sf forensics [optional problem description]
/forensics [optional problem description]
```
See [Troubleshooting](./troubleshooting.md) for more on diagnosing issues.
@@ -183,7 +183,7 @@ After a milestone completes, SF auto-generates a self-contained HTML report in `
auto_report: true # enabled by default
```
Generate manually anytime with `/sf export --html`, or generate reports for all milestones at once with `/sf export --html --all` (v2.28).
Generate manually anytime with `/export --html`, or generate reports for all milestones at once with `/export --html --all` (v2.28).
### Failure Recovery (v2.28)
@@ -203,7 +203,7 @@ This linear flow is easier to debug, uses less memory (no recursive call stack),
### Real-Time Health Visibility (v2.40)
Doctor issues (from `/sf doctor`) now surface in real time across three places:
Doctor issues (from `/doctor`) now surface in real time across three places:
- **Dashboard widget** — health indicator with issue count and severity
- **Workflow visualizer** — issues shown in the status panel
@@ -226,7 +226,7 @@ See [Configuration](./configuration.md) for skill routing preferences.
### Start
```
/sf autonomous
/autonomous
```
### Pause
@@ -236,7 +236,7 @@ Press **Escape**. The conversation is preserved. You can interact with the agent
### Resume
```
/sf autonomous
/autonomous
```
Autonomous mode reads disk state and picks up where it left off.
@@ -244,7 +244,7 @@ Autonomous mode reads disk state and picks up where it left off.
### Stop
```
/sf stop
/stop
```
Stops autonomous mode gracefully. Can be run from a different terminal.
@@ -252,7 +252,7 @@ Stops autonomous mode gracefully. Can be run from a different terminal.
### Steer
```
/sf steer
/steer
```
Hard-steer plan documents during execution without stopping the pipeline. Changes are picked up at the next phase boundary.
@@ -260,7 +260,7 @@ Hard-steer plan documents during execution without stopping the pipeline. Change
### Capture
```
/sf capture "add rate limiting to API endpoints"
/capture "add rate limiting to API endpoints"
```
Fire-and-forget thought capture. Captures are triaged automatically between tasks. See [Captures & Triage](./captures-triage.md).
@@ -268,14 +268,14 @@ Fire-and-forget thought capture. Captures are triaged automatically between task
### Visualize
```
/sf visualize
/visualize
```
Open the workflow visualizer — interactive tabs for progress, dependencies, metrics, and timeline. See [Workflow Visualizer](./visualizer.md).
## Dashboard
`Ctrl+Alt+G` or `/sf status` shows real-time progress:
`Ctrl+Alt+G` or `/status` shows real-time progress:
- Current milestone, slice, and task
- Autonomous mode elapsed time and phase


@@ -9,8 +9,8 @@ Captures let you fire-and-forget thoughts during autonomous mode execution. Inst
While autonomous mode is running (or any time):
```
/sf capture "add rate limiting to the API endpoints"
/sf capture "the auth flow should support OAuth, not just JWT"
/capture "add rate limiting to the API endpoints"
/capture "the auth flow should support OAuth, not just JWT"
```
Captures are appended to `.sf/CAPTURES.md` and triaged automatically between tasks.
@@ -23,7 +23,7 @@ Captures are appended to `.sf/CAPTURES.md` and triaged automatically between tas
capture → triage → confirm → resolve → resume
```
1. **Capture** — `/sf capture "thought"` appends to `.sf/CAPTURES.md` with a timestamp and unique ID
1. **Capture** — `/capture "thought"` appends to `.sf/CAPTURES.md` with a timestamp and unique ID
2. **Triage** — at natural seams between tasks (in `handleAgentEnd`), SF detects pending captures and classifies them
3. **Confirm** — the user is shown the proposed resolution and confirms or adjusts
4. **Resolve** — the resolution is applied (task injection, replan trigger, deferral, etc.)
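The pipeline above, reduced to a tiny stage progression (stage names come from the docs; the code itself is illustrative):

```python
# The capture pipeline as a minimal stage progression; illustrative only.
STAGES = ["capture", "triage", "confirm", "resolve", "resume"]

def advance(stage: str) -> str:
    """Move a capture to the next pipeline stage; 'resume' is terminal."""
    i = STAGES.index(stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]
```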
@@ -55,7 +55,7 @@ The LLM classifies each capture and proposes a resolution. Plan-modifying resolu
Trigger triage manually at any time:
```
/sf triage
/triage
```
This is useful when you've accumulated several captures and want to process them before the next natural seam.
@@ -78,5 +78,5 @@ Captures always resolve to the **original project root's** `.sf/CAPTURES.md`, no
| Command | Description |
|---------|-------------|
| `/sf capture "text"` | Capture a thought (quotes optional for single words) |
| `/sf triage` | Manually trigger triage of pending captures |
| `/capture "text"` | Capture a thought (quotes optional for single words) |
| `/triage` | Manually trigger triage of pending captures |


@@ -4,78 +4,78 @@
| Command | Description |
|---------|-------------|
| `/sf` | Assisted mode — execute one unit at a time, pause between each |
| `/sf next` | Explicit assisted mode (same as `/sf`) |
| `/sf autonomous` | Autonomous product loop — research, plan, execute, commit, repeat |
| `/sf quick` | Execute a quick task with SF guarantees (atomic commits, state tracking) without full planning overhead |
| `/sf stop` | Stop autonomous mode gracefully |
| `/sf pause` | Pause autonomous mode (preserves state, `/sf autonomous` to resume) |
| `/sf steer` | Hard-steer plan documents during execution |
| `/sf discuss` | Discuss architecture and decisions (works alongside autonomous mode) |
| `/sf status` | Progress dashboard |
| `/sf widget` | Cycle dashboard widget: full / small / min / off |
| `/sf queue` | Queue and reorder future milestones (safe during autonomous mode) |
| `/sf capture` | Fire-and-forget thought capture (works during autonomous mode) |
| `/sf triage` | Manually trigger triage of pending captures |
| `/sf dispatch` | Dispatch a specific phase directly (research, plan, execute, complete, reassess, uat, replan) |
| `/sf history` | View execution history (supports `--cost`, `--phase`, `--model` filters) |
| `/sf forensics` | Full-access SF debugger — structured anomaly detection, unit traces, and LLM-guided root-cause analysis for autonomous mode failures |
| `/sf cleanup` | Clean up SF state files and stale worktrees |
| `/sf visualize` | Open workflow visualizer (progress, deps, metrics, timeline) |
| `/sf export --html` | Generate self-contained HTML report for current or completed milestone |
| `/sf export --html --all` | Generate retrospective reports for all milestones at once |
| `/sf update` | Update SF to the latest version in-session |
| `/sf knowledge` | Add persistent project knowledge (rule, pattern, or lesson) |
| `/sf fast` | Toggle service tier for supported models (prioritized API routing) |
| `/sf rate` | Rate last unit's model tier (over/ok/under) — improves adaptive routing |
| `/sf changelog` | Show categorized release notes |
| `/sf logs` | Browse activity logs, debug logs, and metrics |
| `/sf remote` | Configure remote question delivery |
| `/sf help` | Categorized command reference with descriptions for all SF subcommands |
| `/next` | Assisted mode — execute one unit at a time, pause between each |
| `/autonomous` | Autonomous product loop — research, plan, execute, commit, repeat |
| `/quick` | Execute a quick task with SF guarantees (atomic commits, state tracking) without full planning overhead |
| `/stop` | Stop autonomous mode gracefully |
| `/pause` | Pause autonomous mode (preserves state, `/autonomous` to resume) |
| `/steer` | Hard-steer plan documents during execution |
| `/discuss` | Discuss architecture and decisions (works alongside autonomous mode) |
| `/status` | Progress dashboard |
| `/widget` | Cycle dashboard widget: full / small / min / off |
| `/queue` | Queue and reorder future milestones (safe during autonomous mode) |
| `/capture` | Fire-and-forget thought capture (works during autonomous mode) |
| `/triage` | Manually trigger triage of pending captures |
| `/dispatch` | Dispatch a specific phase directly (research, plan, execute, complete, reassess, uat, replan) |
| `/history` | View execution history (supports `--cost`, `--phase`, `--model` filters) |
| `/forensics` | Full-access SF debugger — structured anomaly detection, unit traces, and LLM-guided root-cause analysis for autonomous mode failures |
| `/cleanup` | Clean up SF state files and stale worktrees |
| `/visualize` | Open workflow visualizer (progress, deps, metrics, timeline) |
| `/export --html` | Generate self-contained HTML report for current or completed milestone |
| `/export --html --all` | Generate retrospective reports for all milestones at once |
| `/update` | Update SF to the latest version in-session |
| `/knowledge` | Add persistent project knowledge (rule, pattern, or lesson) |
| `/fast` | Toggle service tier for supported models (prioritized API routing) |
| `/rate` | Rate last unit's model tier (over/ok/under) — improves adaptive routing |
| `/changelog` | Show categorized release notes |
| `/logs` | Browse activity logs, debug logs, and metrics |
| `/remote` | Configure remote question delivery |
| `/help` | Categorized command reference with descriptions for all SF subcommands |
## Configuration & Diagnostics
| Command | Description |
|---------|-------------|
| `/sf prefs` | Model selection, timeouts, budget ceiling |
| `/sf mode` | Switch workflow mode (solo/team) with coordinated defaults for milestone IDs, git commit behavior, and documentation |
| `/sf config` | Re-run the provider setup wizard (LLM provider + tool keys) |
| `/sf keys` | API key manager — list, add, remove, test, rotate, doctor |
| `/sf doctor` | Runtime health checks with auto-fix — issues surface in real time across widget, visualizer, and HTML reports (v2.40) |
| `/sf inspect` | Show SQLite DB diagnostics |
| `/sf init` | Project init wizard — detect, configure, bootstrap `.sf/` |
| `/sf setup` | Global setup status and configuration |
| `/sf skill-health` | Skill lifecycle dashboard — usage stats, success rates, token trends, staleness warnings |
| `/sf skill-health <name>` | Detailed view for a single skill |
| `/sf skill-health --declining` | Show only skills flagged for declining performance |
| `/sf skill-health --stale N` | Show skills unused for N+ days |
| `/sf hooks` | Show configured post-unit and pre-dispatch hooks |
| `/sf run-hook` | Manually trigger a specific hook |
| `/sf migrate` | Migrate a v1 `.planning` directory to `.sf` format |
| `/prefs` | Model selection, timeouts, budget ceiling |
| `/mode` | Switch workflow mode (solo/team) with coordinated defaults for milestone IDs, git commit behavior, and documentation |
| `/config` | Re-run the provider setup wizard (LLM provider + tool keys) |
| `/keys` | API key manager — list, add, remove, test, rotate, doctor |
| `/doctor` | Runtime health checks with auto-fix — issues surface in real time across widget, visualizer, and HTML reports (v2.40) |
| `/inspect` | Show SQLite DB diagnostics |
| `/init` | Project init wizard — detect, configure, bootstrap `.sf/` |
| `/setup` | Global setup status and configuration |
| `/skill-health` | Skill lifecycle dashboard — usage stats, success rates, token trends, staleness warnings |
| `/skill-health <name>` | Detailed view for a single skill |
| `/skill-health --declining` | Show only skills flagged for declining performance |
| `/skill-health --stale N` | Show skills unused for N+ days |
| `/hooks` | Show configured post-unit and pre-dispatch hooks |
| `/run-hook` | Manually trigger a specific hook |
| `/migrate` | Migrate a v1 `.planning` directory to `.sf` format |
## Milestone Management
| Command | Description |
|---------|-------------|
| `/sf new-milestone` | Create a new milestone |
| `/sf skip` | Prevent a unit from autonomous mode dispatch |
| `/sf undo` | Revert last completed unit |
| `/sf undo-task` | Reset a specific task's completion state (DB + markdown) |
| `/sf reset-slice` | Reset a slice and all its tasks (DB + markdown) |
| `/sf park` | Park a milestone — skip without deleting |
| `/sf unpark` | Reactivate a parked milestone |
| Discard milestone | Available via `/sf` wizard → "Milestone actions" → "Discard" |
| `/new-milestone` | Create a new milestone |
| `/skip` | Prevent a unit from autonomous mode dispatch |
| `/undo` | Revert last completed unit |
| `/undo-task` | Reset a specific task's completion state (DB + markdown) |
| `/reset-slice` | Reset a slice and all its tasks (DB + markdown) |
| `/park` | Park a milestone — skip without deleting |
| `/unpark` | Reactivate a parked milestone |
| Discard milestone | Available via `/next` wizard → "Milestone actions" → "Discard" |
## Parallel Orchestration
| Command | Description |
|---------|-------------|
| `/sf parallel start` | Analyze eligibility, confirm, and start workers |
| `/sf parallel status` | Show all workers with state, progress, and cost |
| `/sf parallel stop [MID]` | Stop all workers or a specific milestone's worker |
| `/sf parallel pause [MID]` | Pause all workers or a specific one |
| `/sf parallel resume [MID]` | Resume paused workers |
| `/sf parallel merge [MID]` | Merge completed milestones back to main |
| `/parallel start` | Analyze eligibility, confirm, and start workers |
| `/parallel status` | Show all workers with state, progress, and cost |
| `/parallel stop [MID]` | Stop all workers or a specific milestone's worker |
| `/parallel pause [MID]` | Pause all workers or a specific one |
| `/parallel resume [MID]` | Resume paused workers |
| `/parallel merge [MID]` | Merge completed milestones back to main |
See [Parallel Orchestration](./parallel-orchestration.md) for full documentation.
@@ -83,43 +83,43 @@ See [Parallel Orchestration](./parallel-orchestration.md) for full documentation
| Command | Description |
|---------|-------------|
| `/sf start` | Start a workflow template (bugfix, spike, feature, hotfix, refactor, security-audit, dep-upgrade, full-project) |
| `/sf start resume` | Resume an in-progress workflow |
| `/sf templates` | List available workflow templates |
| `/sf templates info <name>` | Show detailed template info |
| `/start` | Start a workflow template (bugfix, spike, feature, hotfix, refactor, security-audit, dep-upgrade, full-project) |
| `/start resume` | Resume an in-progress workflow |
| `/templates` | List available workflow templates |
| `/templates info <name>` | Show detailed template info |
## Custom Workflows (v2.42)
| Command | Description |
|---------|-------------|
| `/sf workflow new` | Create a new workflow definition (via skill) |
| `/sf workflow run <name>` | Create a run and start autonomous mode |
| `/sf workflow list` | List workflow runs |
| `/sf workflow validate <name>` | Validate a workflow definition YAML |
| `/sf workflow pause` | Pause custom workflow autonomous mode |
| `/sf workflow resume` | Resume paused custom workflow autonomous mode |
| `/workflow new` | Create a new workflow definition (via skill) |
| `/workflow run <name>` | Create a run and start autonomous mode |
| `/workflow list` | List workflow runs |
| `/workflow validate <name>` | Validate a workflow definition YAML |
| `/workflow pause` | Pause custom workflow autonomous mode |
| `/workflow resume` | Resume paused custom workflow autonomous mode |
`/sf autonomous` is the product-development loop that chooses the next useful unit from project state. `/sf start` is guided workflow kickoff and may ask clarifying questions. `/sf workflow run` executes an explicit YAML workflow definition. There is no separate `/sf auto` mode.
`/autonomous` is the product-development loop that chooses the next useful unit from project state. `/start` is guided workflow kickoff and may ask clarifying questions. `/workflow run` executes an explicit YAML workflow definition. There is no separate `/auto` mode.
## Extensions
| Command | Description |
|---------|-------------|
| `/sf extensions list` | List all extensions and their status |
| `/sf extensions enable <id>` | Enable a disabled extension |
| `/sf extensions disable <id>` | Disable an extension |
| `/sf extensions info <id>` | Show extension details |
| `/extensions list` | List all extensions and their status |
| `/extensions enable <id>` | Enable a disabled extension |
| `/extensions disable <id>` | Disable an extension |
| `/extensions info <id>` | Show extension details |
## cmux Integration
| Command | Description |
|---------|-------------|
| `/sf cmux status` | Show cmux detection, prefs, and capabilities |
| `/sf cmux on` | Enable cmux integration |
| `/sf cmux off` | Disable cmux integration |
| `/sf cmux notifications on/off` | Toggle cmux desktop notifications |
| `/sf cmux sidebar on/off` | Toggle cmux sidebar metadata |
| `/sf cmux splits on/off` | Toggle cmux visual subagent splits |
| `/cmux status` | Show cmux detection, prefs, and capabilities |
| `/cmux on` | Enable cmux integration |
| `/cmux off` | Disable cmux integration |
| `/cmux notifications on/off` | Toggle cmux desktop notifications |
| `/cmux sidebar on/off` | Toggle cmux sidebar metadata |
| `/cmux splits on/off` | Toggle cmux visual subagent splits |
## GitHub Sync (v2.39)
@@ -236,7 +236,7 @@ echo "Build a CLI tool" | sf headless new-milestone --context -
**Exit codes:** `0` = complete, `1` = error or timeout, `2` = blocked.
Any `/sf` subcommand works as a positional argument — `sf headless status`, `sf headless doctor`, `sf headless dispatch execute`, etc.
Any `/next` subcommand works as a positional argument — `sf headless status`, `sf headless doctor`, `sf headless dispatch execute`, etc.
### `sf headless query`
@@ -280,15 +280,15 @@ sf headless query | jq '.cost.total'
## MCP Integrations
`/sf mcp` shows configured external MCP tool servers. SF does not expose its own
workflow as an MCP server; run SF directly with `sf` or `/sf autonomous`.
`/mcp` shows configured external MCP tool servers. SF does not expose its own
workflow as an MCP server; run SF directly with `sf` or `/autonomous`.
## In-Session Update
`/sf update` checks npm for a newer version of SF and installs it without leaving the session.
`/update` checks npm for a newer version of SF and installs it without leaving the session.
```bash
/sf update
/update
# Current version: v2.36.0
# Checking npm registry...
# Updated to v2.37.0. Restart SF to use the new version.
@ -298,14 +298,14 @@ If already up to date, it reports so and takes no action.
## Export
`/sf export` generates reports of milestone work.
`/export` generates reports of milestone work.
```bash
# Generate HTML report for the active milestone
/sf export --html
/export --html
# Generate retrospective reports for ALL milestones at once
/sf export --html --all
/export --html --all
```
Reports are saved to `.sf/reports/` with a browseable `index.html` that links to all generated snapshots.
@ -1,20 +1,20 @@
# Configuration
SF preferences live in `~/.sf/PREFERENCES.md` (global) or `.sf/PREFERENCES.md` (project-local). Manage interactively with `/sf prefs`.
SF preferences live in `~/.sf/PREFERENCES.md` (global) or `.sf/PREFERENCES.md` (project-local). Manage interactively with `/prefs`.
## `/sf prefs` Commands
## `/prefs` Commands
| Command | Description |
|---------|-------------|
| `/sf prefs` | Open the global preferences wizard (default) |
| `/sf prefs global` | Interactive wizard for global preferences (`~/.sf/PREFERENCES.md`) |
| `/sf prefs project` | Interactive wizard for project preferences (`.sf/PREFERENCES.md`) |
| `/sf prefs status` | Show current preference files, merged values, and skill resolution status |
| `/sf prefs wizard` | Alias for `/sf prefs global` |
| `/sf prefs setup` | Alias for `/sf prefs wizard` — creates preferences file if missing |
| `/sf prefs import-claude` | Import Claude marketplace plugins and skills as namespaced SF components |
| `/sf prefs import-claude global` | Import to global scope |
| `/sf prefs import-claude project` | Import to project scope |
| `/prefs` | Open the global preferences wizard (default) |
| `/prefs global` | Interactive wizard for global preferences (`~/.sf/PREFERENCES.md`) |
| `/prefs project` | Interactive wizard for project preferences (`.sf/PREFERENCES.md`) |
| `/prefs status` | Show current preference files, merged values, and skill resolution status |
| `/prefs wizard` | Alias for `/prefs global` |
| `/prefs setup` | Alias for `/prefs wizard` — creates preferences file if missing |
| `/prefs import-claude` | Import Claude marketplace plugins and skills as namespaced SF components |
| `/prefs import-claude global` | Import to global scope |
| `/prefs import-claude project` | Import to project scope |
## Preferences File Format
@ -50,12 +50,12 @@ token_profile: balanced
- **Array fields** (`always_use_skills`, etc.): concatenated (global first, then project)
- **Object fields** (`models`, `git`, `auto_supervisor`): shallow-merged, project overrides per-key
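As a sketch, the merge rules above read roughly like this pure function. The field names are from this page; the helper itself is illustrative, not SF's actual implementation:

```typescript
type Prefs = Record<string, unknown>;

function isPlainObject(v: unknown): v is Record<string, unknown> {
  return typeof v === "object" && v !== null && !Array.isArray(v);
}

function mergePrefs(global: Prefs, project: Prefs): Prefs {
  const merged: Prefs = { ...global };
  for (const [key, value] of Object.entries(project)) {
    const base = merged[key];
    if (Array.isArray(base) && Array.isArray(value)) {
      // Array fields: concatenated, global entries first
      merged[key] = [...base, ...value];
    } else if (isPlainObject(base) && isPlainObject(value)) {
      // Object fields: shallow merge, project overrides per-key
      merged[key] = { ...base, ...value };
    } else {
      // Scalar fields: project overrides outright
      merged[key] = value;
    }
  }
  return merged;
}
```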
## Global API Keys (`/sf config`)
## Global API Keys (`/config`)
Tool API keys are stored globally in `~/.sf/agent/auth.json` and apply to all projects automatically. Set them once with `/sf config` — no need to configure per-project `.env` files.
Tool API keys are stored globally in `~/.sf/agent/auth.json` and apply to all projects automatically. Set them once with `/config` — no need to configure per-project `.env` files.
```bash
/sf config
/config
```
This opens an interactive wizard showing which keys are configured and which are missing. Select a tool to enter its key.
@ -71,7 +71,7 @@ This opens an interactive wizard showing which keys are configured and which are
### How it works
1. `/sf config` saves keys to `~/.sf/agent/auth.json`
1. `/config` saves keys to `~/.sf/agent/auth.json`
2. On every session start, `loadToolApiKeys()` reads the file and sets environment variables
3. Keys apply to all projects — no per-project setup required
4. Environment variables (`export BRAVE_API_KEY=...`) take precedence over saved keys
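The precedence rule can be sketched as follows. The real `loadToolApiKeys` reads `~/.sf/agent/auth.json`; here the parsed contents and environment are passed in directly, and the `<TOOL>_API_KEY` naming is an assumption for illustration:

```typescript
type SavedKeys = Record<string, { key: string }>;

function applyToolApiKeys(
  saved: SavedKeys,
  env: Record<string, string | undefined>,
): void {
  for (const [tool, entry] of Object.entries(saved)) {
    const envVar = `${tool.toUpperCase()}_API_KEY`;
    // An exported environment variable takes precedence over the saved key.
    if (env[envVar] === undefined) env[envVar] = entry.key;
  }
}
```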
@ -649,7 +649,7 @@ custom_instructions:
- "Prefer functional patterns over classes"
```
For project-specific knowledge (patterns, gotchas, lessons learned), use `.sf/KNOWLEDGE.md` instead — it's injected into every agent prompt automatically. Add entries with `/sf knowledge rule|pattern|lesson <description>`.
For project-specific knowledge (patterns, gotchas, lessons learned), use `.sf/KNOWLEDGE.md` instead — it's injected into every agent prompt automatically. Add entries with `/knowledge rule|pattern|lesson <description>`.
### `RUNTIME.md` — Runtime Context (v2.39)
@ -708,7 +708,7 @@ context_management:
### `service_tier` (v2.42)
OpenAI service tier preference for supported models. Toggle with `/sf fast`.
OpenAI service tier preference for supported models. Toggle with `/fast`.
| Value | Behavior |
|-------|----------|
@ -722,7 +722,7 @@ service_tier: priority
### `forensics_dedup` (v2.43)
Opt-in: search existing issues and PRs before filing from `/sf forensics`. Uses additional AI tokens.
Opt-in: search existing issues and PRs before filing from `/forensics`. Uses additional AI tokens.
```yaml
forensics_dedup: true # default: false
@ -823,7 +823,7 @@ notifications:
auto_visualize: true
# Service tier
service_tier: priority # "priority" or "flex" (for /sf fast)
service_tier: priority # "priority" or "flex" (for /fast)
# Diagnostics
forensics_dedup: true # deduplicate before filing forensics issues
@ -16,7 +16,7 @@ Data is stored in `.sf/metrics.json` and survives across sessions.
### Viewing Costs
**Dashboard:** `Ctrl+Alt+G` or `/sf status` shows real-time cost breakdown.
**Dashboard:** `Ctrl+Alt+G` or `/status` shows real-time cost breakdown.
**Aggregations available:**
- By phase (research, planning, execution, completion, reassessment)
@ -85,9 +85,9 @@ See [Token Optimization](./token-optimization.md) for details.
## Tips
- Start with `balanced` profile and a generous `budget_ceiling` to establish baseline costs
- Check `/sf status` after a few slices to see per-slice cost averages
- Check `/status` after a few slices to see per-slice cost averages
- Switch to `budget` profile for well-understood, repetitive work
- Use `quality` only when architectural decisions are being made
- Per-phase model selection lets you use Opus only for planning while keeping execution on Sonnet
- Enable `dynamic_routing` for automatic model downgrading on simple tasks — see [Dynamic Model Routing](./dynamic-model-routing.md)
- Use `/sf visualize` → Metrics tab to see where your budget is going
- Use `/visualize` → Metrics tab to see where your budget is going
@ -168,9 +168,9 @@ Or configure per-phase models in preferences — see [Configuration](./configura
## Two Ways to Work
### Step Mode — `/sf`
### Assisted Mode — `/next`
Type `/sf` inside a session. SF executes one unit of work at a time, pausing between each with a wizard showing what completed and what's next.
Type `/next` inside a session. SF executes one unit of work at a time, pausing between each with a wizard showing what completed and what's next.
- **No `.sf/` directory** — starts a discussion flow to capture your project vision
- **Milestone exists, no roadmap** — discuss or research the milestone
@ -179,12 +179,12 @@ Type `/sf` inside a session. SF executes one unit of work at a time, pausing bet
Assisted mode keeps you in the loop, reviewing output between each step.
### Autonomous Mode — `/sf autonomous`
### Autonomous Mode — `/autonomous`
Type `/sf autonomous` and walk away. SF researches, plans, executes, verifies, commits, and advances through every slice until the milestone is complete. `/sf autonomous` remains available as a short alias.
Type `/autonomous` and walk away. SF researches, plans, executes, verifies, commits, and advances through every slice until the milestone is complete.
```
/sf autonomous
/autonomous
```
See [Autonomous Mode](./autonomous-mode.md) for full details.
@ -199,16 +199,16 @@ Run autonomous mode in one terminal, steer from another.
```bash
sf
/sf autonomous
/autonomous
```
**Terminal 2 — steer while it works:**
```bash
sf
/sf discuss # talk through architecture decisions
/sf status # check progress
/sf queue # queue the next milestone
/discuss # talk through architecture decisions
/status # check progress
/queue # queue the next milestone
```
Both terminals read and write the same `.sf/` files. Decisions in terminal 2 are picked up at the next phase boundary automatically.
@ -296,7 +296,7 @@ npm update -g singularity-forge
Or from within a session:
```
/sf update
/update
```
---
@ -180,7 +180,7 @@ SF includes automatic recovery for common git issues:
- **Stale lock files** — removes `index.lock` files from crashed processes
- **Orphaned worktrees** — detects and offers to clean up abandoned worktrees (worktree mode only)
Run `/sf doctor` to check git health manually.
Run `/doctor` to check git health manually.
## Native Git Operations
@ -6,10 +6,10 @@ If you have projects with `.planning` directories from the original Singularity
```bash
# From within the project directory
/sf migrate
/migrate
# Or specify a path
/sf migrate ~/projects/my-old-project
/migrate ~/projects/my-old-project
```
## What Gets Migrated
@ -42,7 +42,7 @@ Migration works best with a `ROADMAP.md` file for milestone structure. Without o
After migrating, verify the output with:
```
/sf doctor
/doctor
```
This checks `.sf/` integrity and flags any structural issues.
@ -19,7 +19,7 @@ parallel:
2. Start parallel execution:
```
/sf parallel start
/parallel start
```
SF scans your milestones, checks dependencies and file overlap, shows an eligibility report, and spawns workers for eligible milestones.
@ -27,13 +27,13 @@ SF scans your milestones, checks dependencies and file overlap, shows an eligibi
3. Monitor progress:
```
/sf parallel status
/parallel status
```
4. Stop when done:
```
/sf parallel stop
/parallel stop
```
## How It Works
@ -143,26 +143,26 @@ parallel:
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `enabled` | boolean | `false` | Master toggle. Must be `true` for `/sf parallel` commands to work. |
| `enabled` | boolean | `false` | Master toggle. Must be `true` for `/parallel` commands to work. |
| `max_workers` | number (1-4) | `2` | Maximum concurrent worker processes. Higher values use more memory and API budget. |
| `budget_ceiling` | number | none | Aggregate cost ceiling in USD across all workers. When reached, no new units are dispatched. |
| `merge_strategy` | `"per-slice"` or `"per-milestone"` | `"per-milestone"` | When worktree changes merge back to main. Per-milestone waits for the full milestone to complete. |
| `auto_merge` | `"auto"`, `"confirm"`, `"manual"` | `"confirm"` | How merge-back is handled. `confirm` prompts before merging. `manual` requires explicit `/sf parallel merge`. |
| `auto_merge` | `"auto"`, `"confirm"`, `"manual"` | `"confirm"` | How merge-back is handled. `confirm` prompts before merging. `manual` requires explicit `/parallel merge`. |
## Commands
| Command | Description |
|---------|-------------|
| `/sf parallel start` | Analyze eligibility, confirm, and start workers |
| `/sf parallel status` | Show all workers with state, units completed, and cost |
| `/sf parallel stop` | Stop all workers (sends SIGTERM) |
| `/sf parallel stop M002` | Stop a specific milestone's worker |
| `/sf parallel pause` | Pause all workers (finish current unit, then wait) |
| `/sf parallel pause M002` | Pause a specific worker |
| `/sf parallel resume` | Resume all paused workers |
| `/sf parallel resume M002` | Resume a specific worker |
| `/sf parallel merge` | Merge all completed milestones back to main |
| `/sf parallel merge M002` | Merge a specific milestone back to main |
| `/parallel start` | Analyze eligibility, confirm, and start workers |
| `/parallel status` | Show all workers with state, units completed, and cost |
| `/parallel stop` | Stop all workers (sends SIGTERM) |
| `/parallel stop M002` | Stop a specific milestone's worker |
| `/parallel pause` | Pause all workers (finish current unit, then wait) |
| `/parallel pause M002` | Pause a specific worker |
| `/parallel resume` | Resume all paused workers |
| `/parallel resume M002` | Resume a specific worker |
| `/parallel merge` | Merge all completed milestones back to main |
| `/parallel merge M002` | Merge a specific milestone back to main |
## Signal Lifecycle
@ -201,12 +201,12 @@ When milestones complete, their worktree changes need to merge back to main.
### Conflict Handling
1. `.sf/` state files (STATE.md, metrics.json, etc.) — **auto-resolved** by accepting the milestone branch version
2. Code conflicts — **stop and report**. The merge halts, showing which files conflict. Resolve manually and retry with `/sf parallel merge <MID>`.
2. Code conflicts — **stop and report**. The merge halts, showing which files conflict. Resolve manually and retry with `/parallel merge <MID>`.
### Example
```
/sf parallel merge
/parallel merge
# Merge Results
@ -214,7 +214,7 @@ When milestones complete, their worktree changes need to merge back to main.
- **M003** — CONFLICT (2 file(s)):
- `src/types.ts`
- `src/middleware.ts`
Resolve conflicts manually and run `/sf parallel merge M003` to retry.
Resolve conflicts manually and run `/parallel merge M003` to retry.
```
## Budget Management
@ -229,11 +229,11 @@ When `budget_ceiling` is set, the coordinator tracks aggregate cost across all w
### Doctor Integration
`/sf doctor` detects parallel session issues:
`/doctor` detects parallel session issues:
- **Stale parallel sessions** — Worker process died without cleanup. Doctor finds `.sf/parallel/*.status.json` files with dead PIDs or expired heartbeats and removes them.
Run `/sf doctor --fix` to clean up automatically.
Run `/doctor --fix` to clean up automatically.
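The liveness test behind this check can be sketched as follows. The status-file fields (`pid`, `heartbeatAt`) and the TTL parameter are assumptions for illustration:

```typescript
// Signal 0 probes whether a PID exists without actually signaling it.
function isPidAlive(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false;
  }
}

// A session entry is stale when its recorded PID is dead or its
// heartbeat timestamp has expired.
function isStaleSession(
  status: { pid: number; heartbeatAt: number },
  ttlMs: number,
  now: number = Date.now(),
): boolean {
  return !isPidAlive(status.pid) || now - status.heartbeatAt > ttlMs;
}
```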
### Stale Detection
@ -288,22 +288,22 @@ Set `parallel.enabled: true` in your preferences file.
### "No milestones are eligible for parallel execution"
All milestones are either complete or blocked by dependencies. Check `/sf queue` to see milestone status and dependency chains.
All milestones are either complete or blocked by dependencies. Check `/queue` to see milestone status and dependency chains.
### Worker crashed — how to recover
Workers now persist their state to disk automatically. If a worker process dies, the coordinator detects the dead PID via heartbeat expiry and marks the worker as crashed. On restart, the worker picks up from disk state — crash recovery, worktree re-entry, and completed-unit tracking carry over from the crashed session.
1. Run `/sf doctor --fix` to clean up stale sessions
2. Run `/sf parallel status` to see current state
3. Re-run `/sf parallel start` to spawn new workers for remaining milestones
1. Run `/doctor --fix` to clean up stale sessions
2. Run `/parallel status` to see current state
3. Re-run `/parallel start` to spawn new workers for remaining milestones
### Merge conflicts after parallel completion
1. Run `/sf parallel merge` to see which milestones have conflicts
1. Run `/parallel merge` to see which milestones have conflicts
2. Resolve conflicts in the worktree at `.sf/worktrees/<MID>/`
3. Retry with `/sf parallel merge <MID>`
3. Retry with `/parallel merge <MID>`
### Workers seem stuck
Check if budget ceiling was reached: `/sf parallel status` shows per-worker costs. Increase `parallel.budget_ceiling` or remove it to continue.
Check if budget ceiling was reached: `/parallel status` shows per-worker costs. Increase `parallel.budget_ceiling` or remove it to continue.
@ -70,7 +70,7 @@ Or run `sf config` and paste your key when prompted.
**Runtime boundary:** SF may use Claude Code, Codex, or Gemini CLI core as
model/runtime adapters when explicitly configured. These adapters are not project
MCP dependencies, and SF does not expose its own workflow as an MCP server. Run
SF directly with `sf` or `/sf autonomous`; reserve MCP configuration for external
SF directly with `sf` or `/autonomous`; reserve MCP configuration for external
tools that SF may call.
### OpenAI
@ -596,4 +596,4 @@ If the model doesn't appear, check:
- `models.json` is valid JSON (use `cat ~/.sf/agent/models.json | python3 -m json.tool`)
- The server is running (for local providers)
For additional help, see [Troubleshooting](./troubleshooting.md) or run `/sf doctor` inside a session.
For additional help, see [Troubleshooting](./troubleshooting.md) or run `/doctor` inside a session.
@ -7,7 +7,7 @@ Remote questions allow SF to ask for user input via Slack, Discord, or Telegram
### Discord
```
/sf remote discord
/remote discord
```
The setup wizard:
@ -30,7 +30,7 @@ The setup wizard:
### Slack
```
/sf remote slack
/remote slack
```
The setup wizard:
@ -48,7 +48,7 @@ The setup wizard:
### Telegram
```
/sf remote telegram
/remote telegram
```
The setup wizard:
@ -105,11 +105,11 @@ If no response is received within `timeout_minutes`, the prompt times out and SF
| Command | Description |
|---------|-------------|
| `/sf remote` | Show remote questions menu and current status |
| `/sf remote slack` | Set up Slack integration |
| `/sf remote discord` | Set up Discord integration |
| `/sf remote status` | Show current configuration and last prompt status |
| `/sf remote disconnect` | Remove remote questions configuration |
| `/remote` | Show remote questions menu and current status |
| `/remote slack` | Set up Slack integration |
| `/remote discord` | Set up Discord integration |
| `/remote status` | Show current configuration and last prompt status |
| `/remote disconnect` | Remove remote questions configuration |
## Discord vs Slack Feature Comparison
@ -155,13 +155,13 @@ Every autonomous mode unit records which skills were available and actively load
### Skill Health Dashboard
View skill performance with `/sf skill-health`:
View skill performance with `/skill-health`:
```
/sf skill-health # overview table: name, uses, success%, tokens, trend, last used
/sf skill-health rust-core # detailed view for one skill
/sf skill-health --stale 30 # skills unused for 30+ days
/sf skill-health --declining # skills with falling success rates
/skill-health # overview table: name, uses, success%, tokens, trend, last used
/skill-health rust-core # detailed view for one skill
/skill-health --stale 30 # skills unused for 30+ days
/skill-health --declining # skills with falling success rates
```
The dashboard flags skills that may need attention:
@ -176,12 +176,12 @@ SF tracks the success and failure of each tier assignment over time and adjusts
### User Feedback
Use `/sf rate` to submit feedback on the last completed unit's model tier:
Use `/rate` to submit feedback on the last completed unit's model tier:
```
/sf rate over # model was overpowered — encourage cheaper next time
/sf rate ok # model was appropriate — no adjustment
/sf rate under # model was too weak — encourage stronger next time
/rate over # model was overpowered — encourage cheaper next time
/rate ok # model was appropriate — no adjustment
/rate under # model was too weak — encourage stronger next time
```
Feedback signals are weighted 2× compared to automatic outcomes. Requires dynamic routing to be active (the last unit must have tier data).
@ -1,11 +1,11 @@
# Troubleshooting
## `/sf doctor`
## `/doctor`
The built-in diagnostic tool validates `.sf/` integrity:
```
/sf doctor
/doctor
```
It checks:
@ -25,13 +25,13 @@ It checks:
- Stale cache after a crash — the in-memory file listing doesn't reflect new artifacts
- The LLM didn't produce the expected artifact file
**Fix:** Run `/sf doctor` to repair state, then resume with `/sf autonomous`. If the issue persists, check that the expected artifact file exists on disk.
**Fix:** Run `/doctor` to repair state, then resume with `/autonomous`. If the issue persists, check that the expected artifact file exists on disk.
### Autonomous mode stops with "Loop detected"
**Cause:** A unit failed to produce its expected artifact twice in a row.
**Fix:** Check the task plan for clarity. If the plan is ambiguous, refine it manually, then `/sf autonomous` to resume.
**Fix:** Check the task plan for clarity. If the plan is ambiguous, refine it manually, then `/autonomous` to resume.
### Wrong files in worktree
@ -58,7 +58,7 @@ echo 'export PATH="$(npm prefix -g)/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```
**Workaround:** Run `npx singularity-forge` or `$(npm prefix -g)/bin/sf` directly.
**Workaround:** Run `npx singularity-forge` or `$(npm prefix -g)/bin/sf` directly.
**Common causes:**
- **Version manager (nvm, fnm, mise)** — global bin is version-specific; ensure your version manager initializes in your shell config
@ -93,7 +93,7 @@ models:
- openrouter/minimax/minimax-m2.5
```
**Machine surface:** `sf headless autonomous` auto-restarts the entire process on crash (default 3 attempts with exponential backoff). Combined with provider error auto-resume, this enables true overnight unattended execution.
**Machine surface:** `sf headless autonomous` restarts the process automatically on crash (default 3 attempts with exponential backoff). Combined with provider error recovery, this enables true overnight unattended execution.
For common provider setup issues (role errors, streaming errors, model ID mismatches), see the [Provider Setup Guide — Common Pitfalls](./providers.md#common-pitfalls).
@ -101,13 +101,13 @@ For common provider setup issues (role errors, streaming errors, model ID mismat
**Symptoms:** Autonomous mode pauses with "Budget ceiling reached."
**Fix:** Increase `budget_ceiling` in preferences, or switch to `budget` token profile to reduce per-unit cost, then resume with `/sf autonomous`.
**Fix:** Increase `budget_ceiling` in preferences, or switch to `budget` token profile to reduce per-unit cost, then resume with `/autonomous`.
### Stale lock file
**Symptoms:** Autonomous mode won't start, says another session is running.
**Fix:** SF automatically detects stale locks — if the owning PID is dead, the lock is cleaned up and re-acquired on the next `/sf autonomous`. This includes stranded `.sf.lock/` directories left by `proper-lockfile` after crashes. If automatic recovery fails, delete `.sf/auto.lock` and the `.sf.lock/` directory manually:
**Fix:** SF automatically detects stale locks — if the owning PID is dead, the lock is cleaned up and re-acquired on the next `/autonomous`. This includes stranded `.sf.lock/` directories left by `proper-lockfile` after crashes. If automatic recovery fails, delete `.sf/auto.lock` and the `.sf.lock/` directory manually:
```bash
rm -f .sf/auto.lock
@ -122,7 +122,7 @@ rm -rf "$(dirname .sf)/.sf.lock"
### Pre-dispatch says the milestone integration branch no longer exists
**Symptoms:** Autonomous mode or `/sf doctor` reports that a milestone recorded an integration branch that no longer exists in git.
**Symptoms:** Autonomous mode or `/doctor` reports that a milestone recorded an integration branch that no longer exists in git.
**What it means:** The milestone's `.sf/milestones/<MID>/<MID>-META.json` still points at the branch that was active when the milestone started, but that branch has since been renamed or deleted.
@ -131,11 +131,11 @@ rm -rf "$(dirname .sf)/.sf.lock"
- Safe fallbacks are:
- explicit `git.main_branch` when configured and present
- the repo's detected default integration branch (for example `main` or `master`)
- In that case `/sf doctor` reports a warning and `/sf doctor fix` rewrites the stale metadata to the effective branch.
- In that case `/doctor` reports a warning and `/doctor fix` rewrites the stale metadata to the effective branch.
- SF still blocks when no safe fallback branch can be determined.
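The resolution order above can be sketched as a pure function. The branch listing is passed in, and the function is illustrative only — not SF's actual implementation:

```typescript
// Resolution order: the recorded branch if it still exists, then the
// configured git.main_branch, then the repo's detected default branch.
// A null result means no safe fallback exists and SF blocks.
function resolveIntegrationBranch(
  recorded: string,
  configuredMain: string | undefined,
  detectedDefault: string | undefined,
  branches: Set<string>,
): string | null {
  if (branches.has(recorded)) return recorded;
  if (configuredMain !== undefined && branches.has(configuredMain)) return configuredMain;
  if (detectedDefault !== undefined && branches.has(detectedDefault)) return detectedDefault;
  return null;
}
```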
**Fix:**
- Run `/sf doctor fix` to rewrite the stale milestone metadata automatically when the fallback is obvious.
- Run `/doctor fix` to rewrite the stale milestone metadata automatically when the fallback is obvious.
- If SF still blocks, recreate the missing branch or update your git preferences so `git.main_branch` points at a real branch.
### Transient `EBUSY` / `EPERM` / `EACCES` while writing `.sf/` files
@ -149,7 +149,7 @@ rm -rf "$(dirname .sf)/.sf.lock"
**Fix:**
- Re-run the operation; most transient lock races clear quickly.
- If the error persists, close tools that may be holding the file open and then retry.
- If repeated failures continue, run `/sf doctor` to confirm the repo state is still healthy and report the exact path + error code.
- If repeated failures continue, run `/doctor` to confirm the repo state is still healthy and report the exact path + error code.
### Node v24 web boot failure
@ -256,11 +256,11 @@ rm -rf "$(dirname .sf)/.sf.lock"
- Set required environment variables in the MCP config's `env` block
- If needed, set `cwd` explicitly in the server definition
### Session lock stolen by `/sf` in another terminal
### Session lock stolen by `/next` in another terminal
**Symptoms:** Running `/sf` (assisted mode) in a second terminal causes a running autonomous mode session to lose its lock.
**Symptoms:** Running `/next` (assisted mode) in a second terminal causes a running autonomous mode session to lose its lock.
**Fix:** Fixed in v2.36.0. Bare `/sf` no longer steals the session lock from a running autonomous mode session. Upgrade to the latest version.
**Fix:** Fixed in v2.36.0. Bare `/next` no longer steals the session lock from a running autonomous mode session. Upgrade to the latest version.
### Worktree commits landing on main instead of milestone branch
@ -285,7 +285,7 @@ rm .sf/auto.lock
rm .sf/completed-units.json
```
Then `/sf autonomous` to restart from current disk state.
Then `/autonomous` to restart from current disk state.
### Reset routing history
@ -298,7 +298,7 @@ rm .sf/routing-history.json
### Full state rebuild
```
/sf doctor
/doctor
```
Doctor derives current state from the DB-backed runtime model when available, regenerates projections such as `STATE.md`, and fixes detected inconsistencies. File-based plan and roadmap parsing is only a recovery path for unmigrated or damaged state.
@ -306,8 +306,8 @@ Doctor derives current state from the DB-backed runtime model when available, re
## Getting Help
- **GitHub Issues:** [github.com/singularity-ng/singularity-forge/issues](https://github.com/singularity-ng/singularity-forge/issues)
- **Dashboard:** `Ctrl+Alt+G` or `/sf status` for real-time diagnostics
- **Forensics:** `/sf forensics` for structured post-mortem analysis of autonomous mode failures
- **Dashboard:** `Ctrl+Alt+G` or `/status` for real-time diagnostics
- **Forensics:** `/forensics` for structured post-mortem analysis of autonomous mode failures
- **Session logs:** `.sf/activity/` contains JSONL session dumps for crash forensics
## Database Issues
@ -316,7 +316,7 @@ Doctor derives current state from the DB-backed runtime model when available, re
**Symptoms:** `sf_decision_save`, `sf_requirement_update`, or `sf_summary_save` fail with this error.
**Cause:** The SQLite database wasn't initialized. This happens in manual `/sf` sessions (non-autonomous mode) on versions before v2.29.
**Cause:** The SQLite database wasn't initialized. This happens in manual `/next` sessions (non-autonomous mode) on versions before v2.29.
**Fix:** Updated in v2.29+ to auto-initialize the database on first tool call. Upgrade to the latest version.
@ -7,7 +7,7 @@ The workflow visualizer is a full-screen TUI overlay that shows project progress
## Opening the Visualizer
```
/sf visualize
/visualize
```
Or configure automatic display after milestone completion:
@ -89,7 +89,7 @@ The visualizer refreshes data from disk every 2 seconds, so it stays current if
## HTML Export (v2.26)
For shareable reports outside the terminal, use `/sf export --html`. This generates a self-contained HTML file in `.sf/reports/` with the same data as the TUI visualizer — progress tree, dependency graph (SVG DAG), cost/token bar charts, execution timeline, changelog, and knowledge base. All CSS and JS are inlined — no external dependencies. Printable to PDF from any browser.
For shareable reports outside the terminal, use `/export --html`. This generates a self-contained HTML file in `.sf/reports/` with the same data as the TUI visualizer — progress tree, dependency graph (SVG DAG), cost/token bar charts, execution timeline, changelog, and knowledge base. All CSS and JS are inlined — no external dependencies. Printable to PDF from any browser.
An auto-generated `index.html` shows all reports with progression metrics across milestones.
@ -101,22 +101,48 @@ describe("Discovery adapter resolution", () => {
// ─── AuthStorage hasAuth for discovery ───────────────────────────────────────
function withoutProviderEnvAuth(fn: () => void): void {
const original = {
OPENAI_API_KEY: process.env.OPENAI_API_KEY,
OLLAMA_API_KEY: process.env.OLLAMA_API_KEY,
ZAI_API_KEY: process.env.ZAI_API_KEY,
};
delete process.env.OPENAI_API_KEY;
delete process.env.OLLAMA_API_KEY;
delete process.env.ZAI_API_KEY;
try {
fn();
} finally {
for (const [key, value] of Object.entries(original)) {
if (value === undefined) {
delete process.env[key];
} else {
process.env[key] = value;
}
}
}
}
describe("AuthStorage — hasAuth for discovery providers", () => {
it("returns false for providers without auth", () => {
const storage = AuthStorage.inMemory({});
assert.equal(storage.hasAuth("openai"), false);
assert.equal(storage.hasAuth("ollama"), false);
assert.equal(storage.hasAuth("zai"), false);
withoutProviderEnvAuth(() => {
const storage = AuthStorage.inMemory({});
assert.equal(storage.hasAuth("openai"), false);
assert.equal(storage.hasAuth("ollama"), false);
assert.equal(storage.hasAuth("zai"), false);
});
});
it("returns true for providers with stored keys", () => {
const storage = AuthStorage.inMemory({
openai: { type: "api_key" as const, key: "sk-test" },
zai: { type: "api_key" as const, key: "zai-test" },
withoutProviderEnvAuth(() => {
const storage = AuthStorage.inMemory({
openai: { type: "api_key" as const, key: "sk-test" },
zai: { type: "api_key" as const, key: "zai-test" },
});
assert.equal(storage.hasAuth("openai"), true);
assert.equal(storage.hasAuth("ollama"), false);
assert.equal(storage.hasAuth("zai"), true);
});
assert.equal(storage.hasAuth("openai"), true);
assert.equal(storage.hasAuth("ollama"), false);
assert.equal(storage.hasAuth("zai"), true);
});
});
@ -9,7 +9,34 @@ const manifestPath = join(sfRoot, "extension-manifest.json");
const RESOURCE_SOURCE_RE = /\.(?:js|mjs|cjs|json|md|yaml|yml|d\.ts)$/;
const DYNAMIC_TOOL_NAMES = ["bash", "edit", "read", "write"];
const DIRECT_COMMAND_NAMES = ["exit", "kill", "sf", "worktree", "wt"];
const BASE_DIRECT_COMMAND_NAMES = ["exit", "kill", "wt"];
const BASE_RUNTIME_COMMAND_NAMES = new Set([
"settings",
"model",
"scoped-models",
"export",
"share",
"copy",
"name",
"session",
"changelog",
"hotkeys",
"fork",
"tree",
"provider",
"login",
"logout",
"new",
"compact",
"resume",
"reload",
"thinking",
"edit-mode",
"terminal",
"stop",
"exit",
"quit",
]);
const HIDDEN_OR_ALIAS_SUBCOMMANDS = new Set([
"?",
"auto",
@ -213,6 +240,14 @@ function main() {
const manifest = parseManifest();
const registeredTools = parseRegisteredTools();
const catalogCommands = parseTopLevelCatalogCommands();
const directCommandNames = uniqueSorted(
BASE_DIRECT_COMMAND_NAMES.concat(
catalogCommands.filter(
(command) => !BASE_RUNTIME_COMMAND_NAMES.has(command),
),
),
);
const missingManifestTools = registeredTools.filter(
(tool) => !manifest.tools.includes(tool),
);
@ -236,11 +271,11 @@ function main() {
);
}
const missingManifestCommands = DIRECT_COMMAND_NAMES.filter(
const missingManifestCommands = directCommandNames.filter(
(command) => !manifest.commands.includes(command),
);
const staleManifestCommands = manifest.commands.filter(
(command) => !DIRECT_COMMAND_NAMES.includes(command),
(command) => !directCommandNames.includes(command),
);
if (missingManifestCommands.length > 0) {
failures.push(
@ -259,7 +294,6 @@ function main() {
);
}
const catalogCommands = parseTopLevelCatalogCommands();
const handledCommands = parseHandledTopLevelCommands().filter(
(command) => !HIDDEN_OR_ALIAS_SUBCOMMANDS.has(command),
);
@ -272,7 +306,7 @@ function main() {
if (missingCatalogCommands.length > 0) {
failures.push(
failSection(
"Handled /sf commands missing from TOP_LEVEL_SUBCOMMANDS",
"Handled SF commands missing from TOP_LEVEL_SUBCOMMANDS",
missingCatalogCommands,
),
);
@ -280,7 +314,7 @@ function main() {
if (unroutedCatalogCommands.length > 0) {
failures.push(
failSection(
"Catalog /sf commands with no routed handler",
"Catalog SF commands with no routed handler",
unroutedCatalogCommands,
),
);
@ -292,7 +326,7 @@ function main() {
}
console.log(
`SF extension inventory OK: ${registeredTools.length} tools, ${DIRECT_COMMAND_NAMES.length} direct commands, ${catalogCommands.length} /sf subcommands.`,
`SF extension inventory OK: ${registeredTools.length} tools, ${directCommandNames.length} direct commands, ${catalogCommands.length} catalog commands.`,
);
}
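
The derivation added in this hunk can be exercised standalone. This is a minimal sketch, assuming `uniqueSorted` dedupes and sorts; the runtime-command set is abbreviated here for illustration (the real set is the full list above):

```javascript
// Sketch of the inventory check's new derivation: direct commands are the
// fixed base names plus every catalog command that is not runtime-only,
// deduplicated and sorted.
const BASE_DIRECT_COMMAND_NAMES = ["exit", "kill", "wt"];
// Abbreviated for illustration — the real set lists all runtime commands.
const BASE_RUNTIME_COMMAND_NAMES = new Set(["settings", "model", "stop", "exit", "quit"]);

function uniqueSorted(names) {
  return [...new Set(names)].sort();
}

function deriveDirectCommandNames(catalogCommands) {
  return uniqueSorted(
    BASE_DIRECT_COMMAND_NAMES.concat(
      catalogCommands.filter((command) => !BASE_RUNTIME_COMMAND_NAMES.has(command)),
    ),
  );
}

// deriveDirectCommandNames(["model", "doctor", "autonomous", "exit"])
// → ["autonomous", "doctor", "exit", "kill", "wt"]
// ("model" and "exit" are runtime-only, so only "doctor"/"autonomous" are added)
```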

View file

@ -201,12 +201,12 @@ export function isPauseNotification(event: Record<string, unknown>): boolean {
);
}
export function isAutoResumeScheduledNotification(
export function isScheduledResumeNotification(
event: Record<string, unknown>,
): boolean {
if (event.type !== "extension_ui_request" || event.method !== "notify")
return false;
return /auto-resuming in \d+s/i.test(String(event.message ?? ""));
return /resuming automatically in \d+s/i.test(String(event.message ?? ""));
}
export function isBlockedNotification(event: Record<string, unknown>): boolean {

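
The renamed predicate is small enough to sketch in isolation (untyped here; the real module is TypeScript). Only the event shape and regex shown above are assumed:

```javascript
// Sketch of the renamed predicate: matches only notify requests whose
// message uses the new "resuming automatically in Ns" wording.
function isScheduledResumeNotification(event) {
  if (event.type !== "extension_ui_request" || event.method !== "notify")
    return false;
  return /resuming automatically in \d+s/i.test(String(event.message ?? ""));
}

const scheduled = isScheduledResumeNotification({
  type: "extension_ui_request",
  method: "notify",
  message: "Provider overloaded: resuming automatically in 30s",
});
// scheduled === true; the old "auto-resuming in 30s" wording no longer matches.
```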
View file

@ -42,13 +42,13 @@ import {
EXIT_SUCCESS,
FIRE_AND_FORGET_METHODS,
IDLE_TIMEOUT_MS,
isAutoResumeScheduledNotification,
isBlockedNotification,
isInteractiveHeadlessTool,
isMilestoneReadyNotification,
isMilestoneReadyText,
isPauseNotification,
isQuickCommand,
isScheduledResumeNotification,
isTerminalNotification,
MULTI_TURN_DEADLOCK_BACKSTOP_MS,
mapStatusToExitCode,
@ -193,7 +193,7 @@ export interface HeadlessOptions {
contextText?: string; // inline text
chainAutonomous?: boolean; // chain into autonomous mode after milestone creation
verbose?: boolean; // show tool calls in output
maxRestarts?: number; // auto-restart on crash (default 3, 0 to disable)
maxRestarts?: number; // automatic restart on crash (default 3, 0 to disable)
supervised?: boolean; // supervised mode: forward interactive requests to orchestrator
responseTimeout?: number; // timeout for orchestrator response (default 30000ms)
answers?: string; // path to answers JSON file
@ -828,7 +828,7 @@ async function runHeadlessOnce(
process.stderr.write(`[headless] doctor failed: ${msg}\n`);
exitCode = 1;
}
// Bypass the auto-restart loop in runHeadless — doctor is a one-shot
// Bypass the automatic restart loop in runHeadless — doctor is a one-shot
// diagnostic; exit 1 means "issues detected", not "crashed".
process.exit(exitCode);
}
@ -1628,7 +1628,7 @@ async function runHeadlessOnce(
const waitForProviderAutoResume =
providerAutoResumePending && isPauseNotification(eventObj);
if (isAutoResumeScheduledNotification(eventObj)) {
if (isScheduledResumeNotification(eventObj)) {
providerAutoResumePending = true;
}

View file

@ -1,5 +1,5 @@
/**
* Direct phase dispatch handles manual /sf dispatch commands.
* Direct phase dispatch handles manual /dispatch commands.
* Resolves phase name unit type + prompt, creates a session, and sends the message.
*/
import { pauseAuto } from "./auto.js";
@ -62,7 +62,7 @@ export async function dispatchDirectPhase(ctx, pi, phase, base) {
?.require_slice_discussion;
if (requireDiscussion && !sliceContextFile) {
ctx.ui.notify(
`Slice ${sid} requires discussion before planning. Run /sf discuss to discuss this slice, then /sf autonomous to resume.`,
`Slice ${sid} requires discussion before planning. Run /discuss to discuss this slice, then /autonomous to resume.`,
"info",
);
await pauseAuto(ctx, pi);

View file

@ -115,7 +115,7 @@ const PARALLEL_RESEARCH_BLOCKING_PHASES = new Set([
function missingSliceStop(mid, phase) {
return {
action: "stop",
reason: `${mid}: phase "${phase}" has no active slice — run /sf doctor.`,

reason: `${mid}: phase "${phase}" has no active slice — run /doctor.`,
level: "error",
};
}
@ -149,7 +149,7 @@ async function readMilestoneValidationForDispatch(basePath, mid) {
function canonicalPlanStop(mid, plan) {
return {
action: "stop",
reason: `${mid}: canonical milestone plan unavailable (${plan.source}): ${plan.reason} Run /sf doctor or regenerate structured roadmap state before dispatching autonomous mode work.`,
reason: `${mid}: canonical milestone plan unavailable (${plan.source}): ${plan.reason} Run /doctor or regenerate structured roadmap state before dispatching autonomous mode work.`,
level: "error",
};
}
@ -563,7 +563,7 @@ export const DISPATCH_RULES = [
// ADR-011 Phase 2 (SF ADR): mid-execution escalation handling.
// Autonomous mode is autonomous, so by default we accept the agent's
// recommendation and continue — the user can review/override later via
// `/sf escalate list --all`. Set `phases.escalation_auto_accept: false`
// `/escalate list --all`. Set `phases.escalation_auto_accept: false`
// to keep SF's pause-and-ask behavior.
// Must evaluate FIRST — phase-agnostic rules below (rewrite-docs gate,
// UAT checks, reassess) cannot run while a task is paused.
@ -583,7 +583,7 @@ export const DISPATCH_RULES = [
state.activeSlice.id,
state.activeTask.id,
"accept",
"autonomous mode: accepted agent recommendation; user can override via /sf escalate",
"autonomous mode: accepted agent recommendation; user can override via /escalate",
"autonomous mode",
);
if (result.status === "resolved") {
@ -601,7 +601,7 @@ export const DISPATCH_RULES = [
action: "stop",
reason:
state.nextAction ||
`${mid}: task escalation awaits user resolution. Run /sf escalate list to see pending items.`,
`${mid}: task escalation awaits user resolution. Run /escalate list to see pending items.`,
level: "info",
};
},
@ -783,7 +783,7 @@ export const DISPATCH_RULES = [
) {
return {
action: "stop",
reason: `UAT verdict for ${check.sliceId} is "${check.verdict}" — blocking progression until resolved.\nReview the UAT result and update the verdict to PASS, or re-run /sf auto after fixing.`,
reason: `UAT verdict for ${check.sliceId} is "${check.verdict}" — blocking progression until resolved.\nReview the UAT result and update the verdict to PASS, or re-run /auto after fixing.`,
level: "warning",
};
}
@ -1602,7 +1602,7 @@ export const DISPATCH_RULES = [
if (missingSlices.length > 0) {
return {
action: "stop",
reason: `Cannot complete milestone ${mid}: slices ${missingSlices.join(", ")} are missing SUMMARY files. Run /sf doctor to diagnose.`,
reason: `Cannot complete milestone ${mid}: slices ${missingSlices.join(", ")} are missing SUMMARY files. Run /doctor to diagnose.`,
level: "error",
};
}
@ -1844,7 +1844,7 @@ export async function resolveDispatch(ctx) {
// (e.g. after reassessment modifies the roadmap and state needs re-derivation).
const unhandled = {
action: "stop",
reason: `Unhandled phase "${ctx.state.phase}" — run /sf doctor to diagnose.`,
reason: `Unhandled phase "${ctx.state.phase}" — run /doctor to diagnose.`,
level: "warning",
matchedRule: "<no-match>",
};

View file

@ -69,7 +69,7 @@ export class ModelPolicyDispatchBlockedError extends Error {
// LIFECYCLE: the baseline is tied to a single auto session, NOT to the
// lifetime of the `pi` instance (which can outlive many auto runs and have
// the user mutate tools between them). `clearToolBaseline` MUST be called
// at auto start AND auto stop so that a second `/sf autonomous` run on the same
// at auto start AND auto stop so that a second `/autonomous` run on the same
// `pi` does not silently restore a stale snapshot from the prior run and
// undo any tool changes the user made between sessions.
const TOOL_BASELINE = new WeakMap();
@ -267,7 +267,7 @@ export async function selectAndApplyModel(
/** When false (interactive/guided-flow), skip dynamic routing and use the session model.
* Dynamic routing only applies in autonomous mode where cost optimization is expected. (#3962) */
isAutoMode = true,
/** Explicit /sf model pin captured at bootstrap for long-running auto loops. */
/** Explicit /model pin captured at bootstrap for long-running auto loops. */
sessionModelOverride,
/** Thinking level captured at autonomous mode start and re-applied after model swaps. */
autoModeStartThinkingLevel,
@ -615,7 +615,7 @@ export async function selectAndApplyModel(
}
// Skip models the provider has previously rejected for this account
// (issue #4513). The block is persisted in .sf/runtime/blocked-models.json
// so it survives /sf autonomous restarts — without this, the same dead model
// so it survives /autonomous restarts — without this, the same dead model
// gets reselected after every restart.
if (isModelBlocked(basePath, model.provider, model.id)) {
ctx.ui.notify(

View file

@ -268,15 +268,15 @@ export function detectRogueFileWrites(unitType, unitId, basePath) {
return rogues;
}
export const STEP_COMPLETE_FALLBACK_MESSAGE =
"Step complete. Run /clear, then /sf to continue (or /sf autonomous to run continuously).";
"Step complete. Run /clear, then /to continue (or /autonomous to run continuously).";
export function buildStepCompleteMessage(nextState) {
if (nextState.phase === "complete") {
return "Step complete — milestone finished. Run /sf status to review, or start the next milestone.";
return "Step complete — milestone finished. Run /status to review, or start the next milestone.";
}
const next = describeNextUnit(nextState);
return (
`Step complete. Next: ${next.label}\n` +
`Run /clear, then /sf to continue (or /sf autonomous to run continuously).`
`Run /clear, then /next to continue (or /autonomous to run continuously).`
);
}
export const USER_DRIVEN_DEEP_UNITS = new Set([
@ -826,7 +826,7 @@ export async function postUnitPreVerification(pctx, opts) {
if (err instanceof MergeConflictError) {
ctx.ui.notify(
`slice-cadence merge conflict in ${sid}: ${err.conflictedFiles.join(", ")}. ` +
`Resolve manually on main and run \`/sf autonomous\` to resume.`,
`Resolve manually on main and run \`/autonomous\` to resume.`,
"error",
);
// Stop auto AND signal the outer postUnit flow to exit early.
@ -1265,7 +1265,7 @@ export async function postUnitPreVerification(pctx, opts) {
s.verificationRetryCount.delete(retryKey);
s.pendingVerificationRetry = null;
ctx.ui.notify(
`Milestone ${s.currentUnit.id} verification failed after ${MAX_VERIFICATION_RETRIES} retries — worktree branch preserved. Re-run /sf autonomous once blockers are resolved.`,
`Milestone ${s.currentUnit.id} verification failed after ${MAX_VERIFICATION_RETRIES} retries — worktree branch preserved. Re-run /autonomous once blockers are resolved.`,
"error",
);
await pauseAuto(ctx, pi);
@ -1914,8 +1914,8 @@ export async function postUnitPostVerification(pctx) {
}
}
// Assisted mode → show wizard instead of dispatch.
// Without this notify(), /sf in assisted mode finishes a unit and silently
// exits the loop, leaving the user with no hint to /clear and /sf again.
// Without this notify(), /next in assisted mode finishes a unit and silently
// exits the loop, leaving the user with no hint to /clear and /next again.
if (s.stepMode) {
try {
const nextState = await deriveState(s.basePath);

View file

@ -1774,7 +1774,7 @@ export async function buildExecuteTaskPrompt(
"}",
"```",
"",
"Provide 24 options with concrete tradeoffs. The recommendation must reference one of the option ids. Autonomous mode accepts your recommendation, persists the choice + rationale as a memory, and carries it forward as a hard constraint for downstream tasks. The operator can review the audit trail later via `/sf escalate list --all`; the executed work itself can't be retroactively undone, so document your reasoning thoroughly. Set `continueWithDefault: false` only when the choice is severe enough that the loop should pause for human review even in autonomous mode (rare).",
"Provide 24 options with concrete tradeoffs. The recommendation must reference one of the option ids. Autonomous mode accepts your recommendation, persists the choice + rationale as a memory, and carries it forward as a hard constraint for downstream tasks. The operator can review the audit trail later via `/escalate list --all`; the executed work itself can't be retroactively undone, so document your reasoning thoroughly. Set `continueWithDefault: false` only when the choice is severe enough that the loop should pause for human review even in autonomous mode (rare).",
].join("\n")
: "";
// Apply knowledge injection for this task context

View file

@ -225,7 +225,7 @@ export function auditOrphanedMilestoneBranches(basePath, isolationMode) {
warnings.push(
`Branch ${branch} has ${commitsAhead} commit(s) ahead of ${mainBranch} for in-progress milestone ${milestoneId}.` +
wtSuffix +
` Run \`/sf autonomous\` to resume, or merge manually if abandoning.`,
` Run \`/autonomous\` to resume, or merge manually if abandoning.`,
);
// #4764 telemetry
try {
@ -302,7 +302,7 @@ export function auditOrphanedMilestoneBranches(basePath, isolationMode) {
// Branch is NOT merged — preserve for safety, warn the user
warnings.push(
`Branch ${branch} exists for completed milestone ${milestoneId} but is NOT merged into ${mainBranch}. ` +
`This may contain unmerged work. Merge manually or run \`/sf health --fix\` to resolve.`,
`This may contain unmerged work. Merge manually or run \`/doctor fix\` to resolve.`,
);
// #4764 telemetry
try {
@ -354,12 +354,12 @@ export async function bootstrapAutoSession(
// phase-specific planning model for a discuss turn (#2829).
//
// Precedence:
// 1) Explicit session override via /sf model (this session)
// 1) Explicit session override via /model (this session)
// 2) SF model preferences from PREFERENCES.md (validated against live auth)
// 3) Current session model from settings/session restore (if provider ready)
//
// This preserves #3517 defaults while honoring explicit runtime model
// selection for subsequent /sf runs in the same session.
// selection for subsequent /next runs in the same session.
//
// Exception (#4122): when the session provider is a custom provider declared
// in ~/.sf/agent/models.json (Ollama, vLLM, OpenAI-compatible proxy, etc.),
@ -618,7 +618,7 @@ export async function bootstrapAutoSession(
hasSurvivorBranch = false;
} else {
ctx.ui.notify(
"Discussion completed but milestone draft was not promoted. Run /sf to try again.",
"Discussion completed but milestone draft was not promoted. Run /next to try again.",
"warning",
);
return releaseLockAndReturn();
@ -656,7 +656,7 @@ export async function bootstrapAutoSession(
s.consecutiveCompleteBootstraps = 0;
ctx.ui.notify(
"All milestones are complete and the discussion didn't produce a new one. " +
"Run /sf to start a new milestone manually.",
"Run /next to start a new milestone manually.",
"warning",
);
return releaseLockAndReturn();
@ -849,14 +849,14 @@ export async function bootstrapAutoSession(
state = postState;
} else {
ctx.ui.notify(
"Discussion completed but milestone context is still missing. Run /sf to try again.",
"Discussion completed but milestone context is still missing. Run /next to try again.",
"warning",
);
return releaseLockAndReturn();
}
} else {
ctx.ui.notify(
"Discussion completed but milestone context is still missing. Run /sf to try again.",
"Discussion completed but milestone context is still missing. Run /next to try again.",
"warning",
);
return releaseLockAndReturn();
@ -876,7 +876,7 @@ export async function bootstrapAutoSession(
state = postState;
} else {
ctx.ui.notify(
"Discussion completed but milestone draft was not promoted. Run /sf to try again.",
"Discussion completed but milestone draft was not promoted. Run /next to try again.",
"warning",
);
return releaseLockAndReturn();

View file

@ -297,7 +297,7 @@ export async function recoverTimedOutUnit(
lastRecoveryReason: reason,
});
ctx.ui.notify(
`Milestone ${unitId} ${reason}-recovery exhausted ${maxRecoveryAttempts} attempt(s): ${diagnostic}. Worktree branch preserved. Re-run /sf autonomous once blockers are resolved.`,
`Milestone ${unitId} ${reason}-recovery exhausted ${maxRecoveryAttempts} attempt(s): ${diagnostic}. Worktree branch preserved. Re-run /autonomous once blockers are resolved.`,
"error",
);
return "paused";

View file

@ -112,7 +112,7 @@ export function getToolCallCountSnapshot() {
const TOOL_INVOCATION_ERROR_RE =
/Validation failed for tool|Expected ',' or '\}'(?: after property value)?(?: in JSON)?|Unexpected end of JSON|Unexpected token.*in JSON/i;
const DETERMINISTIC_POLICY_ERROR_RE =
/(?:^|\b)(?:HARD BLOCK:|Blocked: \/sf queue is a planning tool|Direct writes to \.sf\/STATE\.md and \.sf\/sf\.db are blocked|This is a mechanical gate)/i;
  /(?:^|\b)(?:HARD BLOCK:|Blocked: \/queue is a planning tool|Direct writes to \.sf\/STATE\.md and \.sf\/sf\.db are blocked|This is a mechanical gate)/i;
/**
* Known deterministic policy error substrings. Each entry is a stable string
* that will appear in the tool error text content when the corresponding

View file

@ -589,8 +589,8 @@ export function stopAutoRemote(projectRoot) {
/**
* Check if a remote autonomous mode session is running (from a different process).
* Reads the crash lock, checks PID liveness, and returns session details.
* Used by the guard in commands.ts to prevent bare /sf, /sf next, and
* /sf autonomous from stealing the session lock.
 * Used by the guard in commands.ts to prevent bare /next and
 * /autonomous from stealing the session lock.
*/
export function checkRemoteAutoSession(projectRoot) {
const lock = readCrashLock(projectRoot);
@ -705,7 +705,7 @@ function cleanupAfterLoopExit(ctx) {
clearUnitTimeout();
restoreProjectRootEnv();
restoreMilestoneLockEnv();
// Clear crash lock and release session lock so the next `/sf next` does
// Clear crash lock and release session lock so the next `/next` does
// not see a stale lock with the current PID and treat it as a "remote"
// session (which would cause it to SIGTERM itself). (#2730)
try {
@ -1109,7 +1109,7 @@ export async function stopAuto(ctx, pi, reason) {
`auto-exit telemetry failed: ${err instanceof Error ? err.message : String(err)}`,
);
}
// Drop the active-tool baseline so a subsequent /sf autonomous run on the
// Drop the active-tool baseline so a subsequent /autonomous run on the
// same `pi` instance recaptures from the live tool set rather than
// restoring this session's snapshot and silently undoing any tool
// changes the user made between sessions (#4959 / CodeRabbit).
@ -1120,7 +1120,7 @@ export async function stopAuto(ctx, pi, reason) {
}
/**
* Pause autonomous mode without destroying state. Context is preserved.
* The user can interact with the agent, then `/sf autonomous` resumes
* The user can interact with the agent, then `/autonomous` resumes
* from disk state. Called when the user presses Escape during autonomous mode.
*/
export async function pauseAuto(ctx, _pi, _errorContext) {
@ -1220,7 +1220,7 @@ export async function pauseAuto(ctx, _pi, _errorContext) {
ctx?.ui?.setWidget?.("sf-progress", undefined);
ctx?.ui.setFooter(undefined);
if (ctx) initHealthWidget(ctx);
const resumeCmd = s.stepMode ? "/sf next" : "/sf autonomous";
const resumeCmd = s.stepMode ? "/next" : "/autonomous";
ctx?.ui.notify(
`${s.stepMode ? "Step" : "Autonomous"} mode paused (Escape). Type to interact, or ${resumeCmd} to resume.`,
"info",

View file

@ -433,7 +433,7 @@ export async function autoLoop(ctx, pi, s, deps) {
pi,
`Memory pressure: heap at ${mem.heapMB}MB / ${mem.limitMB}MB (${Math.round(mem.pct * 100)}%). ` +
`Stopping gracefully to prevent OOM kill after ${iteration} iterations. ` +
`Resume with /sf autonomous to continue from where you left off.`,
`Resume with /autonomous to continue from where you left off.`,
);
finishTurn("stopped", "timeout", "memory-pressure");
break;

View file

@ -118,7 +118,7 @@ import {
MAX_RECOVERY_CHARS,
} from "./types.js";
// ─── Session timeout auto-resume state ────────────────────────────────────────
// ─── Session timeout scheduled resume state ────────────────────────────────────────
let consecutiveSessionTimeouts = 0;
const MAX_SESSION_TIMEOUT_AUTO_RESUMES = 3;
function resetConsecutiveSessionTimeouts() {
@ -174,7 +174,7 @@ export function _resolveReportBasePath(s) {
* milestone at different transition points).
*
* The audit is fired with a "no-gaps" placeholder verdict. Re-run
* `/sf product-audit` manually for full LLM-powered gap analysis.
* `/product-audit` manually for full LLM-powered gap analysis.
*/
async function maybeFireProductAudit(s, ctx) {
const mid = s.currentMilestoneId;
@ -186,7 +186,7 @@ async function maybeFireProductAudit(s, ctx) {
milestoneId: mid,
verdict: "no-gaps",
summary:
"Auto-fired placeholder audit at milestone merge. Re-run `/sf product-audit` for full LLM-powered gap analysis.",
"Auto-fired placeholder audit at milestone merge. Re-run `/product-audit` for full LLM-powered gap analysis.",
gaps: [],
};
const result = await handleProductAudit(params, s.basePath);
@ -478,7 +478,7 @@ export async function runPreDispatch(ic, loopState) {
});
ctx.ui.notify(
healthGate.reason ||
"Pre-dispatch health check failed — run /sf doctor for details.",
"Pre-dispatch health check failed — run /doctor for details.",
"error",
);
await deps.pauseAuto(ctx, pi);
@ -565,7 +565,7 @@ export async function runPreDispatch(ic, loopState) {
milestoneId: state.activeMilestone?.id ?? undefined,
});
ctx.ui.notify(
`Plan gate failed-closed: ${reason}\n\nIf this keeps happening, try: /sf doctor heal`,
`Plan gate failed-closed: ${reason}\n\nIf this keeps happening, try: /doctor heal`,
"error",
);
await deps.pauseAuto(ctx, pi);
@ -680,7 +680,7 @@ export async function runPreDispatch(ic, loopState) {
);
const vizPrefs = prefs;
if (vizPrefs?.auto_visualize) {
ctx.ui.notify("Run /sf visualize to see progress overview.", "info");
ctx.ui.notify("Run /visualize to see progress overview.", "info");
}
if (vizPrefs?.auto_report !== false) {
try {
@ -705,7 +705,7 @@ export async function runPreDispatch(ic, loopState) {
if (mergeErr instanceof MergeConflictError) {
// Real code conflicts — stop the loop instead of retrying forever (#2330)
ctx.ui.notify(
`Merge conflict: ${mergeErr.conflictedFiles.join(", ")}. Resolve conflicts manually and run /sf autonomous to resume.`,
`Merge conflict: ${mergeErr.conflictedFiles.join(", ")}. Resolve conflicts manually and run /autonomous to resume.`,
"error",
);
await deps.stopAuto(
@ -721,7 +721,7 @@ export async function runPreDispatch(ic, loopState) {
error: String(mergeErr),
});
ctx.ui.notify(
`Merge failed: ${mergeErr instanceof Error ? mergeErr.message : String(mergeErr)}. Resolve and run /sf autonomous to resume.`,
`Merge failed: ${mergeErr instanceof Error ? mergeErr.message : String(mergeErr)}. Resolve and run /autonomous to resume.`,
"error",
);
await deps.stopAuto(
@ -816,7 +816,7 @@ export async function runPreDispatch(ic, loopState) {
} catch (mergeErr) {
if (mergeErr instanceof MergeConflictError) {
ctx.ui.notify(
`Merge conflict: ${mergeErr.conflictedFiles.join(", ")}. Resolve conflicts manually and run /sf autonomous to resume.`,
`Merge conflict: ${mergeErr.conflictedFiles.join(", ")}. Resolve conflicts manually and run /autonomous to resume.`,
"error",
);
await deps.stopAuto(
@ -831,7 +831,7 @@ export async function runPreDispatch(ic, loopState) {
error: String(mergeErr),
});
ctx.ui.notify(
`Merge failed: ${mergeErr instanceof Error ? mergeErr.message : String(mergeErr)}. Resolve and run /sf autonomous to resume.`,
`Merge failed: ${mergeErr instanceof Error ? mergeErr.message : String(mergeErr)}. Resolve and run /autonomous to resume.`,
"error",
);
await deps.stopAuto(
@ -866,12 +866,12 @@ export async function runPreDispatch(ic, loopState) {
);
} else if (state.phase === "blocked") {
const blockerMsg = `Blocked: ${state.blockers.join(", ")}`;
// Pause instead of hard-stop so the session is resumable with `/sf autonomous`.
// Pause instead of hard-stop so the session is resumable with `/autonomous`.
// Hard-stop here was causing premature termination when slice dependencies
// were temporarily unresolvable (e.g. after reassessment added new slices).
await deps.pauseAuto(ctx, pi);
ctx.ui.notify(
`${blockerMsg}. Fix and run /sf autonomous to resume.`,
`${blockerMsg}. Fix and run /autonomous to resume.`,
"warning",
);
deps.sendDesktopNotification(
@ -952,7 +952,7 @@ export async function runPreDispatch(ic, loopState) {
} catch (mergeErr) {
if (mergeErr instanceof MergeConflictError) {
ctx.ui.notify(
`Merge conflict: ${mergeErr.conflictedFiles.join(", ")}. Resolve conflicts manually and run /sf autonomous to resume.`,
`Merge conflict: ${mergeErr.conflictedFiles.join(", ")}. Resolve conflicts manually and run /autonomous to resume.`,
"error",
);
await deps.stopAuto(
@ -967,7 +967,7 @@ export async function runPreDispatch(ic, loopState) {
error: String(mergeErr),
});
ctx.ui.notify(
`Merge failed: ${mergeErr instanceof Error ? mergeErr.message : String(mergeErr)}. Resolve and run /sf autonomous to resume.`,
`Merge failed: ${mergeErr instanceof Error ? mergeErr.message : String(mergeErr)}. Resolve and run /autonomous to resume.`,
"error",
);
await deps.stopAuto(
@ -1013,7 +1013,7 @@ export async function runPreDispatch(ic, loopState) {
}
await deps.pauseAuto(ctx, pi);
ctx.ui.notify(
`${blockerMsg}. Fix and run /sf autonomous to resume.`,
`${blockerMsg}. Fix and run /autonomous to resume.`,
"warning",
);
deps.sendDesktopNotification(
@ -1067,7 +1067,7 @@ export async function runDispatch(ic, preData, loopState) {
});
// Warning-level stops are recoverable human checkpoints (e.g. UAT verdict
// gate) — pause instead of hard-stopping so the session is resumable with
// `/sf autonomous`. Error/info-level stops remain hard stops for infrastructure
// `/autonomous`. Error/info-level stops remain hard stops for infrastructure
// failures and terminal conditions respectively.
// See: https://github.com/singularity-forge/sf-run/issues/2474
if (dispatchResult.level === "warning") {
@ -1533,7 +1533,7 @@ export async function runGuards(ic, mid, unitType, unitId, sliceId) {
}
if (budgetEnforcementAction === "pause") {
ctx.ui.notify(
`${msg} Pausing autonomous mode — /sf autonomous to override and continue.`,
`${msg} Pausing autonomous mode — /autonomous to override and continue.`,
"warning",
);
deps.sendDesktopNotification(
@ -1647,7 +1647,7 @@ export async function runGuards(ic, mid, unitType, unitId, sliceId) {
) {
const msg = `Context window at ${contextUsage.percent}% (threshold: ${contextThreshold}%). Pausing to prevent truncated output.`;
ctx.ui.notify(
`${msg} Run /sf autonomous to continue (will start fresh session).`,
`${msg} Run /autonomous to continue (will start fresh session).`,
"warning",
);
deps.sendDesktopNotification(
@ -2392,7 +2392,7 @@ export async function runUnitPhase(ic, iterData, loopState, sidecarItem) {
return { action: "break", reason: "provider-pause" };
}
// Timeout category covers two distinct scenarios:
// 1. Session creation timeout (120s) — transient, auto-resume with backoff
// 1. Session creation timeout (120s) — transient, scheduled resume with backoff
// 2. Unit hard timeout (30min+) — stuck agent, pause for manual review
// Structural errors (TypeError, is not a function) are NOT transient
// and must hard-stop to avoid infinite retry loops.
@ -2462,7 +2462,7 @@ export async function runUnitPhase(ic, iterData, loopState, sidecarItem) {
);
return { action: "break", reason: "session-timeout" };
}
// Unit hard timeout (30min+): pause without auto-resume — stuck agent
// Unit hard timeout (30min+): pause without scheduled resume — stuck agent
ctx.ui.notify(
`Unit timed out for ${unitType} ${unitId} (supervision may have failed). Pausing autonomous mode.`,
"warning",

View file

@ -41,7 +41,7 @@ export class AutoSession {
stepMode = false;
/**
* When false, the agent is forbidden from calling ask_user_questions.
* Assisted mode sets this true; `/sf autonomous` sets it false.
* Assisted mode sets this true; `/autonomous` sets it false.
*/
canAskUser = true;
verbose = false;
@ -78,7 +78,7 @@ export class AutoSession {
autoModeStartModel = null;
autoModeStartThinkingLevel = null;
originalThinkingLevel = null;
/** Explicit /sf model pin captured at bootstrap (session-scoped policy override). */
/** Explicit /model pin captured at bootstrap (session-scoped policy override). */
manualSessionModelOverride = null;
currentUnitModel = null;
/** Fully-qualified model ID (provider/id) set after selectAndApplyModel + hook overrides (#2899). */

View file

@ -1,5 +1,5 @@
/**
* autonomous-command-args.js validates `/sf autonomous` command arguments.
* autonomous-command-args.js validates `/autonomous` command arguments.
*
* Purpose: keep autonomous run control strict and explainable by accepting only
* the small documented argument set; invented knobs fail as unsupported input.
@ -18,7 +18,7 @@ const MILESTONE_TARGET_RE = /^M\d+(?:-[a-z0-9]{6})?$/i;
* Purpose: reject stale or invented knobs before they can be confused with run
* control, permission profiles, or output formats.
*
* Consumer: headless machine-surface validation and `/sf autonomous` routing.
* Consumer: headless machine-surface validation and `/autonomous` routing.
*/
export function findUnsupportedAutonomousArgs(args) {
const unsupported = [];
@ -49,5 +49,5 @@ export function findUnsupportedAutonomousArgs(args) {
* Consumer: headless.ts and the autonomous command handler.
*/
export function formatUnsupportedAutonomousArgs(args) {
return `Unsupported /sf autonomous argument(s): ${args.join(", ")}. Supported arguments: --verbose, --debug, --yolo <file>, and optional M### milestone target.`;
return `Unsupported /autonomous argument(s): ${args.join(", ")}. Supported arguments: --verbose, --debug, --yolo <file>, and optional M### milestone target.`;
}
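
The documented argument contract can be sketched as a hypothetical standalone validator. The accepted set (`--verbose`, `--debug`, `--yolo <file>`, optional `M###` milestone target) and the target regex come from the doc comments above; the flag-parsing details here are illustrative, not the real implementation:

```javascript
// Hypothetical sketch of the documented contract: anything outside the small
// accepted set is reported as unsupported input.
const MILESTONE_TARGET_RE = /^M\d+(?:-[a-z0-9]{6})?$/i;
const SUPPORTED_FLAGS = new Set(["--verbose", "--debug", "--yolo"]);

function findUnsupportedAutonomousArgs(args) {
  const unsupported = [];
  let skipNext = false; // --yolo consumes a file-path argument
  for (const arg of args) {
    if (skipNext) {
      skipNext = false;
      continue;
    }
    if (SUPPORTED_FLAGS.has(arg)) {
      if (arg === "--yolo") skipNext = true;
      continue;
    }
    if (MILESTONE_TARGET_RE.test(arg)) continue; // e.g. M12 or M12-a1b2c3
    unsupported.push(arg);
  }
  return unsupported;
}

// findUnsupportedAutonomousArgs(["--verbose", "M12", "--fast"]) → ["--fast"]
```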

View file

@ -5,7 +5,7 @@
* plane using the same task fixtures, deterministic assertions, and solver
* observability signals.
*
* Consumer: `/sf solver-eval` and focused regression tests.
* Consumer: `/solver-eval` and focused regression tests.
*/
import { spawnSync } from "node:child_process";
import {
@ -111,7 +111,7 @@ function normalizeCase(raw, index, source) {
* Purpose: make solver claims reproducible from versioned or shared fixtures
* instead of ad hoc manual demos.
*
* Consumer: `/sf solver-eval --cases <path>`.
* Consumer: `/solver-eval --cases <path>`.
*/
export function loadAutonomousSolverEvalCases(casesPath) {
const abs = resolve(casesPath);
@ -131,7 +131,7 @@ export function loadAutonomousSolverEvalCases(casesPath) {
* Purpose: let operators verify the eval harness itself without spending model
* quota or configuring external benchmark datasets.
*
* Consumer: `/sf solver-eval --sample` and tests.
* Consumer: `/solver-eval --sample` and tests.
*/
export function sampleAutonomousSolverEvalCases() {
return [
@ -368,7 +368,7 @@ function resolveOutputDir(basePath, runId) {
* Purpose: produce local evidence for whether SF's solver loop improves
* completion quality over a raw loop under identical task fixtures.
*
* Consumer: `/sf solver-eval run` and regression tests.
* Consumer: `/solver-eval run` and regression tests.
*/
export function runAutonomousSolverEval(options) {
const basePath = resolve(options.basePath ?? process.cwd());
@ -433,7 +433,7 @@ export function runAutonomousSolverEval(options) {
/**
* Run and record the built-in autonomous solver eval as a best-effort lifecycle hook.
*
* Purpose: make solver quality evidence automatic for `/sf autonomous` sessions
* Purpose: make solver quality evidence automatic for `/autonomous` sessions
* so regressions are captured without requiring a separate manual command.
*
* Consumer: auto/loop.js when an autonomous session exits.
@@ -491,12 +491,12 @@ export async function runAutomaticAutonomousSolverEval(options) {
}
/**
-* Parse `/sf solver-eval` arguments.
+* Parse `/solver-eval` arguments.
*
* Purpose: keep command behavior explicit and reproducible while avoiding
* shell parsing or hidden defaults.
*
-* Consumer: `/sf solver-eval` handler.
+* Consumer: `/solver-eval` handler.
*/
export function parseAutonomousSolverEvalArgs(raw) {
const tokens = String(raw ?? "")
@@ -588,14 +588,14 @@ async function recordEvalRunBestEffort(basePath, report) {
async function notifySolverEvalHistory(ctx, basePath, limit) {
if (!(await ensureDbOpen(basePath))) {
-ctx.ui.notify("No SF database available. Run /sf init first.", "warning");
+ctx.ui.notify("No SF database available. Run /init first.", "warning");
return;
}
const { listSolverEvalRuns } = await import("./sf-db.js");
const runs = listSolverEvalRuns(limit);
if (runs.length === 0) {
ctx.ui.notify(
"No solver eval runs recorded. Run /sf solver-eval --sample.",
"No solver eval runs recorded. Run /solver-eval --sample.",
"info",
);
return;
@@ -618,7 +618,7 @@ async function notifySolverEvalHistory(ctx, basePath, limit) {
async function notifySolverEvalShow(ctx, basePath, runId) {
if (!(await ensureDbOpen(basePath))) {
-ctx.ui.notify("No SF database available. Run /sf init first.", "warning");
+ctx.ui.notify("No SF database available. Run /init first.", "warning");
return;
}
const { getSolverEvalCaseResults, getSolverEvalRun } = await import(
@@ -645,7 +645,7 @@ async function notifySolverEvalShow(ctx, basePath, runId) {
}
/**
-* Handle `/sf solver-eval`.
+* Handle `/solver-eval`.
*
* Purpose: expose solver-loop benchmarking as a first-class SF operation with
* evidence stored under `.sf`, not as an external script.
@@ -662,7 +662,7 @@ export async function handleAutonomousSolverEval(
args = parseAutonomousSolverEvalArgs(rawArgs);
} catch (err) {
ctx.ui.notify(
-`Usage: /sf solver-eval [run|history|show <run-id>] [--sample | --cases <jsonl>] [--run-id <id>] [--limit <n>]\n${err instanceof Error ? err.message : String(err)}`,
+`Usage: /solver-eval [run|history|show <run-id>] [--sample | --cases <jsonl>] [--run-id <id>] [--limit <n>]\n${err instanceof Error ? err.message : String(err)}`,
"warning",
);
return;

View file

@@ -220,7 +220,7 @@ export function buildAutonomousSolverPromptBlock(state) {
return [
"## Autonomous Solver Loop Contract",
"",
-`You are inside /sf autonomous iteration ${state.iteration} of ${state.maxIterations} for ${state.unitType} ${state.unitId}.`,
+`You are inside /autonomous iteration ${state.iteration} of ${state.maxIterations} for ${state.unitType} ${state.unitId}.`,
"",
"This is SF's built-in solver loop. It is not a separate Ralph workflow. Work one bounded, useful chunk; preserve enough state for the next autonomous iteration to continue without guessing.",
"",
@@ -317,7 +317,7 @@ export function appendAutonomousSolverCheckpoint(basePath, params) {
* Purpose: status surfaces and loop enforcement need one structured source for
* the active solver unit instead of scraping markdown projections.
*
-* Consumer: /sf status, sf-progress, and runUnitPhase.
+* Consumer: /status, sf-progress, and runUnitPhase.
*/
export function readAutonomousSolverState(basePath) {
return readJson(statePath(basePath));
@@ -526,10 +526,10 @@ export function assessAutonomousSolverTurn(basePath, unitType, unitId) {
/**
* Append user steering for the next autonomous solver iteration.
*
-* Purpose: active /sf steer must redirect the next bounded iteration without
+* Purpose: active /steer must redirect the next bounded iteration without
* interrupting the current tool batch or forcing an immediate agent turn.
*
-* Consumer: /sf steer while autonomous mode is active.
+* Consumer: /steer while autonomous mode is active.
*/
export function appendAutonomousSolverSteering(basePath, text, metadata = {}) {
const trimmed = String(text ?? "").trim();

View file

@@ -2,7 +2,7 @@
// provider has rejected at request time for account entitlement or temporary
// capacity reasons.
//
-// Lives at `.sf/runtime/blocked-models.json` so the block survives /sf autonomous
+// Lives at `.sf/runtime/blocked-models.json` so the block survives /autonomous
// restarts. Autonomous mode model selection skips blocked entries; agent-end
// recovery adds entries when a runtime rejection is classified as
// `unsupported-model`. See issue #4513.

View file

@@ -208,7 +208,7 @@ export async function handleAgentEnd(pi, event, ctx) {
}
// ── 1c. Unsupported-model: provider rejected this model for the current
// account/plan at request time (#4513). Persist a block so the
-// same dead model isn't reselected on the next /sf autonomous restart,
+// same dead model isn't reselected on the next /autonomous restart,
// then try a fallback before pausing.
if (cls.kind === "unsupported-model") {
const rejectedProvider = currentRoute?.provider;
@@ -294,7 +294,7 @@ export async function handleAgentEnd(pi, event, ctx) {
});
if (switched) return;
}
-// --- Transient fallback exhausted: pause without same-route auto-resume ---
+// --- Transient fallback exhausted: pause without same-route scheduled resume ---
if (isTransient(cls)) {
const message =
isModelRouteFailure(cls) && dash.currentUnit

View file

@@ -931,7 +931,7 @@ export function registerDbTools(pi) {
label: "Autonomous Checkpoint",
description:
"Record a PDD-shaped autonomous solver checkpoint for the current unit. " +
"Use this before ending every /sf autonomous unit turn to make progress, blockers, decisions, and remaining work explicit.",
"Use this before ending every /autonomous unit turn to make progress, blockers, decisions, and remaining work explicit.",
promptSnippet:
"Checkpoint autonomous solver progress with PDD fields and semantic outcome",
promptGuidelines: [
@@ -1720,7 +1720,7 @@ export function registerDbTools(pi) {
Type.Object({
id: Type.String({
description:
"Short id (e.g. 'A', 'B') used by /sf escalate resolve.",
"Short id (e.g. 'A', 'B') used by /escalate resolve.",
}),
label: Type.String({ description: "One-line label." }),
tradeoffs: Type.String({
@@ -1741,7 +1741,7 @@ export function registerDbTools(pi) {
}),
continueWithDefault: Type.Boolean({
description:
"When true, loop continues (artifact logged for later review). When false, autonomous mode pauses until the user resolves via /sf escalate resolve.",
"When true, loop continues (artifact logged for later review). When false, autonomous mode pauses until the user resolves via /escalate resolve.",
}),
},
{
@@ -2006,7 +2006,7 @@ export function registerDbTools(pi) {
updateSliceStatus(params.milestoneId, params.sliceId, "skipped");
invalidateStateCache();
// Rebuild STATE.md so it reflects the skip immediately (#3477).
-// Without this, /sf autonomous reads stale STATE.md and resumes the skipped slice.
+// Without this, /autonomous reads stale STATE.md and resumes the skipped slice.
try {
const basePath = process.cwd();
const { rebuildState } = await import("../doctor.js");

View file

@@ -39,7 +39,7 @@ export async function resumeAutoAfterProviderDelay(
}
// Reset the transient retry counter before restarting — without this,
// consecutiveTransientCount accumulates across pause/resume cycles and
-// permanently locks out auto-resume after MAX_TRANSIENT_AUTO_RESUMES errors.
+// permanently locks out scheduled resume after MAX_TRANSIENT_AUTO_RESUMES errors.
deps.resetTransientRetryState();
await deps.startAuto(commandCtx, pi, snapshot.basePath, false, {
step: snapshot.stepMode,

View file

@@ -119,7 +119,7 @@ async function runSessionStartupDoctorFix(ctx) {
const summary = summarizeDoctorIssues(report.issues);
if (summary.errors > 0) {
ctx.ui?.notify?.(
-`Startup doctor found ${summary.errors} blocking issue(s). Run /sf doctor audit for details.`,
+`Startup doctor found ${summary.errors} blocking issue(s). Run /doctor audit for details.`,
"warning",
);
}
@@ -276,7 +276,7 @@ export function registerHooks(pi, ecosystemHandlers = []) {
);
}
// Forge-only: high/critical entries are queued as hidden follow-up repair
-// work on startup, even outside /sf autonomous. The drain helper owns claim TTL
+// work on startup, even outside /autonomous. The drain helper owns claim TTL
// and delivery failure retry, so this is safe to call opportunistically.
const highBlocked = triage.stillBlocked.filter(
(e) => e.severity === "high" || e.severity === "critical",
@@ -471,7 +471,7 @@ export function registerHooks(pi, ecosystemHandlers = []) {
completedWork: `Task ${state.activeTask.id} (${state.activeTask.title}) was in progress when compaction occurred.`,
remainingWork: "Check the task plan for remaining steps.",
decisions: "Check task summary files for prior decisions.",
-context: "Session was auto-compacted by Pi. Resume with /sf.",
+context: "Session was auto-compacted by Pi. Resume with /next.",
nextAction: `Resume task ${state.activeTask.id}: ${state.activeTask.title}.`,
}),
);
@@ -573,7 +573,7 @@ export function registerHooks(pi, ecosystemHandlers = []) {
}
}
// ── Queue-mode execution guard (#2545): block source-code mutations ──
-// When /sf queue is active, the agent should only create milestones,
+// When /queue is active, the agent should only create milestones,
// not execute work. Block write/edit to non-.sf/ paths and bash commands
// that would modify files.
if (isQueuePhaseActive()) {

View file

@@ -17,7 +17,7 @@ export function registerShortcuts(pi) {
const openDashboardOverlay = async (ctx) => {
const basePath = projectRoot();
if (!existsSync(join(basePath, ".sf"))) {
-ctx.ui.notify("No .sf/ directory found. Run /sf to start.", "info");
+ctx.ui.notify("No .sf/ directory found. Run /next to start.", "info");
return;
}
await ctx.ui.custom(
@@ -50,7 +50,7 @@ export function registerShortcuts(pi) {
const parallelDir = join(basePath, ".sf", "parallel");
if (!existsSync(parallelDir)) {
ctx.ui.notify(
"No parallel workers found. Run /sf parallel start first.",
"No parallel workers found. Run /parallel start first.",
"info",
);
return;
@@ -102,5 +102,5 @@ export function registerShortcuts(pi) {
handler: openParallelOverlay,
});
// No Ctrl+Shift+P fallback — conflicts with cycleModelBackward (shift+ctrl+p).
-// Use Ctrl+Alt+P or /sf parallel watch instead.
+// Use Ctrl+Alt+P or /parallel watch instead.
}

View file

@@ -183,7 +183,7 @@ export async function buildBeforeAgentStartResult(event, ctx) {
if (autoEnableCmuxPreferences()) {
loadedPreferences = loadEffectiveSFPreferences();
ctx.ui.notify(
"cmux detected — auto-enabled. Run /sf cmux off to disable.",
"cmux detected — auto-enabled. Run /cmux off to disable.",
"info",
);
}
@@ -278,7 +278,7 @@ export async function buildBeforeAgentStartResult(event, ctx) {
? rawContent.slice(0, MAX_CODEBASE_CHARS) +
"\n\n*(truncated — see .sf/CODEBASE.md for full map)*"
: rawContent;
-codebaseBlock = `\n\n[PROJECT CODEBASE — File structure and descriptions (generated ${generatedAt}, auto-refreshed when SF detects tracked file changes; use /sf codebase stats for status)]\n\n${content}`;
+codebaseBlock = `\n\n[PROJECT CODEBASE — File structure and descriptions (generated ${generatedAt}, auto-refreshed when SF detects tracked file changes; use /codebase stats for status)]\n\n${content}`;
}
} catch (e) {
logWarning("bootstrap", `CODEBASE file read failed: ${e.message}`);

View file

@@ -492,7 +492,7 @@ export function shouldBlockQueueExecutionInSnapshot(
return {
block: true,
reason:
-`Blocked: /sf queue is a planning tool — it creates milestones, not executes work. ` +
+`Blocked: /queue is a planning tool — it creates milestones, not executes work. ` +
`Cannot ${toolName} to "${input}" during queue mode. ` +
`Write CONTEXT.md files and update PROJECT.md/QUEUE.md instead.`,
};
@@ -503,7 +503,7 @@ export function shouldBlockQueueExecutionInSnapshot(
return {
block: true,
reason:
-`Blocked: /sf queue is a planning tool — it creates milestones, not executes work. ` +
+`Blocked: /queue is a planning tool — it creates milestones, not executes work. ` +
`Cannot run "${input.slice(0, 80)}${input.length > 80 ? "…" : ""}" during queue mode. ` +
`Use read-only commands (cat, grep, git log, etc.) to investigate, then write planning artifacts.`,
};
@@ -512,6 +512,6 @@ export function shouldBlockQueueExecutionInSnapshot(
// bypass execution restrictions.
return {
block: true,
-reason: `Blocked: /sf queue is a planning tool — it creates milestones, not executes work. Unknown tools are not permitted during queue mode.`,
+reason: `Blocked: /queue is a planning tool — it creates milestones, not executes work. Unknown tools are not permitted during queue mode.`,
};
}

View file

@@ -236,7 +236,7 @@ export function getCanonicalMilestonePlan(basePath, milestoneId, options = {}) {
return blockedResult(
`db-missing-${dbPlan.missing}`,
milestoneId,
-`.sf/sf.db is available, so ${milestoneId} must be read from DB rows. Missing ${dbPlan.missing}; projection files are export/recovery only. Run /sf doctor or sf recover to reconcile.`,
+`.sf/sf.db is available, so ${milestoneId} must be read from DB rows. Missing ${dbPlan.missing}; projection files are export/recovery only. Run /doctor or sf recover to reconcile.`,
paths,
);
}

View file

@@ -1,5 +1,5 @@
/**
-* SF Command /sf add-tests
+* SF Command /add-tests
*
* Generates tests for a completed slice by dispatching an LLM prompt
* with implementation context (summaries, changed files, test patterns).
@@ -101,7 +101,7 @@ export async function handleAddTests(args, ctx, pi) {
const targetId = args.trim() || findLastCompletedSlice(basePath, milestoneId);
if (!targetId) {
ctx.ui.notify(
"No completed slices found. Specify a slice ID: /sf add-tests S03",
"No completed slices found. Specify a slice ID: /add-tests S03",
"warning",
);
return;

View file

@@ -1,5 +1,5 @@
/**
-* SF Command /sf backlog
+* SF Command /backlog
*
* Structured backlog management with 999.x numbering.
* Items live in `.sf/sf.db`.
@@ -31,7 +31,7 @@ async function listBacklog(basePath, ctx) {
const items = currentItems(basePath);
if (items.length === 0) {
ctx.ui.notify(
"Backlog is empty. Add items with /sf backlog add <title>",
"Backlog is empty. Add items with /backlog add <title>",
"info",
);
return;
@@ -50,7 +50,7 @@ async function listBacklog(basePath, ctx) {
async function addBacklogItem(basePath, title, ctx) {
if (!title) {
-ctx.ui.notify("Usage: /sf backlog add <title>", "warning");
+ctx.ui.notify("Usage: /backlog add <title>", "warning");
return;
}
if (!ensureBacklogDb(basePath)) {
@@ -71,7 +71,7 @@ async function addBacklogItem(basePath, title, ctx) {
async function promoteBacklogItem(basePath, itemId, ctx, _pi) {
if (!itemId) {
ctx.ui.notify(
"Usage: /sf backlog promote <id>\nExample: /sf backlog promote 999.1",
"Usage: /backlog promote <id>\nExample: /backlog promote 999.1",
"warning",
);
return;
@@ -102,7 +102,7 @@ async function promoteBacklogItem(basePath, itemId, ctx, _pi) {
async function removeBacklogItem(basePath, itemId, ctx) {
if (!itemId) {
-ctx.ui.notify("Usage: /sf backlog remove <id>", "warning");
+ctx.ui.notify("Usage: /backlog remove <id>", "warning");
return;
}
if (!ensureBacklogDb(basePath)) {

View file

@@ -1,4 +1,4 @@
-import { importExtensionModule } from "@singularity-forge/pi-coding-agent";
+import { registerSFCommands } from "./commands/index.js";
import { workflowTemplateCommandDefinitions } from "./workflow-templates.js";
const TOP_LEVEL_SUBCOMMANDS = [
@@ -11,7 +11,7 @@ const TOP_LEVEL_SUBCOMMANDS = [
{ cmd: "stop", desc: "Stop autonomous mode gracefully" },
{
cmd: "pause",
-desc: "Pause autonomous mode (preserves state, /sf autonomous to resume)",
+desc: "Pause autonomous mode (preserves state, /autonomous to resume)",
},
{ cmd: "status", desc: "Progress dashboard" },
{ cmd: "visualize", desc: "Open workflow visualizer" },
@@ -79,7 +79,7 @@ function filterStartsWith(partial, options, prefix = "") {
description: option.desc,
}));
}
-function getSfArgumentCompletions(prefix) {
+function _getSfArgumentCompletions(prefix) {
const parts = prefix.trim().split(/\s+/);
if (parts.length <= 1) {
return filterStartsWith(parts[0] ?? "", TOP_LEVEL_SUBCOMMANDS);
@@ -348,15 +348,5 @@ function getSfArgumentCompletions(prefix) {
return null;
}
export function registerLazySFCommand(pi) {
-pi.registerCommand("sf", {
-description: "SF — Singularity Forge",
-getArgumentCompletions: getSfArgumentCompletions,
-handler: async (args, ctx) => {
-const { handleSFCommand } = await importExtensionModule(
-import.meta.url,
-"./commands.js",
-);
-await handleSFCommand(args, ctx, pi);
-},
-});
+registerSFCommands(pi);
}

View file

@@ -178,7 +178,7 @@ export async function handleCmux(args, ctx) {
return;
}
ctx.ui.notify(
"Usage: /sf cmux <status|on|off|notifications on|notifications off|sidebar on|sidebar off|splits on|splits off|browser on|browser off>",
"Usage: /cmux <status|on|off|notifications on|notifications off|sidebar on|sidebar off|splits on|splits off|browser on|browser off>",
"info",
);
}

View file

@@ -1,5 +1,5 @@
/**
-* SF Command /sf codebase
+* SF Command /codebase
*
* Generate and manage the codebase map (.sf/CODEBASE.md).
* Subcommands: generate, update, stats, indexer, help
@@ -15,7 +15,7 @@ import {
import { loadEffectiveSFPreferences } from "./preferences.js";
const USAGE =
"Usage: /sf codebase [generate|update|stats|indexer]\n\n" +
"Usage: /codebase [generate|update|stats|indexer]\n\n" +
" generate [--max-files N] [--collapse-threshold N] — Generate or regenerate CODEBASE.md\n" +
" update [--max-files N] [--collapse-threshold N] — Refresh the CODEBASE.md cache immediately\n" +
" stats — Show file count, coverage, and generation time\n" +
@@ -69,7 +69,7 @@ export async function handleCodebase(args, ctx, _pi) {
const existing = readCodebaseMap(basePath);
if (!existing) {
ctx.ui.notify(
"No codebase map found. Run /sf codebase generate to create one.",
"No codebase map found. Run /codebase generate to create one.",
"warning",
);
return;
@@ -100,7 +100,7 @@ export async function handleCodebase(args, ctx, _pi) {
return;
}
ctx.ui.notify(
-`Unknown /sf codebase indexer action "${action}". Use status.`,
+`Unknown /codebase indexer action "${action}". Use status.`,
"warning",
);
return;
@@ -126,7 +126,7 @@ function showStats(basePath, ctx) {
const stats = getCodebaseMapStats(basePath);
if (!stats.exists) {
ctx.ui.notify(
"No codebase map found. Run /sf codebase generate to create one.",
"No codebase map found. Run /codebase generate to create one.",
"info",
);
return;
@@ -142,7 +142,7 @@ function showStats(basePath, ctx) {
` Undescribed: ${stats.undescribedCount}\n` +
` Generated: ${stats.generatedAt ?? "unknown"}\n\n` +
(stats.undescribedCount > 0
-? `Tip: Auto-refresh keeps the cache current, but /sf codebase update forces an immediate refresh.`
+? `Tip: Auto-refresh keeps the cache current, but /codebase update forces an immediate refresh.`
: `Coverage is complete.`),
"info",
);

View file

@@ -21,11 +21,11 @@ function formatSessionLine(prefix, session) {
}
function usageText() {
return [
"Usage: /sf debug <issue-text>",
" /sf debug list",
" /sf debug status <slug>",
" /sf debug continue <slug>",
" /sf debug --diagnose [<slug> | <issue text>]",
"Usage: /debug <issue-text>",
" /debug list",
" /debug status <slug>",
" /debug continue <slug>",
" /debug --diagnose [<slug> | <issue text>]",
].join("\n");
}
export function parseDebugCommand(args) {
@@ -42,7 +42,7 @@ export function parseDebugCommand(args) {
if (parts.length === 1)
return {
type: "error",
-message: "Missing slug. Usage: /sf debug status <slug>",
+message: "Missing slug. Usage: /debug status <slug>",
};
if (parts.length === 2 && isValidSlugCandidate(parts[1]))
return { type: "status", slug: parts[1] };
@@ -52,7 +52,7 @@ export function parseDebugCommand(args) {
if (parts.length === 1)
return {
type: "error",
-message: "Missing slug. Usage: /sf debug continue <slug>",
+message: "Missing slug. Usage: /debug continue <slug>",
};
if (parts.length === 2 && isValidSlugCandidate(parts[1]))
return { type: "continue", slug: parts[1] };
@@ -67,7 +67,7 @@ export function parseDebugCommand(args) {
return {
type: "error",
message:
"Invalid diagnose target. Usage: /sf debug --diagnose [<slug> | <issue text>]",
"Invalid diagnose target. Usage: /debug --diagnose [<slug> | <issue text>]",
};
}
if (head.startsWith("-") && !SUBCOMMANDS.has(head)) {
@@ -106,7 +106,7 @@ export async function handleDebug(args, ctx, pi) {
formatSessionLine("Session:", s),
`Artifact: ${created.artifactPath}`,
`Log: ${s.logPath}`,
-`Next: /sf debug status ${s.slug} or /sf debug continue ${s.slug}`,
+`Next: /debug status ${s.slug} or /debug continue ${s.slug}`,
].join("\n") + dispatchNote,
"info",
);
@@ -129,7 +129,7 @@ export async function handleDebug(args, ctx, pi) {
} catch (err) {
const msg = err instanceof Error ? err.message : String(err);
ctx.ui.notify(
-`Debug dispatch failed: ${msg}\nSession '${s.slug}' is persisted; retry with /sf debug continue ${s.slug}`,
+`Debug dispatch failed: ${msg}\nSession '${s.slug}' is persisted; retry with /debug continue ${s.slug}`,
"warning",
);
}
@@ -137,7 +137,7 @@ export async function handleDebug(args, ctx, pi) {
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
ctx.ui.notify(
-`Unable to create debug session: ${message}\nTry /sf debug --diagnose for artifact health details.`,
+`Unable to create debug session: ${message}\nTry /debug --diagnose for artifact health details.`,
"error",
);
}
@@ -148,7 +148,7 @@ export async function handleDebug(args, ctx, pi) {
const listed = listDebugSessions(basePath);
if (listed.sessions.length === 0 && listed.malformed.length === 0) {
ctx.ui.notify(
"No debug sessions found. Start one with: /sf debug <issue-text>",
"No debug sessions found. Start one with: /debug <issue-text>",
"info",
);
return;
@@ -169,13 +169,13 @@ export async function handleDebug(args, ctx, pi) {
if (listed.malformed.length > 5) {
lines.push(` ... and ${listed.malformed.length - 5} more`);
}
-lines.push("Run /sf debug --diagnose for remediation guidance.");
+lines.push("Run /debug --diagnose for remediation guidance.");
}
ctx.ui.notify(lines.join("\n"), "info");
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
ctx.ui.notify(
-`Unable to list debug sessions: ${message}\nRun /sf debug --diagnose for details.`,
+`Unable to list debug sessions: ${message}\nRun /debug --diagnose for details.`,
"warning",
);
}
@@ -186,7 +186,7 @@ export async function handleDebug(args, ctx, pi) {
const loaded = loadDebugSession(basePath, parsed.slug);
if (!loaded) {
ctx.ui.notify(
-`Unknown debug session slug '${parsed.slug}'. Run /sf debug list to see available sessions.`,
+`Unknown debug session slug '${parsed.slug}'. Run /debug list to see available sessions.`,
"warning",
);
return;
@@ -209,7 +209,7 @@ export async function handleDebug(args, ctx, pi) {
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
ctx.ui.notify(
-`Unable to load debug session '${parsed.slug}': ${message}\nTry /sf debug --diagnose ${parsed.slug}`,
+`Unable to load debug session '${parsed.slug}': ${message}\nTry /debug --diagnose ${parsed.slug}`,
"warning",
);
}
@@ -220,14 +220,14 @@ export async function handleDebug(args, ctx, pi) {
const loaded = loadDebugSession(basePath, parsed.slug);
if (!loaded) {
ctx.ui.notify(
-`Unknown debug session slug '${parsed.slug}'. Run /sf debug list to see available sessions.`,
+`Unknown debug session slug '${parsed.slug}'. Run /debug list to see available sessions.`,
"warning",
);
return;
}
if (loaded.session.status === "resolved") {
ctx.ui.notify(
-`Session '${parsed.slug}' is resolved. Open a new session with /sf debug <issue-text> for follow-up work.`,
+`Session '${parsed.slug}' is resolved. Open a new session with /debug <issue-text> for follow-up work.`,
"warning",
);
return;
@@ -336,7 +336,7 @@ export async function handleDebug(args, ctx, pi) {
`Resumed debug session: ${resumed.session.slug}`,
formatSessionLine("Session:", resumed.session),
`Log: ${resumed.session.logPath}`,
-`Next: /sf debug status ${resumed.session.slug}`,
+`Next: /debug status ${resumed.session.slug}`,
].join("\n") + dispatchNote,
"info",
);
@@ -366,7 +366,7 @@ export async function handleDebug(args, ctx, pi) {
} catch (err) {
const msg = err instanceof Error ? err.message : String(err);
ctx.ui.notify(
-`Continue dispatch failed: ${msg}\nSession '${resumed.session.slug}' is persisted; retry with /sf debug continue ${resumed.session.slug}`,
+`Continue dispatch failed: ${msg}\nSession '${resumed.session.slug}' is persisted; retry with /debug continue ${resumed.session.slug}`,
"warning",
);
}
@@ -374,7 +374,7 @@ export async function handleDebug(args, ctx, pi) {
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
ctx.ui.notify(
-`Unable to continue debug session '${parsed.slug}': ${message}\nTry /sf debug --diagnose ${parsed.slug}`,
+`Unable to continue debug session '${parsed.slug}': ${message}\nTry /debug --diagnose ${parsed.slug}`,
"warning",
);
}
@@ -396,7 +396,7 @@ export async function handleDebug(args, ctx, pi) {
`Artifact: ${created.artifactPath}`,
`Log: ${s.logPath}`,
`dispatchMode=find_root_cause_only`,
-`Next: /sf debug status ${s.slug} or /sf debug --diagnose ${s.slug}`,
+`Next: /debug status ${s.slug} or /debug --diagnose ${s.slug}`,
].join("\n"),
"info",
);
@@ -420,7 +420,7 @@ export async function handleDebug(args, ctx, pi) {
} catch (err) {
const msg = err instanceof Error ? err.message : String(err);
ctx.ui.notify(
-`Diagnose dispatch failed: ${msg}\nSession '${s.slug}' is persisted; continue manually with /sf debug continue ${s.slug}`,
+`Diagnose dispatch failed: ${msg}\nSession '${s.slug}' is persisted; continue manually with /debug continue ${s.slug}`,
"warning",
);
}
@@ -428,7 +428,7 @@ export async function handleDebug(args, ctx, pi) {
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
ctx.ui.notify(
-`Unable to create diagnose session: ${message}\nTry /sf debug --diagnose for artifact health details.`,
+`Unable to create diagnose session: ${message}\nTry /debug --diagnose for artifact health details.`,
"error",
);
}
@@ -441,7 +441,7 @@ export async function handleDebug(args, ctx, pi) {
const loaded = loadDebugSession(basePath, parsed.slug);
if (!loaded) {
ctx.ui.notify(
-`Diagnose: session '${parsed.slug}' not found.\nRun /sf debug list to discover valid slugs.`,
+`Diagnose: session '${parsed.slug}' not found.\nRun /debug list to discover valid slugs.`,
"warning",
);
return;

View file

@@ -1,8 +1,8 @@
/**
-* SF Command /sf do
+* SF Command /do
*
-* Routes freeform natural language to the correct /sf subcommand
-* using keyword matching. Falls back to /sf quick for task-like input.
+* Routes freeform natural language to the correct /subcommand
+* using keyword matching. Falls back to /quick for task-like input.
*/
import { importExtensionModule } from "@singularity-forge/pi-coding-agent";
@@ -116,12 +116,12 @@ function matchRoute(input) {
export async function handleDo(args, ctx, pi) {
if (!args.trim()) {
ctx.ui.notify(
"Usage: /sf do <what you want to do>\n\n" +
"Usage: /do <what you want to do>\n\n" +
"Examples:\n" +
" /sf do show me progress\n" +
" /sf do run autonomously\n" +
" /sf do clean up old branches\n" +
" /sf do fix the login bug",
" /do show me progress\n" +
" /do run autonomously\n" +
" /do clean up old branches\n" +
" /do fix the login bug",
"warning",
);
return;
@@ -131,7 +131,7 @@ export async function handleDo(args, ctx, pi) {
const fullCommand = match.remainingArgs
? `${match.command} ${match.remainingArgs}`
: match.command;
-ctx.ui.notify(`→ /sf ${fullCommand}`, "info");
+ctx.ui.notify(`→ /${fullCommand}`, "info");
// Re-dispatch through the main dispatcher
const { handleSFCommand } = await importExtensionModule(
import.meta.url,
@@ -141,7 +141,7 @@ export async function handleDo(args, ctx, pi) {
return;
}
// No keyword match → treat as quick task
-ctx.ui.notify(`→ /sf quick ${args}`, "info");
+ctx.ui.notify(`→ /quick ${args}`, "info");
const { handleQuick } = await importExtensionModule(
import.meta.url,
"./quick.js",

View file

@@ -1,4 +1,4 @@
-// SF Command — `/sf escalate` (SF ADR-011 P2)
+// SF Command — `/escalate` (SF ADR-011 P2)
//
// Subcommands:
// list [--all] — show active escalations; --all also includes resolved
@@ -18,7 +18,7 @@ import {
function usage() {
return [
"Usage: /sf escalate <subcommand>",
"Usage: /escalate <subcommand>",
"",
"Subcommands:",
" list [--all] List active escalations (--all also shows resolved)",
@@ -35,7 +35,7 @@ function parseSliceTask(spec) {
export async function handleEscalate(args, ctx) {
await ensureDbOpen(process.cwd());
if (!isDbAvailable()) {
-ctx.ui.notify("SF database is not available. Run /sf doctor.", "error");
+ctx.ui.notify("SF database is not available. Run /doctor.", "error");
return;
}
const trimmed = args.trim();
@@ -85,7 +85,7 @@ export async function handleEscalate(args, ctx) {
ctx.ui.notify(
includeResolved
? "No escalations recorded."
: "No active escalations. Use /sf escalate list --all to include resolved.",
: "No active escalations. Use /escalate list --all to include resolved.",
"info",
);
return;
@@ -98,7 +98,7 @@ export async function handleEscalate(args, ctx) {
const parsed = spec ? parseSliceTask(spec) : null;
if (!parsed) {
ctx.ui.notify(
"Usage: /sf escalate show <sliceId>/<taskId> (e.g. S01/T01)",
"Usage: /escalate show <sliceId>/<taskId> (e.g. S01/T01)",
"warning",
);
return;
@@ -145,7 +145,7 @@ export async function handleEscalate(args, ctx) {
);
} else {
out.push(
-`\nUnresolved. Run /sf escalate resolve ${parsed.sliceId}/${parsed.taskId} <option-id|accept>`,
+`\nUnresolved. Run /escalate resolve ${parsed.sliceId}/${parsed.taskId} <option-id|accept>`,
);
}
ctx.ui.notify(out.join("\n"), "info");
@@ -156,7 +156,7 @@ export async function handleEscalate(args, ctx) {
const parsed = spec ? parseSliceTask(spec) : null;
if (!parsed) {
ctx.ui.notify(
"Usage: /sf escalate resolve <sliceId>/<taskId> <option> [-- <rationale>]",
"Usage: /escalate resolve <sliceId>/<taskId> <option> [-- <rationale>]",
"warning",
);
return;

View file

@@ -1,5 +1,5 @@
/**
-* SF Command /sf eval-review
+* SF Command /eval-review
*
* Audits the implemented evaluation strategy of a slice against the planned
* `AI-SPEC.md` and observed `SUMMARY.md`. Dispatches an LLM turn that scores
@ -63,7 +63,7 @@ const READ_MARKER_RESERVE_BYTES = 128;
const SPEC_MARKER_RESERVE_BYTES = 128;
/** Below this many bytes left for spec we skip reading and emit only a marker. */
const MIN_USEFUL_SPEC_BYTES = 256;
-const USAGE = "Usage: /sf eval-review <sliceId> [--force] [--show] (e.g. S07)";
+const USAGE = "Usage: /eval-review <sliceId> [--force] [--show] (e.g. S07)";
// ─── Argument parsing ─────────────────────────────────────────────────────────
/**
* Typed error thrown by {@link parseEvalReviewArgs} on argument validation
@@ -134,7 +134,7 @@ export function parseEvalReviewArgs(raw) {
* - `no-slice-dir` likely a typo in the slice ID, milestone exists but
* slice does not.
* - `no-summary` slice exists but `SUMMARY.md` is missing; the user
-* probably skipped `/sf execute-phase`.
+* probably skipped `/execute-phase`.
* - `ready` audit can run.
*
* AI-SPEC.md is optional in every state where the slice directory exists
@@ -483,7 +483,7 @@ export function planEvalReviewAction(args, detected, existingPath) {
}
// ─── Handler entry ────────────────────────────────────────────────────────────
/**
-* Handle `/sf eval-review <sliceId> [--force] [--show]`.
+* Handle `/eval-review <sliceId> [--force] [--show]`.
*
* Workflow:
* 1. Parse and validate args (path-traversal-safe).
@@ -519,7 +519,7 @@ export async function handleEvalReview(args, ctx, pi) {
const state = await deriveState(basePath);
if (!state.activeMilestone) {
ctx.ui.notify(
"No active milestone — start or resume one before running /sf eval-review.",
"No active milestone — start or resume one before running /eval-review.",
"warning",
);
return;
@@ -541,7 +541,7 @@ export async function handleEvalReview(args, ctx, pi) {
if (action.kind === "show") {
if (!action.path) {
ctx.ui.notify(
`No EVAL-REVIEW.md present for ${parsed.sliceId}. Run /sf eval-review ${parsed.sliceId} to generate one.`,
`No EVAL-REVIEW.md present for ${parsed.sliceId}. Run /eval-review ${parsed.sliceId} to generate one.`,
"warning",
);
return;
@ -560,7 +560,7 @@ export async function handleEvalReview(args, ctx, pi) {
}
if (action.kind === "no-summary") {
ctx.ui.notify(
`Slice ${parsed.sliceId} exists but has no SUMMARY.md — run /sf execute-phase first to generate one.`,
`Slice ${parsed.sliceId} exists but has no SUMMARY.md — run /execute-phase first to generate one.`,
"warning",
);
return;

View file

@ -1,5 +1,5 @@
/**
 * SF Extensions Command — /sf extensions
 * SF Extensions Command — /extensions
*
* Manage the extension registry: list, enable, disable, info.
 * Self-contained — no imports outside the extensions tree (extensions are loaded
@ -105,7 +105,7 @@ function discoverManifests() {
}
// ─── Command Handler ────────────────────────────────────────────────────────
/**
* Handler for /sf extensions subcommands (list, enable, disable, info).
* Handler for /extensions subcommands (list, enable, disable, info).
*/
export async function handleExtensions(args, ctx) {
const parts = args.split(/\s+/).filter(Boolean);
@ -127,7 +127,7 @@ export async function handleExtensions(args, ctx) {
return;
}
ctx.ui.notify(
`Unknown: /sf extensions ${subCmd}. Usage: /sf extensions [list|enable|disable|info]`,
`Unknown: /extensions ${subCmd}. Usage: /extensions [list|enable|disable|info]`,
"warning",
);
}
@ -180,13 +180,13 @@ function handleList(ctx) {
*/
function handleEnable(id, ctx) {
if (!id) {
ctx.ui.notify("Usage: /sf extensions enable <id>", "warning");
ctx.ui.notify("Usage: /extensions enable <id>", "warning");
return;
}
const manifests = discoverManifests();
if (!manifests.has(id)) {
ctx.ui.notify(
`Extension "${id}" not found. Run /sf extensions list to see available extensions.`,
`Extension "${id}" not found. Run /extensions list to see available extensions.`,
"warning",
);
return;
@ -209,14 +209,14 @@ function handleEnable(id, ctx) {
}
function handleDisable(id, reason, ctx) {
if (!id) {
ctx.ui.notify("Usage: /sf extensions disable <id>", "warning");
ctx.ui.notify("Usage: /extensions disable <id>", "warning");
return;
}
const manifests = discoverManifests();
const manifest = manifests.get(id) ?? null;
if (!manifests.has(id)) {
ctx.ui.notify(
`Extension "${id}" not found. Run /sf extensions list to see available extensions.`,
`Extension "${id}" not found. Run /extensions list to see available extensions.`,
"warning",
);
return;
@ -252,7 +252,7 @@ function handleDisable(id, reason, ctx) {
}
function handleInfo(id, ctx) {
if (!id) {
ctx.ui.notify("Usage: /sf extensions info <id>", "warning");
ctx.ui.notify("Usage: /extensions info <id>", "warning");
return;
}
const manifests = discoverManifests();

View file

@ -1,5 +1,5 @@
/**
 * SF Command — /sf extract-learnings
 * SF Command — /extract-learnings
*
* Analyses completed milestone artefacts and dispatches an LLM turn that
* extracts structured knowledge into 4 categories:
@ -171,7 +171,7 @@ export async function handleExtractLearnings(args, ctx, pi) {
const { milestoneId } = parseExtractLearningsArgs(args);
if (!milestoneId) {
ctx.ui.notify(
"Usage: /sf extract-learnings <milestoneId> (e.g. M001)",
"Usage: /extract-learnings <milestoneId> (e.g. M001)",
"warning",
);
return;
@ -235,14 +235,14 @@ export async function handleExtractLearnings(args, ctx, pi) {
}
/**
* Canonical structured-extraction instructions, shared by the manual
* `/sf extract-learnings` path and the autonomous mode complete-milestone turn.
* `/extract-learnings` path and the autonomous mode complete-milestone turn.
*/
export function buildExtractionStepsBlock(ctx) {
return `## Structured Learnings Extraction
Perform the following steps IN ORDER. Each step is mandatory unless explicitly
marked optional. These instructions are the single source of truth shared by
\`/sf extract-learnings\` and the autonomous mode milestone-completion turn.
\`/extract-learnings\` and the autonomous mode milestone-completion turn.
### Step 1 — Classify findings into four categories

View file

@ -259,7 +259,7 @@ export async function handleSkillHealth(args, ctx) {
formatSkillDetail,
} = await import("./skill-health.js");
const basePath = projectRoot();
// /sf skill-health <skill-name> — detail view
// /skill-health <skill-name> — detail view
if (args && !args.startsWith("--")) {
const detail = formatSkillDetail(basePath, args);
ctx.ui.notify(detail, "info");
@ -288,7 +288,7 @@ export async function handleCapture(args, ctx) {
// Strip surrounding quotes from the argument
let text = args.trim();
if (!text) {
ctx.ui.notify('Usage: /sf capture "your thought here"', "warning");
ctx.ui.notify('Usage: /capture "your thought here"', "warning");
return;
}
// Remove wrapping quotes (single or double)
@ -299,7 +299,7 @@ export async function handleCapture(args, ctx) {
text = text.slice(1, -1);
}
if (!text) {
ctx.ui.notify('Usage: /sf capture "your thought here"', "warning");
ctx.ui.notify('Usage: /capture "your thought here"', "warning");
return;
}
const basePath = process.cwd();
@ -474,14 +474,14 @@ export async function handleKnowledge(args, ctx) {
const typeArg = parts[0]?.toLowerCase();
if (!typeArg || !["rule", "pattern", "lesson"].includes(typeArg)) {
ctx.ui.notify(
"Usage: /sf knowledge <rule|pattern|lesson> <description>\nExample: /sf knowledge rule Use real DB for integration tests",
"Usage: /knowledge <rule|pattern|lesson> <description>\nExample: /knowledge rule Use real DB for integration tests",
"warning",
);
return;
}
const entryText = parts.slice(1).join(" ").trim();
if (!entryText) {
ctx.ui.notify(`Usage: /sf knowledge ${typeArg} <description>`, "warning");
ctx.ui.notify(`Usage: /knowledge ${typeArg} <description>`, "warning");
return;
}
const type = typeArg;
@ -497,7 +497,7 @@ export async function handleRunHook(args, ctx, pi) {
const parts = args.trim().split(/\s+/);
if (parts.length < 3) {
ctx.ui.notify(
`Usage: /sf run-hook <hook-name> <unit-type> <unit-id>
`Usage: /run-hook <hook-name> <unit-type> <unit-id>
Unit types:
execute-task - Task execution (unit-id: M001/S01/T01)
@ -507,8 +507,8 @@ Unit types:
complete-milestone - Milestone completion (unit-id: M001)
Examples:
/sf run-hook code-review execute-task M001/S01/T01
/sf run-hook lint-check plan-slice M001/S01`,
/run-hook code-review execute-task M001/S01/T01
/run-hook lint-check plan-slice M001/S01`,
"warning",
);
return;
@ -627,7 +627,7 @@ export async function handleUpdate(ctx, deps = {}) {
? reloadError.message
: String(reloadError);
ctx.ui.notify(
`Updated to v${latest}, but automatic reload failed: ${message}. Use /sf reload to resume with the new version.`,
`Updated to v${latest}, but automatic reload failed: ${message}. Use /reload to resume with the new version.`,
"warning",
);
}

View file

@ -46,7 +46,7 @@ function formatProfileSummary(profile) {
"Runtime observation boundary:",
"- Profile state was stored only in .sf runtime state.",
"- No repo-committable artifact was written by profiling.",
"- Use /sf harness promote <finding-id> after review to create a tracked docs artifact.",
"- Use /harness promote <finding-id> after review to create a tracked docs artifact.",
"- Untracked files remain observed_only; SF did not stage or adopt them.",
].join("\n");
}
@ -56,7 +56,7 @@ function formatProfileSummary(profile) {
* Purpose: keep promotion artifacts deterministic while preventing path
* traversal through user-provided finding IDs.
*
* Consumer: `/sf harness promote <finding-id>`.
* Consumer: `/harness promote <finding-id>`.
*/
function findingIdSlug(findingId) {
const slug = findingId
@ -73,7 +73,7 @@ function findingIdSlug(findingId) {
* Purpose: promotion must be a writeback from recorded observations, not a new
* profiler run that can observe its own artifact or introduce timestamps.
*
* Consumer: `/sf harness promote <finding-id>`.
* Consumer: `/harness promote <finding-id>`.
*/
function parseRecordedProfile(profileJson) {
try {
@ -97,7 +97,7 @@ function parseRecordedProfile(profileJson) {
* Purpose: document the recorded observation facts without leaking absolute
* runtime paths or adding promotion-time fields.
*
* Consumer: `/sf harness promote <finding-id>`.
* Consumer: `/harness promote <finding-id>`.
*/
function profilePromotionPayload(profile, fallback) {
return {
@ -124,18 +124,18 @@ function profilePromotionPayload(profile, fallback) {
 * Purpose: satisfy AC1 of sf-moocr4rv-au7r3l — harness findings must be
* promotable into tracked docs with deterministic path and content.
*
* Consumer: `/sf harness promote <finding-id>` command.
* Consumer: `/harness promote <finding-id>` command.
*/
export async function handleHarnessPromote(findingId, ctx) {
const basePath = projectRoot();
const opened = await ensureDbOpen(basePath);
if (!opened) {
ctx.ui.notify("No SF database available. Run /sf init first.", "warning");
ctx.ui.notify("No SF database available. Run /init first.", "warning");
return;
}
if (!findingId || findingId.trim().length === 0) {
ctx.ui.notify(
"Usage: /sf harness promote <finding-id>\nPromotes a harness observation to a tracked docs artifact.",
"Usage: /harness promote <finding-id>\nPromotes a harness observation to a tracked docs artifact.",
"warning",
);
return;
@ -144,7 +144,7 @@ export async function handleHarnessPromote(findingId, ctx) {
const latestProfile = getLatestRepoProfile();
if (!latestProfile) {
ctx.ui.notify(
"No recorded harness profile found. Run /sf harness profile first; promotion writes tracked docs only from .sf runtime observations.",
"No recorded harness profile found. Run /harness profile first; promotion writes tracked docs only from .sf runtime observations.",
"warning",
);
return;
@ -215,7 +215,7 @@ export async function handleHarnessPromote(findingId, ctx) {
* Purpose: give users and future auto-flow slices an explicit entry point for
* harness evolution's read-only observation phase.
*
* Consumer: `/sf harness profile` command.
* Consumer: `/harness profile` command.
*/
export async function handleHarness(args, ctx) {
const subcommand = args.trim() || "profile";
@ -226,7 +226,7 @@ export async function handleHarness(args, ctx) {
}
if (!["profile", "snapshot", "status"].includes(subcommand)) {
ctx.ui.notify(
"Usage: /sf harness profile | /sf harness promote <finding-id>\nRecords a read-only .sf runtime profile or promotes a reviewed finding to tracked docs.",
"Usage: /harness profile | /harness promote <finding-id>\nRecords a read-only .sf runtime profile or promotes a reviewed finding to tracked docs.",
"warning",
);
return;
@ -234,7 +234,7 @@ export async function handleHarness(args, ctx) {
const basePath = projectRoot();
const opened = await ensureDbOpen(basePath);
if (!opened) {
ctx.ui.notify("No SF database available. Run /sf init first.", "warning");
ctx.ui.notify("No SF database available. Run /init first.", "warning");
return;
}
const profile = profileRepository(basePath);

View file

@ -35,7 +35,7 @@ export async function handleInspect(ctx) {
const { isDbAvailable, _getAdapter } = await import("./sf-db.js");
if (!(await ensureDbOpen(process.cwd())) || !isDbAvailable()) {
ctx.ui.notify(
"No SF database available. Run /sf autonomous to create one.",
"No SF database available. Run /autonomous to create one.",
"info",
);
return;
@ -43,7 +43,7 @@ export async function handleInspect(ctx) {
const adapter = _getAdapter();
if (!adapter) {
ctx.ui.notify(
"No SF database available. Run /sf autonomous to create one.",
"No SF database available. Run /autonomous to create one.",
"info",
);
return;
@ -83,7 +83,7 @@ export async function handleInspect(ctx) {
};
ctx.ui.notify(formatInspectOutput(data), "info");
} catch (err) {
logWarning("command", `/sf inspect failed: ${getErrorMessage(err)}`);
logWarning("command", `/inspect failed: ${getErrorMessage(err)}`);
ctx.ui.notify(
"Failed to inspect SF database. Check stderr for details.",
"error",

View file

@ -1,13 +1,13 @@
/**
 * /sf logs — Browse activity logs, debug logs, and metrics.
 * /logs — Browse activity logs, debug logs, and metrics.
*
* Subcommands:
* /sf logs List recent activity + debug logs
* /sf logs <N> Show summary of activity log #N
* /sf logs debug List debug log files
* /sf logs debug <N> Show debug log summary #N
* /sf logs tail [N] Show last N activity log entries (default 5)
* /sf logs clear Remove old activity and debug logs
* /logs List recent activity + debug logs
* /logs <N> Show summary of activity log #N
* /logs debug List debug log files
* /logs debug <N> Show debug log summary #N
* /logs tail [N] Show last N activity log entries (default 5)
* /logs clear Remove old activity and debug logs
*/
import {
existsSync,
@ -244,35 +244,35 @@ export async function handleLogs(args, ctx) {
const basePath = process.cwd();
const parts = args.trim().split(/\s+/).filter(Boolean);
const subCmd = parts[0] ?? "";
// /sf logs clear
// /logs clear
if (subCmd === "clear") {
await handleLogsClear(basePath, ctx);
return;
}
// /sf logs debug [N]
// /logs debug [N]
if (subCmd === "debug") {
const idx = parts[1] ? parseInt(parts[1], 10) : undefined;
await handleLogsDebug(basePath, ctx, idx);
return;
}
// /sf logs tail [N]
// /logs tail [N]
if (subCmd === "tail") {
const count = parts[1] ? parseInt(parts[1], 10) : 5;
await handleLogsTail(basePath, ctx, count);
return;
}
// /sf logs current — show active unit from auto.lock
// /logs current — show active unit from auto.lock
if (subCmd === "current") {
await handleLogsCurrent(basePath, ctx);
return;
}
// /sf logs <N> — show specific activity log
// /logs <N> — show specific activity log
if (subCmd && /^\d+$/.test(subCmd)) {
const seq = parseInt(subCmd, 10);
await handleLogsShow(basePath, ctx, seq);
return;
}
// /sf logs — list overview
// /logs — list overview
await handleLogsList(basePath, ctx);
}
// ─── Subcommand Handlers ────────────────────────────────────────────────────
@ -305,8 +305,8 @@ async function handleLogsList(basePath, ctx) {
lines.push(` ... and ${activities.length - 15} older entries`);
}
lines.push("");
lines.push(" View details: /sf logs <#>");
lines.push(" Active unit: /sf logs current");
lines.push(" View details: /logs <#>");
lines.push(" Active unit: /logs current");
}
if (debugLogs.length > 0) {
lines.push("");
@ -318,7 +318,7 @@ async function handleLogsList(basePath, ctx) {
lines.push(` ${i + 1}. ${d.filename} ${size} ${age}`);
}
lines.push("");
lines.push(" View details: /sf logs debug <#>");
lines.push(" View details: /logs debug <#>");
}
// Metrics summary
const metricsPath = join(sfRoot(basePath), "metrics.json");
@ -341,7 +341,7 @@ async function handleLogsList(basePath, ctx) {
);
}
lines.push("");
lines.push("Tip: Enable debug logging with SF_DEBUG=1 before /sf autonomous");
lines.push("Tip: Enable debug logging with SF_DEBUG=1 before /autonomous");
ctx.ui.notify(lines.join("\n"), "info");
}
async function handleLogsShow(basePath, ctx, seq) {
@ -349,7 +349,7 @@ async function handleLogsShow(basePath, ctx, seq) {
const entry = activities.find((e) => e.seq === seq);
if (!entry) {
ctx.ui.notify(
`Activity log #${seq} not found. Run /sf logs to see available logs.`,
`Activity log #${seq} not found. Run /logs to see available logs.`,
"warning",
);
return;
@ -421,7 +421,7 @@ async function handleLogsDebug(basePath, ctx, idx) {
);
}
lines.push("");
lines.push("View details: /sf logs debug <#>");
lines.push("View details: /logs debug <#>");
ctx.ui.notify(lines.join("\n"), "info");
return;
}
@ -571,7 +571,7 @@ async function handleLogsCurrent(basePath, ctx) {
);
if (!sessionExists) {
lines.push(
"Recommendation: Check .sf/runtime/ for error markers or run /sf doctor.",
"Recommendation: Check .sf/runtime/ for error markers or run /doctor.",
);
}
}

View file

@ -147,7 +147,7 @@ export async function handleCleanupBranches(ctx, basePath) {
export async function handleCleanupSnapshots(ctx, basePath) {
let refs;
try {
refs = nativeForEachRef(basePath, "refs/sf/snapshots/");
refs = nativeForEachRef(basePath, "refs/next/snapshots/");
} catch (e) {
logWarning("command", `snapshot ref list failed: ${e.message}`);
ctx.ui.notify("No snapshot refs to clean up.", "info");
@ -270,7 +270,7 @@ export async function handleCleanupWorktrees(ctx, basePath) {
export async function handleSkip(unitArg, ctx, basePath) {
if (!unitArg) {
ctx.ui.notify(
"Usage: /sf skip <unit-id> (e.g., /sf skip execute-task/M001/S01/T03 or /sf skip T03)",
"Usage: /skip <unit-id> (e.g., /skip execute-task/M001/S01/T03 or /skip T03)",
"info",
);
return;
@ -509,7 +509,7 @@ export async function handleCleanupProjects(args, ctx) {
}
if (!fix && orphaned.length > 0) {
lines.push(
`Run /sf cleanup projects --fix to permanently delete ${pl(orphaned.length, "orphaned director")}${orphaned.length === 1 ? "y" : "ies"}.`,
`Run /cleanup projects --fix to permanently delete ${pl(orphaned.length, "orphaned director")}${orphaned.length === 1 ? "y" : "ies"}.`,
);
ctx.ui.notify(lines.join("\n"), "warning");
return;

View file

@ -1,12 +1,12 @@
/**
 * MCP Status — `/sf mcp` command handler.
 * MCP Status — `/mcp` command handler.
*
* Shows configured MCP servers, their connection status, and available tools.
*
* Subcommands:
* /sf mcp Overview of all servers (alias: /sf mcp status)
* /sf mcp status Same as bare /sf mcp
* /sf mcp check <srv> Detailed status for a specific server
* /mcp Overview of all servers (alias: /mcp status)
* /mcp status Same as bare /mcp
* /mcp check <srv> Detailed status for a specific server
*/
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";
@ -69,7 +69,7 @@ export function formatMcpStatusReport(servers) {
lines.push(` ${icon} ${s.name} (${s.transport}) — ${status}`);
}
lines.push("");
lines.push("Use /sf mcp check <server> for details on a specific server.");
lines.push("Use /mcp check <server> for details on a specific server.");
lines.push("Use mcp_discover to connect and list tools for a server.");
return lines.join("\n");
}
@ -100,7 +100,7 @@ export function formatMcpServerDetail(server) {
}
// ─── Command handler ────────────────────────────────────────────────────────
/**
* Handle `/sf mcp [status|check <server>]`.
* Handle `/mcp [status|check <server>]`.
*/
export async function handleMcpStatus(args, ctx) {
const trimmed = args.trim();
@ -115,7 +115,7 @@ export async function handleMcpStatus(args, ctx) {
);
return;
}
// /sf mcp check <server>
// /mcp check <server>
if (lowered.startsWith("check ")) {
const serverName = trimmed.slice("check ".length).trim();
const config = configs.find((c) => c.name === serverName);
@ -157,7 +157,7 @@ export async function handleMcpStatus(args, ctx) {
);
return;
}
// /sf mcp or /sf mcp status
// /mcp or /mcp status
if (!lowered || lowered === "status") {
// Build status for each server
const statuses = [];
@ -190,7 +190,7 @@ export async function handleMcpStatus(args, ctx) {
}
// Unknown subcommand
ctx.ui.notify(
"Usage: /sf mcp [status|check <server>]\n\n" +
"Usage: /mcp [status|check <server>]\n\n" +
" status Show all MCP server statuses (default)\n" +
" check <server> Detailed status for a specific server",
"warning",

View file

@ -1,5 +1,5 @@
/**
 * SF Command — `/sf memory`
 * SF Command — `/memory`
*
* Subcommands:
* list show recent active memories
@ -97,7 +97,7 @@ function truncate(text, max) {
// ─── Handler ────────────────────────────────────────────────────────────────
export async function handleMemory(args, ctx, pi) {
const parsed = parseArgs(args);
// `/sf memory` or `/sf memory help`
// `/memory` or `/memory help`
if (parsed.sub === "" || parsed.sub === "help") {
ctx.ui.notify(usage(), "info");
return;
@ -160,7 +160,7 @@ export async function handleMemory(args, ctx, pi) {
}
function usage() {
return [
"Usage: /sf memory <subcommand>",
"Usage: /memory <subcommand>",
" list list recent active memories",
' search "<query>" embedding-ranked search (gateway-aware; static fallback)',
" show <MEM###> print one memory",
@ -208,7 +208,7 @@ async function handleSearch(ctx, parsed) {
const query = parsed.positional.join(" ").trim();
if (!query) {
ctx.ui.notify(
'Usage: /sf memory search "<query>" (uses embeddings when SF_LLM_GATEWAY_KEY is set; static fallback otherwise)',
'Usage: /memory search "<query>" (uses embeddings when SF_LLM_GATEWAY_KEY is set; static fallback otherwise)',
"warning",
);
return;
@ -240,7 +240,7 @@ async function handleSearch(ctx, parsed) {
}
function handleShow(ctx, id) {
if (!id) {
ctx.ui.notify("Usage: /sf memory show <MEM###>", "warning");
ctx.ui.notify("Usage: /memory show <MEM###>", "warning");
return;
}
const adapter = _getAdapter();
@ -277,7 +277,7 @@ function handleShow(ctx, id) {
}
function handleForget(ctx, id) {
if (!id) {
ctx.ui.notify("Usage: /sf memory forget <MEM###>", "warning");
ctx.ui.notify("Usage: /memory forget <MEM###>", "warning");
return;
}
const ok = supersedeMemory(id, "CAP_EXCEEDED");
@ -542,7 +542,7 @@ function sanitizeProbeError(error) {
}
function handleExport(ctx, target) {
if (!target) {
ctx.ui.notify("Usage: /sf memory export <path.json>", "warning");
ctx.ui.notify("Usage: /memory export <path.json>", "warning");
return;
}
try {
@ -585,7 +585,7 @@ function handleExport(ctx, target) {
}
function handleImport(ctx, target) {
if (!target) {
ctx.ui.notify("Usage: /sf memory import <path.json>", "warning");
ctx.ui.notify("Usage: /memory import <path.json>", "warning");
return;
}
try {
@ -624,7 +624,7 @@ function handleDecay(ctx) {
function handleCap(ctx, arg) {
const max = arg ? Number.parseInt(arg, 10) : 50;
if (!Number.isFinite(max) || max < 1) {
ctx.ui.notify("Usage: /sf memory cap <max> (default 50)", "warning");
ctx.ui.notify("Usage: /memory cap <max> (default 50)", "warning");
return;
}
enforceMemoryCap(max);
@ -634,7 +634,7 @@ function handleSources(ctx) {
const sources = listMemorySources(30);
if (sources.length === 0) {
ctx.ui.notify(
"No memory sources yet. Use `/sf memory ingest <path|url>` to add one.",
"No memory sources yet. Use `/memory ingest <path|url>` to add one.",
"info",
);
return;
@ -648,7 +648,7 @@ function handleSources(ctx) {
async function handleNote(ctx, args) {
const text = args.positional.join(" ").trim();
if (!text) {
ctx.ui.notify('Usage: /sf memory note "your note"', "warning");
ctx.ui.notify('Usage: /memory note "your note"', "warning");
return;
}
try {
@ -666,7 +666,7 @@ async function handleIngest(ctx, args) {
const target = args.positional[0];
if (!target) {
ctx.ui.notify(
"Usage: /sf memory ingest <path|url> [--tag a,b] [--scope project|global]",
"Usage: /memory ingest <path|url> [--tag a,b] [--scope project|global]",
"warning",
);
return;
@ -687,7 +687,7 @@ async function handleIngest(ctx, args) {
ctx.ui.notify(summarizeIngest(result), "info");
if (args.extract && result.sourceId) {
ctx.ui.notify(
`(Use \`/sf memory extract ${result.sourceId}\` to trigger extraction manually.)`,
`(Use \`/memory extract ${result.sourceId}\` to trigger extraction manually.)`,
"info",
);
}
@ -697,7 +697,7 @@ async function handleIngest(ctx, args) {
}
function handleExtractSource(ctx, pi, id) {
if (!id) {
ctx.ui.notify("Usage: /sf memory extract <SRC-xxx>", "warning");
ctx.ui.notify("Usage: /memory extract <SRC-xxx>", "warning");
return;
}
const source = getMemorySource(id);

View file

@ -5,7 +5,7 @@
* ~/.sf/projects/<hash>/ into the repo (promote), plus visibility (list)
* and comparison (diff) companions.
*
* Consumer: SF ops handler (commands/handlers/ops.js) via `/sf plan <subcmd>`.
* Consumer: SF ops handler (commands/handlers/ops.js) via `/plan <subcmd>`.
*/
import { spawnSync } from "node:child_process";
@ -202,7 +202,7 @@ export async function handlePlanPromote(args, ctx) {
if (!source) {
ctx.ui.notify(
"Usage: /sf plan promote <source> [--to <dir>] [--rename <name>] [--edit]",
"Usage: /plan promote <source> [--to <dir>] [--rename <name>] [--edit]",
"warning",
);
return;
@ -321,7 +321,7 @@ export async function handlePlanList(_args, ctx) {
export async function handlePlanDiff(args, ctx) {
const source = args.trim();
if (!source) {
ctx.ui.notify("Usage: /sf plan diff <source>", "warning");
ctx.ui.notify("Usage: /plan diff <source>", "warning");
return;
}
@ -392,7 +392,7 @@ export async function handlePlanDiff(args, ctx) {
* Purpose: make docs/specs reproducible human exports from SF's DB-first
* working model instead of unmanaged snapshots.
*
* Consumer: handlePlan router for `/sf plan specs ...`.
* Consumer: handlePlan router for `/plan specs ...`.
*/
export async function handlePlanSpecs(args, ctx) {
const subcmd = args.trim() || "diff";
@ -448,7 +448,7 @@ export async function handlePlanSpecs(args, ctx) {
return { ok: true, changed };
}
ctx.ui.notify(
`Spec exports are stale:\n${changed.map((p) => ` ${p}`).join("\n")}\nRun /sf plan specs generate.`,
`Spec exports are stale:\n${changed.map((p) => ` ${p}`).join("\n")}\nRun /plan specs generate.`,
"error",
);
return { ok: false, changed };
@ -464,7 +464,7 @@ export async function handlePlanSpecs(args, ctx) {
return { ok: changed.length === 0, changed };
}
ctx.ui.notify("Usage: /sf plan specs generate|diff|check", "warning");
ctx.ui.notify("Usage: /plan specs generate|diff|check", "warning");
return { ok: false, changed: [] };
}
@ -489,7 +489,7 @@ export async function handlePlan(args, ctx) {
return true;
}
if (trimmed === "") {
ctx.ui.notify("Usage: /sf plan promote|list|diff|specs ...", "info");
ctx.ui.notify("Usage: /plan promote|list|diff|specs ...", "info");
return true;
}
return false;

View file

@ -1,5 +1,5 @@
/**
 * SF Command — /sf pr-branch
 * SF Command — /pr-branch
*
* Creates a clean PR branch by cherry-picking commits while stripping
* any changes to .sf/, .planning/, and PLAN.md paths. Useful for

View file

@ -115,7 +115,7 @@ export async function handlePrefs(args, ctx) {
return;
}
ctx.ui.notify(
"Usage: /sf prefs [global|project|status|wizard|setup|import-claude [global|project]]",
"Usage: /prefs [global|project|status|wizard|setup|import-claude [global|project]]",
"info",
);
}

View file

@ -1,5 +1,5 @@
/**
 * /sf rate — Submit feedback on the last unit's model tier assignment.
 * /rate — Submit feedback on the last unit's model tier assignment.
* Feeds into the adaptive routing history so future dispatches improve.
*/
import { loadLedgerFromDisk } from "./metrics.js";
@ -10,7 +10,7 @@ export async function handleRate(args, ctx, basePath) {
const rating = args.trim().toLowerCase();
if (!rating || !VALID_RATINGS.has(rating)) {
ctx.ui.notify(
"Usage: /sf rate <over|ok|under>\n" +
"Usage: /rate <over|ok|under>\n" +
" over — model was overpowered for that task (encourage cheaper)\n" +
" ok — model was appropriate\n" +
" under — model was too weak (encourage stronger)",

View file

@ -1,5 +1,5 @@
/**
 * commands-scaffold-sync.ts — `/sf scaffold sync` (ADR-021 Phase E).
 * commands-scaffold-sync.ts — `/scaffold sync` (ADR-021 Phase E).
*
* Manual escape hatch over the Phase C automatic scaffold sync. Lets the user:
* - Inspect drift without modifying anything (`--dry-run`).
@ -15,7 +15,7 @@
import { ensureAgenticDocsScaffold } from "./agentic-docs-scaffold.js";
import { projectRoot } from "./commands/context.js";
import { detectScaffoldDrift } from "./scaffold-drift.js";
/** Parse the args string for `/sf scaffold sync`. Tolerates extra whitespace. */
/** Parse the args string for `/scaffold sync`. Tolerates extra whitespace. */
export function parseScaffoldSyncArgs(args) {
const trimmed = (args || "").trim();
const tokens = trimmed.length > 0 ? trimmed.split(/\s+/) : [];
@ -144,7 +144,7 @@ async function tryLoadScaffoldKeeper() {
return null;
}
/**
* Top-level handler for `/sf scaffold sync [args]`.
* Top-level handler for `/scaffold sync [args]`.
*
 * Always notifies via `ctx.ui.notify` — never throws on the sync paths
* themselves; underlying calls (`ensureAgenticDocsScaffold`,

View file

@ -1,17 +1,17 @@
/**
 * SF Command — /sf scan
 * SF Command — /scan
*
 * Rapid codebase assessment — lightweight alternative to /sf map-codebase.
 * Rapid codebase assessment — lightweight alternative to /map-codebase.
* Spawns one focused AI analysis pass and writes structured documents to
* .sf/codebase/ for use by planning and execution phases.
*
* Usage:
* /sf scan tech+arch focus (default)
* /sf scan --focus tech technology stack + integrations only
* /sf scan --focus arch architecture + structure only
* /sf scan --focus quality conventions + testing patterns only
* /sf scan --focus concerns technical debt + concerns only
* /sf scan --focus tech+arch explicit default (same as no flag)
* /scan tech+arch focus (default)
* /scan --focus tech technology stack + integrations only
* /scan --focus arch architecture + structure only
* /scan --focus quality conventions + testing patterns only
* /scan --focus concerns technical debt + concerns only
* /scan --focus tech+arch explicit default (same as no flag)
*/
import { existsSync, mkdirSync } from "node:fs";
import { join, relative } from "node:path";

View file

@ -1,5 +1,5 @@
/**
 * SF Command — /sf schedule
 * SF Command — /schedule
*
* Schedule management: add, list, done, cancel, snooze, run.
* Entries are stored in SQLite (`schedule_entries`). Legacy schedule JSONL is
@ -240,7 +240,7 @@ async function addItem(args, ctx) {
}
if (!dueAt) {
ctx.ui.notify(
"Usage: /sf schedule add --in <duration> <title>\n /sf schedule add --at <ISO-date> <title>",
"Usage: /schedule add --in <duration> <title>\n /schedule add --at <ISO-date> <title>",
"warning",
);
return;
@ -250,7 +250,7 @@ async function addItem(args, ctx) {
kind === "command" ? _commandFromParts(titleParts) : _joinPlain(titleParts);
if (!title) {
ctx.ui.notify(
"Missing title. Example: /sf schedule add --in 2w 'Review adoption metrics'",
"Missing title. Example: /schedule add --in 2w 'Review adoption metrics'",
"warning",
);
return;
@ -358,7 +358,7 @@ async function listItems(args, ctx) {
async function markDone(args, ctx) {
const idPrefix = _joinPlain(_splitArgs(args));
if (!idPrefix) {
ctx.ui.notify("Usage: /sf schedule done \u003cid\u003e", "warning");
ctx.ui.notify("Usage: /schedule done \u003cid\u003e", "warning");
return;
}
const store = createScheduleStore(_basePath());
@ -379,7 +379,7 @@ async function markDone(args, ctx) {
async function markCancel(args, ctx) {
const idPrefix = _joinPlain(_splitArgs(args));
if (!idPrefix) {
ctx.ui.notify("Usage: /sf schedule cancel \u003cid\u003e", "warning");
ctx.ui.notify("Usage: /schedule cancel \u003cid\u003e", "warning");
return;
}
const store = createScheduleStore(_basePath());
@ -412,7 +412,7 @@ async function snoozeItem(args, ctx) {
if (!idPrefix || !by) {
ctx.ui.notify(
"Usage: /sf schedule snooze \u003cid\u003e --by \u003cduration\u003e",
"Usage: /schedule snooze \u003cid\u003e --by \u003cduration\u003e",
"warning",
);
return;
@ -460,10 +460,7 @@ async function runItem(args, ctx) {
if (!idPrefix) idPrefix = part;
}
if (!idPrefix) {
ctx.ui.notify(
"Usage: /sf schedule run [--dry-run] \u003cid\u003e",
"warning",
);
ctx.ui.notify("Usage: /schedule run [--dry-run] \u003cid\u003e", "warning");
return;
}
const store = createScheduleStore(_basePath());
@ -537,7 +534,7 @@ async function runItem(args, ctx) {
// ─── Public handler ─────────────────────────────────────────────────────────
/**
* Handle /sf schedule subcommands.
* Handle /schedule subcommands.
*
* Purpose: route schedule CLI input to the appropriate subcommand.
*
@ -566,7 +563,7 @@ export async function handleSchedule(args, ctx) {
return runItem(rest, ctx);
case "":
ctx.ui.notify(
"Usage: /sf schedule add|list|done|cancel|snooze|run\n" +
"Usage: /schedule add|list|done|cancel|snooze|run\n" +
" add --in \u003cduration\u003e [--kind \u003ckind\u003e] [--scope \u003cscope\u003e] [--autonomous-dispatch] \u003ctitle-or-command\u003e\n" +
" list [--due] [--all] [--json] [--scope \u003cscope\u003e]\n" +
" done \u003cid\u003e\n" +
@ -578,7 +575,7 @@ export async function handleSchedule(args, ctx) {
return;
default:
ctx.ui.notify(
`Unknown schedule subcommand: ${sub}. Use /sf schedule for usage.`,
`Unknown schedule subcommand: ${sub}. Use /schedule for usage.`,
"warning",
);
}

View file

@@ -1,5 +1,5 @@
/**
* SF Command /sf session-report
* SF Command /session-report
*
* Summarizes the current session: tasks completed, cost, tokens,
* duration, model usage breakdown.

View file

@@ -1,5 +1,5 @@
/**
* SF Command /sf ship
* SF Command /ship
*
* Creates a PR from milestone artifacts: generates title + body from
* roadmap, slice summaries, and metrics, then opens via `gh pr create`.

View file

@@ -5,7 +5,7 @@
* docs, test, and implementation artifacts without treating raw notes as
* approved runtime behavior.
*
* Consumer: `/sf todo triage` command.
* Consumer: `/todo triage` command.
*/
import { createHash } from "node:crypto";
@@ -603,7 +603,7 @@ export async function handleTodo(args, ctx, _pi) {
const ci = parts.includes("--ci");
if (subcommand !== "triage") {
ctx.ui.notify(
"Usage: /sf todo triage [--no-clear] [--backlog] [--ci]\nReads root TODO.md, writes .sf/triage artifacts, and clears processed dump notes by default.",
"Usage: /todo triage [--no-clear] [--backlog] [--ci]\nReads root TODO.md, writes .sf/triage artifacts, and clears processed dump notes by default.",
"warning",
);
return;

View file

@@ -207,7 +207,7 @@ export async function handleUok(args, ctx) {
const trimmed = args.trim();
if (trimmed === "help" || trimmed === "--help") {
ctx.ui.notify(
"Usage: /sf uok [status|metrics|circuit-breakers|gates|messages|--json]\n\n status — UOK ledger health, last run, last error, historical drift, startup gate, and gate health\n metrics — Render Prometheus-format metrics to .sf/runtime/uok-metrics.prom and display\n circuit-breakers — List all circuit breaker states and failure streaks\n gates — List observed gate runs and circuit breaker state\n messages — Show message bus status\n --json — Same as status but outputs JSON",
"Usage: /uok [status|metrics|circuit-breakers|gates|messages|--json]\n\n status — UOK ledger health, last run, last error, historical drift, startup gate, and gate health\n metrics — Render Prometheus-format metrics to .sf/runtime/uok-metrics.prom and display\n circuit-breakers — List all circuit breaker states and failure streaks\n gates — List observed gate runs and circuit breaker state\n messages — Show message bus status\n --json — Same as status but outputs JSON",
"info",
);
return;
@@ -297,7 +297,7 @@ export async function handleUok(args, ctx) {
lines.push(`Unique conversations: ${m.uniqueConversations}`);
lines.push("");
lines.push(
"Tip: /sf uok messages compact — remove messages older than retention period",
"Tip: /uok messages compact — remove messages older than retention period",
);
ctx.ui.notify(lines.join("\n"), "info");
return;

View file

@@ -1,7 +1,7 @@
/**
* SF Workflow Template Commands /sf start, /sf templates
* SF Workflow Template Commands /start, /templates
*
* Handles the `/sf start [template] [description]` and `/sf templates` commands.
* Handles the `/start [template] [description]` and `/templates` commands.
* Resolves templates by name or auto-detection, then dispatches the workflow prompt.
*/
import {
@@ -177,10 +177,10 @@ function findInProgressWorkflows(basePath) {
results.sort((a, b) => b.updatedAt.localeCompare(a.updatedAt));
return results;
}
// ─── /sf start ──────────────────────────────────────────────────────────────
// ─── /start ──────────────────────────────────────────────────────────────
export async function handleStart(args, ctx, pi) {
const trimmed = args.trim();
// /sf start --list → same as /sf templates
// /start --list → same as /templates
if (trimmed === "--list" || trimmed === "list") {
ctx.ui.notify(listTemplates(), "info");
return;
@@ -191,7 +191,7 @@ export async function handleStart(args, ctx, pi) {
if (isAutoActive()) {
ctx.ui.notify(
"Cannot start a workflow template while autonomous mode is running.\n" +
"Run /sf pause first, then /sf start.",
"Run /pause first, then /start.",
"warning",
);
return;
@@ -199,12 +199,12 @@ export async function handleStart(args, ctx, pi) {
if (isAutoPaused()) {
ctx.ui.notify(
"Autonomous mode is paused. Starting a workflow template will run independently.\n" +
"The paused autonomous session can be resumed later with /sf autonomous.",
"The paused autonomous session can be resumed later with /autonomous.",
"info",
);
}
// ─── Resume detection ───────────────────────────────────────────────────
// /sf start --resume or /sf start resume → resume in-progress workflow
// /start --resume or /start resume → resume in-progress workflow
if (trimmed === "--resume" || trimmed === "resume") {
const basePath = process.cwd();
const inProgress = findInProgressWorkflows(basePath);
@@ -265,7 +265,7 @@ export async function handleStart(args, ctx, pi) {
);
return;
}
// Show in-progress workflows when /sf start is called with no args
// Show in-progress workflows when /start is called with no args
if (!trimmed) {
const basePath = process.cwd();
const inProgress = findInProgressWorkflows(basePath);
@@ -279,12 +279,12 @@ export async function handleStart(args, ctx, pi) {
`In-progress workflow found:\n` +
` ${wf.templateName}: "${wf.description}"\n` +
` Phase ${completedCount + 1}/${wf.phases.length}: ${activePhase?.name ?? "unknown"}\n\n` +
`Run /sf start resume to continue it.\n`,
`Run /start resume to continue it.\n`,
"info",
);
}
}
// /sf start --dry-run <template> → preview without executing
// /start --dry-run <template> → preview without executing
const dryRun = trimmed.includes("--dry-run");
const cleanedArgs = trimmed.replace(/--dry-run\s*/, "").trim();
// Parse: first word might be a template name, rest is description
@@ -323,9 +323,9 @@ export async function handleStart(args, ctx, pi) {
} else if (detected.length > 1) {
const choices = detected
.slice(0, 4)
.map((m) => ` /sf start ${m.id} ${cleanedArgs}`);
.map((m) => ` /start ${m.id} ${cleanedArgs}`);
ctx.ui.notify(
`Multiple templates could match. Pick one:\n\n${choices.join("\n")}\n\nOr specify explicitly: /sf start <template> <description>`,
`Multiple templates could match. Pick one:\n\n${choices.join("\n")}\n\nOr specify explicitly: /start <template> <description>`,
"info",
);
return;
@@ -337,7 +337,7 @@ export async function handleStart(args, ctx, pi) {
ctx.ui.notify(formatStartUsage(), "info");
} else {
ctx.ui.notify(
`No template matched "${firstWord}". Run /sf start to see available templates.`,
`No template matched "${firstWord}". Run /start to see available templates.`,
"warning",
);
}
@@ -386,20 +386,20 @@ export async function handleStart(args, ctx, pi) {
if (templateId === "full-project") {
const root = sfRoot(basePath);
if (!existsSync(root)) {
ctx.ui.notify("Routing to /sf init for full project setup...", "info");
// Trigger /sf init by dispatching to the handler
ctx.ui.notify("Routing to /init for full project setup...", "info");
// Trigger /init by dispatching to the handler
pi.sendMessage(
{
customType: "sf-workflow-template",
content:
"The user wants to start a full SF project. Run `/sf init` to bootstrap the project, then `/sf autonomous` to begin execution.",
"The user wants to start a full SF project. Run `/init` to bootstrap the project, then `/autonomous` to begin execution.",
display: false,
},
{ triggerTurn: true },
);
} else {
ctx.ui.notify(
"Project already initialized. Use `/sf autonomous` to continue or `/sf discuss` to start a new milestone.",
"Project already initialized. Use `/autonomous` to continue or `/discuss` to start a new milestone.",
"info",
);
}
@@ -487,10 +487,10 @@ export async function handleStart(args, ctx, pi) {
setActiveRunDir(runDir);
startAutoDetached(ctx, pi, basePath, false);
}
// ─── /sf templates ──────────────────────────────────────────────────────────
// ─── /templates ──────────────────────────────────────────────────────────
export async function handleTemplates(args, ctx) {
const trimmed = args.trim();
// /sf templates info <name>
// /templates info <name>
if (trimmed.startsWith("info ")) {
const name = trimmed.replace(/^info\s+/, "").trim();
const info = getTemplateInfo(name);
@@ -498,17 +498,17 @@ export async function handleTemplates(args, ctx) {
ctx.ui.notify(info, "info");
} else {
ctx.ui.notify(
`Unknown template "${name}". Run /sf templates to see available templates.`,
`Unknown template "${name}". Run /templates to see available templates.`,
"warning",
);
}
return;
}
// /sf templates — list all
// /templates — list all
ctx.ui.notify(listTemplates(), "info");
}
/**
* Return template IDs for autocomplete in /sf templates info <name>.
* Return template IDs for autocomplete in /templates info <name>.
*/
export function getTemplateCompletions(prefix) {
try {

View file

@@ -1,4 +1,4 @@
// SF — In-TUI handler for /sf worktree commands (list, merge, clean, remove).
// SF — In-TUI handler for /worktree commands (list, merge, clean, remove).
//
// Mirrors the CLI subcommands but emits results via ctx.ui.notify() instead
// of writing colored output to stderr. Reuses the same extension modules
@@ -87,11 +87,11 @@ export function formatWorktreeList(statuses) {
lines.push("");
}
lines.push("Commands:");
lines.push(" /sf worktree merge <name> Merge into main and clean up");
lines.push(" /worktree merge <name> Merge into main and clean up");
lines.push(
" /sf worktree remove <name> Remove a worktree (--force to skip safety checks)",
" /worktree remove <name> Remove a worktree (--force to skip safety checks)",
);
lines.push(" /sf worktree clean Remove all merged/empty worktrees");
lines.push(" /worktree clean Remove all merged/empty worktrees");
return lines.join("\n");
}
export function formatCleanKeepReason(status) {
@@ -125,7 +125,7 @@ async function handleMerge(args, ctx) {
} else {
const names = worktrees.map((w) => w.name).join(", ");
ctx.ui.notify(
`Usage: /sf worktree merge <name>\n\nWorktrees: ${names}`,
`Usage: /worktree merge <name>\n\nWorktrees: ${names}`,
"warning",
);
return;
@@ -163,7 +163,7 @@ async function handleMerge(args, ctx) {
[
`Auto-commit before merge failed: ${msg}`,
"",
`Commit or stash changes in ${wt.path}, then re-run /sf worktree merge ${target}.`,
`Commit or stash changes in ${wt.path}, then re-run /worktree merge ${target}.`,
].join("\n"),
"error",
);
@@ -179,12 +179,12 @@ async function handleMerge(args, ctx) {
const msg = err instanceof Error ? err.message : String(err);
if (err instanceof SFError && err.code === SF_GIT_ERROR) {
ctx.ui.notify(
`Merge requires the main branch to be checked out: ${msg}\n\nSwitch to ${mainBranch} (e.g. 'git checkout ${mainBranch}'), then re-run /sf worktree merge ${target}.`,
`Merge requires the main branch to be checked out: ${msg}\n\nSwitch to ${mainBranch} (e.g. 'git checkout ${mainBranch}'), then re-run /worktree merge ${target}.`,
"error",
);
} else {
ctx.ui.notify(
`Merge failed: ${msg}\n\nResolve conflicts manually, then run /sf worktree merge ${target} again.`,
`Merge failed: ${msg}\n\nResolve conflicts manually, then run /worktree merge ${target} again.`,
"error",
);
}
@@ -205,8 +205,8 @@ async function handleMerge(args, ctx) {
"",
`Cleanup failed after the merge succeeded: ${msg}`,
err instanceof SFError && err.code === SF_GIT_ERROR
? `Switch to ${mainBranch} (e.g. 'git checkout ${mainBranch}'), then remove the worktree manually with /sf worktree remove ${target} --force.`
: `Remove the worktree manually with /sf worktree remove ${target} --force, or run 'git worktree prune' to clean up dangling registrations.`,
? `Switch to ${mainBranch} (e.g. 'git checkout ${mainBranch}'), then remove the worktree manually with /worktree remove ${target} --force.`
: `Remove the worktree manually with /worktree remove ${target} --force, or run 'git worktree prune' to clean up dangling registrations.`,
];
ctx.ui.notify(cleanupLines.join("\n"), "warning");
}
@@ -256,7 +256,7 @@ async function handleRemove(args, ctx) {
const force = tokens.includes("--force");
const name = tokens.find((t) => t !== "--force");
if (!name) {
ctx.ui.notify("Usage: /sf worktree remove <name> [--force]", "warning");
ctx.ui.notify("Usage: /worktree remove <name> [--force]", "warning");
return;
}
const worktrees = listWorktrees(basePath);
@@ -275,8 +275,8 @@ async function handleRemove(args, ctx) {
[
`Worktree "${name}" has pending changes (${formatCleanKeepReason(status)}).`,
"",
` Merge first: /sf worktree merge ${name}`,
` Or force-remove: /sf worktree remove ${name} --force`,
` Merge first: /worktree merge ${name}`,
` Or force-remove: /worktree remove ${name} --force`,
].join("\n"),
"warning",
);
@@ -295,7 +295,7 @@ async function handleRemove(args, ctx) {
}
// ─── Help text ──────────────────────────────────────────────────────────────
const HELP_TEXT = [
"Usage: /sf worktree <command> [args]",
"Usage: /worktree <command> [args]",
"",
"Commands:",
" list Show all worktrees with status",

View file

@@ -1,6 +1,6 @@
import { importExtensionModule } from "@singularity-forge/pi-coding-agent";
export { registerSFCommand } from "./commands/index.js";
export { registerSFCommand, registerSFCommands } from "./commands/index.js";
export async function handleSFCommand(...args) {
const { handleSFCommand: dispatch } = await importExtensionModule(
import.meta.url,

View file

@@ -12,7 +12,35 @@ const sfHome = process.env.SF_HOME || join(homedir(), ".sf");
* Comprehensive description of all available SF commands for help text.
*/
export const SF_COMMAND_DESCRIPTION =
"SF — Singularity Forge: /sf help|start|templates|next|autonomous|stop|pause|reload|status|widget|visualize|queue|quick|discuss|capture|triage|todo|dispatch|history|undo|undo-task|reset-slice|rate|skip|export|cleanup|model|mode|show-config|prefs|config|keys|hooks|run-hook|skill-health|doctor|uok|logs|forensics|changelog|migrate|remote|steer|knowledge|harness|solver-eval|new-milestone|parallel|cmux|park|unpark|init|setup|inspect|extensions|update|fast|mcp|rethink|codebase|notifications|ship|do|session-report|backlog|pr-branch|add-tests|scan|scaffold|extract-learnings|eval-review|plan";
"SF — Singularity Forge: /help|start|templates|next|autonomous|pause|status|widget|visualize|queue|quick|discuss|capture|triage|todo|dispatch|history|undo|undo-task|reset-slice|rate|skip|cleanup|mode|show-config|prefs|config|keys|hooks|run-hook|skill-health|doctor|uok|logs|forensics|migrate|remote|steer|knowledge|harness|solver-eval|new-milestone|parallel|cmux|park|unpark|init|setup|inspect|extensions|update|fast|mcp|rethink|codebase|notifications|ship|do|session-report|backlog|pr-branch|add-tests|scan|scaffold|extract-learnings|eval-review|plan";
export const BASE_RUNTIME_COMMANDS = new Set([
"settings",
"model",
"scoped-models",
"export",
"share",
"copy",
"name",
"session",
"changelog",
"hotkeys",
"fork",
"tree",
"provider",
"login",
"logout",
"new",
"compact",
"resume",
"reload",
"thinking",
"edit-mode",
"terminal",
"stop",
"exit",
"quit",
]);
/**
* Top-level SF subcommands with descriptions.
*/
@@ -26,7 +54,7 @@ export const TOP_LEVEL_SUBCOMMANDS = [
{ cmd: "stop", desc: "Stop autonomous mode gracefully" },
{
cmd: "pause",
desc: "Pause autonomous mode (preserves state, /sf autonomous to resume)",
desc: "Pause autonomous mode (preserves state, /autonomous to resume)",
},
{
cmd: "reload",
@@ -42,7 +70,7 @@ export const TOP_LEVEL_SUBCOMMANDS = [
{ cmd: "quick", desc: "Execute a quick task without full planning overhead" },
{ cmd: "discuss", desc: "Discuss architecture and decisions" },
{ cmd: "capture", desc: "Fire-and-forget thought capture" },
{ cmd: "debug", desc: "Create and inspect persistent /sf debug sessions" },
{ cmd: "debug", desc: "Create and inspect persistent /debug sessions" },
{ cmd: "scan", desc: "Run source and project scans" },
{
cmd: "escalate",
@@ -194,6 +222,14 @@ export const TOP_LEVEL_SUBCOMMANDS = [
desc: "Promote planning artifacts from ~/.sf/ to docs/ (promote, list, diff)",
},
];
export const DIRECT_SF_COMMANDS = TOP_LEVEL_SUBCOMMANDS.filter(
(command) => !BASE_RUNTIME_COMMANDS.has(command.cmd),
);
export const DIRECT_SF_COMMAND_NAMES = DIRECT_SF_COMMANDS.map(
(command) => command.cmd,
);
/**
* Nested subcommand definitions for multi-level completion.
*/
@@ -605,3 +641,18 @@ export function getSfArgumentCompletions(prefix) {
}
return [];
}
export function getSfTopLevelCommandCompletions(command, prefix) {
const suffix = typeof prefix === "string" ? prefix : "";
const fullPrefix = suffix.length > 0 ? `${command} ${suffix}` : `${command} `;
const completions = getSfArgumentCompletions(fullPrefix) ?? [];
const commandPrefix = `${command} `;
return completions.map((completion) => ({
...completion,
value:
typeof completion.value === "string" &&
completion.value.startsWith(commandPrefix)
? completion.value.slice(commandPrefix.length)
: completion.value,
}));
}

View file

@@ -49,7 +49,7 @@ export async function guardRemoteSession(ctx, _pi) {
if (process.env.SF_WEB_BRIDGE_TUI === "1") {
ctx.ui.notify(
`Another autonomous mode session (PID ${remote.pid}) is running on this project (${unitLabel}). ` +
`Stop it first with /sf stop, or use /sf steer to redirect it.`,
`Stop it first with /autonomous stop, or use /steer to redirect it.`,
"warning",
);
return false;
@@ -71,7 +71,7 @@ export async function guardRemoteSession(ctx, _pi) {
id: "steer",
label: "Steer the session",
description:
"Use /sf steer <instruction> to redirect the running session.",
"Use /steer <instruction> to redirect the running session.",
},
{
id: "stop",
@@ -84,7 +84,7 @@ export async function guardRemoteSession(ctx, _pi) {
description: "Start a new session, terminating the existing one.",
},
],
notYetMessage: "Run /sf when ready.",
notYetMessage: "Run /next when ready.",
});
if (choice === "status") {
await handleStatus(ctx);
@@ -92,8 +92,8 @@ export async function guardRemoteSession(ctx, _pi) {
}
if (choice === "steer") {
ctx.ui.notify(
"Use /sf steer <instruction> to redirect the running autonomous mode session.\n" +
"Example: /sf steer Use Postgres instead of SQLite",
"Use /steer <instruction> to redirect the running autonomous mode session.\n" +
"Example: /steer Use Postgres instead of SQLite",
"info",
);
return false;

View file

@@ -30,7 +30,7 @@ export async function handleSFCommand(args, ctx, pi) {
throw err;
}
ctx.ui.notify(
`Unknown: /sf ${trimmed}. Run /sf help for available commands.`,
`Unknown: /${trimmed}. Run /help for available commands.`,
"warning",
);
}

View file

@@ -20,8 +20,8 @@ import { guardRemoteSession, projectRoot } from "../context.js";
/**
* Parse --yolo flag and optional file path from the autonomous command string.
* Supports: `/sf autonomous --yolo path/to/file.md` or
* `/sf autonomous -y path/to/file.md`.
* Supports: `/autonomous --yolo path/to/file.md` or
* `/autonomous -y path/to/file.md`.
*/
function parseYoloFlag(trimmed) {
const yoloRe = /(?:--yolo|-y)\s+("(?:[^"\\]|\\.)*"|'(?:[^'\\]|\\.)*'|\S+)/;
@@ -53,7 +53,7 @@ export function parseMilestoneTarget(input) {
/**
* Dispatch entry point for the autonomous command family.
*
* Handles `/sf autonomous`, `/sf next`, `/sf stop`, `/sf pause`, and their flag
* Handles `/autonomous`, `/next`, `/autonomous stop`, `/pause`, and their flag
* variants.
* Returns `true` when the command was recognised and routed (caller stops
* searching), `false` when the command isn't autonomous-related.
@@ -118,6 +118,14 @@ export async function handleAutonomousCommand(trimmed, ctx, pi) {
}
if (isAutonomousVerb) {
const autonomousArgsText = trimmed.replace(/^autonomous\b/, "").trim();
if (autonomousArgsText === "stop") {
await stopAutonomousRun(ctx, pi);
return true;
}
if (autonomousArgsText === "pause") {
await pauseAutonomousRun(ctx, pi);
return true;
}
const { yoloSeedFile, rest: afterYolo } = parseYoloFlag(autonomousArgsText);
const { milestoneId, rest: afterMilestone } =
parseMilestoneTarget(afterYolo);
@@ -173,39 +181,11 @@ export async function handleAutonomousCommand(trimmed, ctx, pi) {
return true;
}
if (trimmed === "stop") {
if (!isAutoActive() && !isAutoPaused()) {
const result = stopAutoRemote(projectRoot());
if (result.found) {
ctx.ui.notify(
`Sent stop signal to autonomous mode session (PID ${result.pid}). It will shut down gracefully.`,
"info",
);
} else if (result.error) {
ctx.ui.notify(
`Failed to stop remote autonomous run: ${result.error}`,
"error",
);
} else {
ctx.ui.notify("Autonomous mode is not running.", "info");
}
return true;
}
await stopAuto(ctx, pi, "User requested stop");
await stopAutonomousRun(ctx, pi);
return true;
}
if (trimmed === "pause") {
if (!isAutoActive()) {
if (isAutoPaused()) {
ctx.ui.notify(
"Autonomous mode is already paused. /sf autonomous to resume.",
"info",
);
} else {
ctx.ui.notify("Autonomous mode is not running.", "info");
}
return true;
}
await pauseAuto(ctx, pi);
await pauseAutonomousRun(ctx, pi);
return true;
}
if (trimmed === "rate" || trimmed.startsWith("rate ")) {
@@ -223,3 +203,39 @@ export async function handleAutonomousCommand(trimmed, ctx, pi) {
}
return false;
}
async function stopAutonomousRun(ctx, pi) {
if (!isAutoActive() && !isAutoPaused()) {
const result = stopAutoRemote(projectRoot());
if (result.found) {
ctx.ui.notify(
`Sent stop signal to autonomous mode session (PID ${result.pid}). It will shut down gracefully.`,
"info",
);
} else if (result.error) {
ctx.ui.notify(
`Failed to stop remote autonomous run: ${result.error}`,
"error",
);
} else {
ctx.ui.notify("Autonomous mode is not running.", "info");
}
return;
}
await stopAuto(ctx, pi, "User requested stop");
}
async function pauseAutonomousRun(ctx, pi) {
if (!isAutoActive()) {
if (isAutoPaused()) {
ctx.ui.notify(
"Autonomous mode is already paused. /autonomous to resume.",
"info",
);
} else {
ctx.ui.notify("Autonomous mode is not running.", "info");
}
return;
}
await pauseAuto(ctx, pi);
}

View file

@@ -24,102 +24,102 @@ export function showHelp(ctx, args = "") {
const summaryLines = [
"SF — Singularity Forge\n",
"QUICK START",
" /sf start <tpl> Start a workflow template",
" /sf Run one assisted unit (same as /sf next)",
" /sf autonomous Run all queued product units continuously",
" /sf pause Pause autonomous mode",
" /sf stop Stop autonomous mode gracefully",
" /start <tpl> Start a workflow template",
" /next Run one assisted unit",
" /autonomous Run all queued product units continuously",
" /pause Pause autonomous mode",
" /autonomous stop Stop autonomous mode gracefully",
"",
"VISIBILITY",
` /sf status Dashboard (${formattedShortcutPair("dashboard")})`,
` /sf parallel watch Parallel monitor (${formattedShortcutPair("parallel")})`,
` /sf notifications Notification history (${formattedShortcutPair("notifications")})`,
" /sf visualize Interactive 10-tab TUI",
" /sf queue Show queued/dispatched units",
` /status Dashboard (${formattedShortcutPair("dashboard")})`,
` /parallel watch Parallel monitor (${formattedShortcutPair("parallel")})`,
` /notifications Notification history (${formattedShortcutPair("notifications")})`,
" /visualize Interactive 10-tab TUI",
" /queue Show queued/dispatched units",
"",
"COURSE CORRECTION",
" /sf steer <desc> Apply user override to active work",
" /sf capture <text> Quick-capture a thought to CAPTURES.md",
" /sf triage Classify and route pending captures",
" /sf undo Revert last completed unit [--force]",
" /sf rethink Conversational project reorganization",
" /steer <desc> Apply user override to active work",
" /capture <text> Quick-capture a thought to CAPTURES.md",
" /triage Classify and route pending captures",
" /undo Revert last completed unit [--force]",
" /rethink Conversational project reorganization",
"",
"SETUP",
" /sf init Project init wizard",
" /sf setup Global setup status [llm|search|remote|keys|prefs]",
" /sf reload Snapshot and reload agent with fresh extension code",
" /sf model Switch active session model",
" /sf prefs Manage preferences",
" /sf doctor Diagnose and repair .sf/ state",
" /init Project init wizard",
" /setup Global setup status [llm|search|remote|keys|prefs]",
" /reload Snapshot and reload agent with fresh extension code",
" /model Switch active session model",
" /prefs Manage preferences",
" /doctor Diagnose and repair .sf/ state",
"",
"Use /sf help all for the complete command reference.",
"Use /help all for the complete command reference.",
];
const allLines = [
"SF — Singularity Forge\n",
"WORKFLOW",
" /sf start <tpl> Start a workflow template (bugfix, spike, feature, hotfix, etc.)",
" /sf templates List available workflow templates [info <name>]",
" /sf Run one assisted unit (same as /sf next)",
" /sf next Assisted mode: execute next task, then pause [--dry-run] [--verbose]",
" /sf autonomous Run all queued product units continuously [--verbose]",
" /sf stop Stop autonomous mode gracefully",
" /sf pause Pause autonomous mode (preserves state, /sf autonomous to resume)",
" /sf discuss Start guided milestone/slice discussion",
" /sf new-milestone Create milestone from headless context (used by sf headless)",
" /start <tpl> Start a workflow template (bugfix, spike, feature, hotfix, etc.)",
" /templates List available workflow templates [info <name>]",
" /next Run one assisted unit",
" /next Assisted mode: execute next task, then pause [--dry-run] [--verbose]",
" /autonomous Run all queued product units continuously [--verbose]",
" /autonomous stop Stop autonomous mode gracefully",
" /pause Pause autonomous mode (preserves state, /autonomous to resume)",
" /discuss Start guided milestone/slice discussion",
" /new-milestone Create milestone from headless context (used by sf headless)",
"",
"VISIBILITY",
` /sf status Show progress dashboard (${formattedShortcutPair("dashboard")})`,
` /sf parallel watch Open parallel worker monitor (${formattedShortcutPair("parallel")})`,
" /sf visualize Interactive 10-tab TUI (progress, timeline, deps, metrics, health, agent, changes, knowledge, captures, export)",
" /sf queue Show queued/dispatched units and execution order",
" /sf history View execution history [--cost] [--phase] [--model] [N]",
" /sf changelog Show categorized release notes [version]",
` /sf notifications View persistent notification history [clear|tail|filter] (${formattedShortcutPair("notifications")})`,
` /status Show progress dashboard (${formattedShortcutPair("dashboard")})`,
` /parallel watch Open parallel worker monitor (${formattedShortcutPair("parallel")})`,
" /visualize Interactive 10-tab TUI (progress, timeline, deps, metrics, health, agent, changes, knowledge, captures, export)",
" /queue Show queued/dispatched units and execution order",
" /history View execution history [--cost] [--phase] [--model] [N]",
" /changelog Show categorized release notes [version]",
` /notifications View persistent notification history [clear|tail|filter] (${formattedShortcutPair("notifications")})`,
"",
"COURSE CORRECTION",
" /sf steer <desc> Apply user override to active work",
" /sf capture <text> Quick-capture a thought to CAPTURES.md",
" /sf triage Classify and route pending captures",
" /sf skip <unit> Prevent a unit from autonomous mode dispatch",
" /sf undo Revert last completed unit [--force]",
" /sf rethink Conversational project reorganization — reorder, park, discard, add milestones",
" /sf park [id] Park a milestone — skip without deleting [reason]",
" /sf unpark [id] Reactivate a parked milestone",
" /steer <desc> Apply user override to active work",
" /capture <text> Quick-capture a thought to CAPTURES.md",
" /triage Classify and route pending captures",
" /skip <unit> Prevent a unit from autonomous mode dispatch",
" /undo Revert last completed unit [--force]",
" /rethink Conversational project reorganization — reorder, park, discard, add milestones",
" /park [id] Park a milestone — skip without deleting [reason]",
" /unpark [id] Reactivate a parked milestone",
"",
"PROJECT KNOWLEDGE",
" /sf knowledge <type> <text> Add rule, pattern, or lesson to KNOWLEDGE.md",
" /sf codebase [generate|update|stats|indexer] Manage CODEBASE.md and Sift code search",
" /knowledge <type> <text> Add rule, pattern, or lesson to KNOWLEDGE.md",
" /codebase [generate|update|stats|indexer] Manage CODEBASE.md and Sift code search",
"",
"SCHEDULE",
" /sf schedule add --in <dur> <title> Schedule a follow-up item",
" /sf schedule list Show pending scheduled items",
" /sf schedule done <id> Mark an item complete",
" /schedule add --in <dur> <title> Schedule a follow-up item",
" /schedule list Show pending scheduled items",
" /schedule done <id> Mark an item complete",
"",
"SETUP & CONFIGURATION",
" /sf init Project init wizard — detect, configure, bootstrap .sf/",
" /sf setup Global setup status [llm|search|remote|keys|prefs]",
" /sf model Switch active session model [provider/model|model-id]",
" /sf mode Set workflow mode (solo/team) [global|project]",
" /sf prefs Manage preferences [global|project|status|wizard|setup|import-claude]",
" /sf cmux Manage cmux integration [status|on|off|notifications|sidebar|splits|browser]",
" /sf config Set API keys for external tools",
" /sf keys API key manager [list|add|remove|test|rotate|doctor]",
" /sf show-config Show effective configuration (models, routing, toggles)",
" /sf hooks Show post-unit hook configuration",
" /sf extensions Manage extensions [list|enable|disable|info]",
" /sf fast Toggle OpenAI service tier [on|off|flex|status]",
" /sf mcp External MCP server status [status|check <server>]",
" /init Project init wizard — detect, configure, bootstrap .sf/",
" /setup Global setup status [llm|search|remote|keys|prefs]",
" /model Switch active session model [provider/model|model-id]",
" /mode Set workflow mode (solo/team) [global|project]",
" /prefs Manage preferences [global|project|status|wizard|setup|import-claude]",
" /cmux Manage cmux integration [status|on|off|notifications|sidebar|splits|browser]",
" /config Set API keys for external tools",
" /keys API key manager [list|add|remove|test|rotate|doctor]",
" /show-config Show effective configuration (models, routing, toggles)",
" /hooks Show post-unit hook configuration",
" /extensions Manage extensions [list|enable|disable|info]",
" /fast Toggle OpenAI service tier [on|off|flex|status]",
" /mcp External MCP server status [status|check <server>]",
"",
"MAINTENANCE",
" /sf doctor Diagnose and repair .sf/ state [audit|fix|heal] [scope]",
" /sf reload Snapshot & reload agent, resume same session",
" /sf export Export milestone/slice results [--json|--markdown|--html] [--all]",
" /sf cleanup Remove merged branches or snapshots [branches|snapshots]",
" /sf worktree Manage worktrees from the TUI [list|merge|clean|remove]",
" /sf migrate Migrate .planning/ (v1) to .sf/ (v2) format",
" /sf remote Configure remote question delivery [slack|discord|status|disconnect]",
" /sf inspect Show SQLite DB diagnostics (schema, row counts, recent entries)",
" /sf update Update SF to the latest version via npm",
" /doctor Diagnose and repair .sf/ state [audit|fix|heal] [scope]",
" /reload Snapshot & reload agent, resume same session",
" /export Export milestone/slice results [--json|--markdown|--html] [--all]",
" /cleanup Remove merged branches or snapshots [branches|snapshots]",
" /worktree Manage worktrees from the TUI [list|merge|clean|remove]",
" /migrate Migrate .planning/ (v1) to .sf/ (v2) format",
" /remote Configure remote question delivery [slack|discord|status|disconnect]",
" /inspect Show SQLite DB diagnostics (schema, row counts, recent entries)",
" /update Update SF to the latest version via npm",
];
const showAll = args.trim().toLowerCase() === "all";
ctx.ui.notify((showAll ? allLines : summaryLines).join("\n"), "info");
@@ -131,7 +131,7 @@ export async function handleStatus(ctx) {
await ensureDbOpen();
const state = await deriveState(basePath);
if (state.registry.length === 0) {
ctx.ui.notify("No SF milestones found. Run /sf to start.", "info");
ctx.ui.notify("No SF milestones found. Run /next to start.", "info");
return;
}
const { SFDashboardOverlay } = await import("../../dashboard-overlay.js");
@@ -176,7 +176,7 @@ export async function handleVisualize(ctx) {
);
if (result === undefined) {
ctx.ui.notify(
"Visualizer requires an interactive terminal. Use /sf status for a text-based overview.",
"Visualizer requires an interactive terminal. Use /status for a text-based overview.",
"warning",
);
}
@@ -204,7 +204,7 @@ export async function handleSetup(args, ctx) {
return;
}
if (args === "remote") {
ctx.ui.notify("Use /sf remote to configure remote questions.", "info");
ctx.ui.notify("Use /remote to configure remote questions.", "info");
return;
}
if (args === "keys") {
@@ -220,11 +220,11 @@ export async function handleSetup(args, ctx) {
ctx.ui.notify(statusLines.join("\n"), "info");
ctx.ui.notify(
"Available setup commands:\n" +
" /sf setup llm — LLM authentication\n" +
" /sf setup search — Web search provider\n" +
" /sf setup remote — Remote questions (Discord/Slack/Telegram)\n" +
" /sf setup keys — Tool API keys\n" +
" /sf setup prefs — Global preferences wizard",
" /setup llm — LLM authentication\n" +
" /setup search — Web search provider\n" +
" /setup remote — Remote questions (Discord/Slack/Telegram)\n" +
" /setup keys — Tool API keys\n" +
" /setup prefs — Global preferences wizard",
"info",
);
}
@@ -341,7 +341,7 @@ async function handleModel(trimmedArgs, ctx, pi) {
? `${ctx.model.provider}/${ctx.model.id}`
: "(none)";
ctx.ui.notify(
`Current model: ${current}\nUsage: /sf model <provider/model|model-id>`,
`Current model: ${current}\nUsage: /model <provider/model|model-id>`,
"info",
);
return;
@@ -357,7 +357,7 @@ async function handleModel(trimmedArgs, ctx, pi) {
}
if (!targetModel) {
ctx.ui.notify(
`Model "${trimmed}" not found. Use /sf model with an exact provider/model or a unique model ID.`,
`Model "${trimmed}" not found. Use /model with an exact provider/model or a unique model ID.`,
"warning",
);
return;
@@ -370,9 +370,9 @@ async function handleModel(trimmedArgs, ctx, pi) {
);
return;
}
// /sf model is an explicit per-session pin for SF dispatches.
// /model is an explicit per-session pin for SF dispatches.
// This is captured at auto bootstrap so it survives internal session
// switches during /sf autonomous and /sf next runs.
// switches during /autonomous and /next runs.
const sessionId = ctx.sessionManager?.getSessionId?.();
if (sessionId) {
setSessionModelOverride(sessionId, {


@@ -1,4 +1,4 @@
// SF Extension — /sf notifications Command Handler
// SF Extension — /notifications Command Handler
// View, filter, and clear the persistent notification history.
import { SFNotificationOverlay } from "../../notification-overlay.js";
import {
@@ -37,7 +37,7 @@ function formatTimestamp(ts) {
}
}
export async function handleNotificationsCommand(args, ctx, _pi) {
// /sf notifications clear
// /notifications clear
if (args === "clear") {
clearNotifications();
// Suppress persistence so the confirmation toast doesn't re-populate the store
@@ -49,7 +49,7 @@ export async function handleNotificationsCommand(args, ctx, _pi) {
}
return true;
}
// /sf notifications tail [N]
// /notifications tail [N]
if (args === "tail" || args.startsWith("tail ")) {
const countStr = args.replace(/^tail\s*/, "").trim();
const count = countStr ? parseInt(countStr, 10) : 20;
@@ -69,7 +69,7 @@ export async function handleNotificationsCommand(args, ctx, _pi) {
);
const suffix =
all.length > entries.length
? `\n... and ${all.length - entries.length} more (open /sf notifications to browse all)`
? `\n... and ${all.length - entries.length} more (open /notifications to browse all)`
: "";
ctx.ui.notify(
`Last ${entries.length} notification(s):\n${lines.join("\n")}${suffix}`,
@@ -77,7 +77,7 @@ export async function handleNotificationsCommand(args, ctx, _pi) {
);
return true;
}
// /sf notifications filter <severity>
// /notifications filter <severity>
if (args.startsWith("filter ")) {
const severity = args
.replace(/^filter\s+/, "")
@@ -85,7 +85,7 @@ export async function handleNotificationsCommand(args, ctx, _pi) {
.toLowerCase();
if (!["error", "warning", "info", "success"].includes(severity)) {
ctx.ui.notify(
"Usage: /sf notifications filter <error|warning|info|success>",
"Usage: /notifications filter <error|warning|info|success>",
"warning",
);
return true;
@@ -103,7 +103,7 @@ export async function handleNotificationsCommand(args, ctx, _pi) {
);
const suffix =
entries.length > 20
? `\n... and ${entries.length - 20} more (open /sf notifications to browse all)`
? `\n... and ${entries.length - 20} more (open /notifications to browse all)`
: "";
ctx.ui.notify(
`${severity} notifications (${entries.length}):\n${lines.join("\n")}${suffix}`,
@@ -111,7 +111,7 @@ export async function handleNotificationsCommand(args, ctx, _pi) {
);
return true;
}
// /sf notifications (no args) — open overlay in TUI, or print summary
// /notifications (no args) — open overlay in TUI, or print summary
if (args === "" || args === "status") {
// Try overlay first (TUI mode)
if (ctx.hasUI) {
@@ -157,7 +157,7 @@ export async function handleNotificationsCommand(args, ctx, _pi) {
}
// Unknown subcommand
ctx.ui.notify(
"Usage: /sf notifications [clear|tail [N]|filter <severity>]",
"Usage: /notifications [clear|tail [N]|filter <severity>]",
"warning",
);
return true;


@@ -126,7 +126,7 @@ export async function handleOpsCommand(trimmed, ctx, pi) {
}
if (trimmed === "skip") {
ctx.ui.notify(
"Usage: /sf skip <unit-id> Example: /sf skip M001/S01/T03",
"Usage: /skip <unit-id> Example: /skip M001/S01/T03",
"warning",
);
return true;
@@ -235,7 +235,7 @@ export async function handleOpsCommand(trimmed, ctx, pi) {
}
if (trimmed === "run-hook") {
ctx.ui.notify(
`Usage: /sf run-hook <hook-name> <unit-type> <unit-id>
`Usage: /run-hook <hook-name> <unit-type> <unit-id>
Unit types:
execute-task - Task execution (unit-id: M001/S01/T01)
@@ -245,8 +245,8 @@ Unit types:
complete-milestone - Milestone completion (unit-id: M001)
Examples:
/sf run-hook code-review execute-task M001/S01/T01
/sf run-hook lint-check plan-slice M001/S01`,
/run-hook code-review execute-task M001/S01/T01
/run-hook lint-check plan-slice M001/S01`,
"warning",
);
return true;
@@ -257,7 +257,7 @@ Examples:
}
if (trimmed === "steer") {
ctx.ui.notify(
"Usage: /sf steer <description of change>. Example: /sf steer Use Postgres instead of SQLite",
"Usage: /steer <description of change>. Example: /steer Use Postgres instead of SQLite",
"warning",
);
return true;
@@ -268,7 +268,7 @@ Examples:
}
if (trimmed === "knowledge") {
ctx.ui.notify(
"Usage: /sf knowledge <rule|pattern|lesson> <description>. Example: /sf knowledge rule Use real DB for integration tests",
"Usage: /knowledge <rule|pattern|lesson> <description>. Example: /knowledge rule Use real DB for integration tests",
"warning",
);
return true;
@@ -302,7 +302,7 @@ Examples:
const phase = trimmed.replace(/^dispatch\s*/, "").trim();
if (!phase) {
ctx.ui.notify(
"Usage: /sf dispatch <phase> (research|plan|execute|complete|reassess|uat|replan)",
"Usage: /dispatch <phase> (research|plan|execute|complete|reassess|uat|replan)",
"warning",
);
return true;
@@ -386,7 +386,7 @@ Examples:
}
if (trimmed === "scaffold") {
ctx.ui.notify(
"Usage: /sf scaffold sync [--dry-run] [--include-editing] [--only=<glob>]",
"Usage: /scaffold sync [--dry-run] [--include-editing] [--only=<glob>]",
"warning",
);
return true;
@@ -431,7 +431,7 @@ Examples:
ctx,
);
if (handled) return true;
ctx.ui.notify("Usage: /sf plan promote|list|diff|specs ...", "info");
ctx.ui.notify("Usage: /plan promote|list|diff|specs ...", "info");
return true;
}
return false;


@@ -170,7 +170,7 @@ export async function handleParallelCommand(trimmed, _ctx, pi) {
}
emitParallelMessage(
pi,
`Unknown parallel subcommand "${subcommand}". Usage: /sf parallel [start [--stop-on-failure]|status|stop|pause|resume|merge|watch]`,
`Unknown parallel subcommand "${subcommand}". Usage: /parallel [start [--stop-on-failure]|status|stop|pause|resume|merge|watch]`,
);
return true;
}

View file

@@ -35,7 +35,7 @@ import { projectRoot } from "../context.js";
// ─── Custom Workflow Subcommands ─────────────────────────────────────────
const WORKFLOW_USAGE = [
"Usage: /sf workflow <subcommand>",
"Usage: /workflow <subcommand>",
"",
" new — Create a new workflow definition (via skill)",
" run <name> [k=v] — Create a run and start autonomous mode",
@ -97,7 +97,7 @@ export function parseWorkflowRunArgs(args) {
return { defName, overrides };
}
async function handleCustomWorkflow(sub, ctx, pi) {
// Bare `/sf workflow` — show usage
// Bare `/workflow` — show usage
if (!sub) {
ctx.ui.notify(WORKFLOW_USAGE, "info");
return true;
@@ -114,10 +114,7 @@ async function handleCustomWorkflow(sub, ctx, pi) {
if (sub === "run" || sub.startsWith("run ")) {
const args = sub.slice("run".length).trim();
if (!args) {
ctx.ui.notify(
"Usage: /sf workflow run <name> [param=value ...]",
"warning",
);
ctx.ui.notify("Usage: /workflow run <name> [param=value ...]", "warning");
return true;
}
const { defName, overrides } = parseWorkflowRunArgs(args);
@@ -136,7 +133,7 @@ async function handleCustomWorkflow(sub, ctx, pi) {
);
startAutoDetached(ctx, pi, base, false);
} catch (err) {
// Clean up engine state so a failed workflow run doesn't pollute the next /sf autonomous
// Clean up engine state so a failed workflow run doesn't pollute the next /autonomous
setActiveEngineId(null);
setActiveRunDir(null);
const msg = err instanceof Error ? err.message : String(err);
@@ -165,7 +162,7 @@ async function handleCustomWorkflow(sub, ctx, pi) {
if (sub === "validate" || sub.startsWith("validate ")) {
const defName = sub.slice("validate".length).trim();
if (!defName) {
ctx.ui.notify("Usage: /sf workflow validate <name>", "warning");
ctx.ui.notify("Usage: /workflow validate <name>", "warning");
return true;
}
const base = projectRoot();
@@ -197,7 +194,7 @@ async function handleCustomWorkflow(sub, ctx, pi) {
const engineId = getActiveEngineId();
if (engineId === "dev" || engineId === null) {
ctx.ui.notify(
"No custom workflow is running. Use /sf pause for dev workflow.",
"No custom workflow is running. Use /pause for dev workflow.",
"warning",
);
return true;
@@ -215,7 +212,7 @@ async function handleCustomWorkflow(sub, ctx, pi) {
const engineId = getActiveEngineId();
if (engineId === "dev" || engineId === null) {
ctx.ui.notify(
"No custom workflow to resume. Use /sf autonomous for dev workflow.",
"No custom workflow to resume. Use /autonomous for dev workflow.",
"warning",
);
return true;
@@ -232,7 +229,7 @@ async function handleCustomWorkflow(sub, ctx, pi) {
return true;
}
export async function handleWorkflowCommand(trimmed, ctx, pi) {
// ── /sf do — natural language routing (must be early to route to other commands) ──
// ── /do — natural language routing (must be early to route to other commands) ──
if (trimmed === "do" || trimmed.startsWith("do ")) {
const { handleDo } = await import("../../commands-do.js");
await handleDo(trimmed.replace(/^do\s*/, "").trim(), ctx, pi);
@@ -250,7 +247,7 @@ export async function handleWorkflowCommand(trimmed, ctx, pi) {
await handleSchedule(trimmed.replace(/^schedule\s*/, "").trim(), ctx);
return true;
}
// ── Custom workflow commands (`/sf workflow ...`) ──
// ── Custom workflow commands (`/workflow ...`) ──
if (trimmed === "workflow" || trimmed.startsWith("workflow ")) {
const sub = trimmed.slice("workflow".length).trim();
return handleCustomWorkflow(sub, ctx, pi);
@@ -266,8 +263,8 @@ export async function handleWorkflowCommand(trimmed, ctx, pi) {
if (trimmed === "quick" || trimmed.startsWith("quick ")) {
if (isAutoActive()) {
ctx.ui.notify(
"/sf quick cannot run while autonomous mode is active.\n" +
"Stop autonomous mode first with /sf stop, then run /sf quick.",
"/quick cannot run while autonomous mode is active.\n" +
"Stop autonomous mode first with /stop, then run /quick.",
"error",
);
return true;
@@ -318,7 +315,7 @@ export async function handleWorkflowCommand(trimmed, ctx, pi) {
}
if (isParked(basePath, targetId)) {
ctx.ui.notify(
`${targetId} is already parked. Use /sf unpark ${targetId} to reactivate.`,
`${targetId} is already parked. Use /unpark ${targetId} to reactivate.`,
"info",
);
return true;
@@ -327,11 +324,11 @@ export async function handleWorkflowCommand(trimmed, ctx, pi) {
.replace(targetId, "")
.trim()
.replace(/^["']|["']$/g, "");
const reason = reasonParts || "Parked via /sf park";
const reason = reasonParts || "Parked via /park";
const success = parkMilestone(basePath, targetId, reason);
ctx.ui.notify(
success
? `Parked ${targetId}. Run /sf unpark ${targetId} to reactivate.`
? `Parked ${targetId}. Run /unpark ${targetId} to reactivate.`
: `Could not park ${targetId} — milestone not found.`,
success ? "info" : "warning",
);
@@ -354,7 +351,7 @@ export async function handleWorkflowCommand(trimmed, ctx, pi) {
targetId = parkedEntries[0].id;
} else {
ctx.ui.notify(
`Parked milestones: ${parkedEntries.map((entry) => entry.id).join(", ")}. Specify which to unpark: /sf unpark <id>`,
`Parked milestones: ${parkedEntries.map((entry) => entry.id).join(", ")}. Specify which to unpark: /unpark <id>`,
"info",
);
return true;


@@ -1,24 +1,44 @@
import { importExtensionModule } from "@singularity-forge/pi-coding-agent";
import { getSfArgumentCompletions, SF_COMMAND_DESCRIPTION } from "./catalog.js";
export function registerSFCommand(pi) {
pi.registerCommand("sf", {
description: SF_COMMAND_DESCRIPTION,
getArgumentCompletions: getSfArgumentCompletions,
handler: async (args, ctx) => {
const { handleSFCommand } = await importExtensionModule(
import.meta.url,
"./dispatcher.js",
);
const { setStderrLoggingEnabled } = await importExtensionModule(
import.meta.url,
"../workflow-logger.js",
);
const previousStderrSetting = setStderrLoggingEnabled(false);
try {
await handleSFCommand(args, ctx, pi);
} finally {
setStderrLoggingEnabled(previousStderrSetting);
}
},
});
import {
DIRECT_SF_COMMANDS,
getSfTopLevelCommandCompletions,
SF_COMMAND_DESCRIPTION,
} from "./catalog.js";
async function dispatchDirectSFCommand(command, args, ctx, pi) {
const { handleSFCommand } = await importExtensionModule(
import.meta.url,
"./dispatcher.js",
);
const { setStderrLoggingEnabled } = await importExtensionModule(
import.meta.url,
"../workflow-logger.js",
);
const previousStderrSetting = setStderrLoggingEnabled(false);
try {
const suffix =
typeof args === "string" && args.trim().length > 0
? ` ${args.trim()}`
: "";
await handleSFCommand(`${command}${suffix}`, ctx, pi);
} finally {
setStderrLoggingEnabled(previousStderrSetting);
}
}
export function registerSFCommands(pi) {
for (const command of DIRECT_SF_COMMANDS) {
pi.registerCommand(command.cmd, {
description: command.desc || SF_COMMAND_DESCRIPTION,
getArgumentCompletions: (prefix) =>
getSfTopLevelCommandCompletions(command.cmd, prefix),
handler: async (args, ctx) => {
await dispatchDirectSFCommand(command.cmd, args, ctx, pi);
},
});
}
}
export function registerSFCommand(pi) {
registerSFCommands(pi);
}
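The rewritten `register.js` above fans each catalog entry out to its own top-level command, then rebuilds the legacy dispatcher input by re-attaching the trimmed arguments. Below is a minimal sketch of that mapping, assuming a hypothetical two-entry catalog; only the `{ cmd, desc }` entry shape and the suffix handling are taken from the code above.

```javascript
// Hypothetical catalog entries — the real DIRECT_SF_COMMANDS list lives in catalog.js.
const DIRECT_SF_COMMANDS = [
  { cmd: "status", desc: "Show the SF milestone dashboard" },
  { cmd: "doctor", desc: "Diagnose and repair .sf/ state" },
];

// Mirrors the suffix logic in dispatchDirectSFCommand: non-empty args are
// trimmed and appended after one space; empty or non-string args add nothing.
function toDispatcherInput(command, args) {
  const suffix =
    typeof args === "string" && args.trim().length > 0 ? ` ${args.trim()}` : "";
  return `${command}${suffix}`;
}

console.log(toDispatcherInput("doctor", "  --fix ")); // doctor --fix
console.log(toDispatcherInput("status", "   "));      // status
```

Registered this way, `/doctor --fix` reaches the shared dispatcher as the string `doctor --fix`, the same input the old `/sf doctor --fix` form produced.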


@@ -4,7 +4,7 @@
* Read-only TUI overlay showing the effective SF configuration:
* token profile, model assignments, dynamic routing, git settings,
* budget, workflow toggles, and preference file sources.
* Opened via `/sf show-config` or `/sf config`.
* Opened via `/show-config` or `/config`.
*/
import { Key, matchesKey, truncateToWidth } from "@singularity-forge/pi-tui";
import {
@@ -359,7 +359,7 @@ export class SFConfigOverlay {
allLines.push(
t.fg(
"muted",
" esc/q close \u2502 \u2191\u2193/jk scroll \u2502 /sf prefs to edit",
" esc/q close \u2502 \u2191\u2193/jk scroll \u2502 /prefs to edit",
),
);
// Apply scroll


@@ -90,21 +90,21 @@ export function formatCrashInfo(lock) {
];
// Add recovery guidance based on what was happening when it crashed
if (lock.unitType === "starting" && lock.unitId === "bootstrap") {
lines.push(`No work was lost. Run /sf autonomous to restart.`);
lines.push(`No work was lost. Run /autonomous to restart.`);
} else if (
lock.unitType.includes("research") ||
lock.unitType.includes("plan")
) {
lines.push(
`The ${lock.unitType} unit may be incomplete. Run /sf autonomous to re-run it.`,
`The ${lock.unitType} unit may be incomplete. Run /autonomous to re-run it.`,
);
} else if (lock.unitType.includes("execute")) {
lines.push(
`Task execution was interrupted. Run /sf autonomous to resume — completed work is preserved.`,
`Task execution was interrupted. Run /autonomous to resume — completed work is preserved.`,
);
} else if (lock.unitType.includes("complete")) {
lines.push(
`Slice/milestone completion was interrupted. Run /sf autonomous to finish.`,
`Slice/milestone completion was interrupted. Run /autonomous to finish.`,
);
}
return lines.join("\n");


@@ -4,7 +4,7 @@
* Full-screen overlay showing autonomous mode progress: milestone/slice/task
* breakdown, current unit, completed units, timing, and activity log.
* Toggled with Ctrl+Alt+G (G on macOS), Ctrl+Shift+G fallback,
* or opened from /sf status.
* or opened from /status.
*/
import {
Key,
@@ -417,7 +417,7 @@ export class SFDashboardOverlay {
);
lines.push(blank());
} else if (this.dashData.paused) {
lines.push(row(th.fg("dim", "/sf autonomous to resume")));
lines.push(row(th.fg("dim", "/autonomous to resume")));
lines.push(blank());
} else if (isRemote) {
const rs = this.dashData.remoteSession;
@@ -428,9 +428,7 @@ export class SFDashboardOverlay {
lines.push(row(th.fg("text", `Remote session: ${unitDisplay}`)));
lines.push(blank());
} else {
lines.push(
row(th.fg("dim", "No unit running · /sf autonomous to start")),
);
lines.push(row(th.fg("dim", "No unit running · /autonomous to start")));
lines.push(blank());
}
// Parallel workers section — shows active subagent sessions


@@ -11,14 +11,14 @@ SF can read Claude Code marketplace catalogs, inspect the plugins they reference
The interactive entry point is:
```text
/sf prefs import-claude
/prefs import-claude
```
You can also choose scope explicitly:
```text
/sf prefs import-claude global
/sf prefs import-claude project
/prefs import-claude global
/prefs import-claude project
```
---
@@ -194,7 +194,7 @@ Real host validation included:
- clean startup of the installed `sf` binary after fixing stale bad settings
- successful invocation of an imported skill (`/stinkysnake`)
- successful execution of `/sf prefs import-claude global`
- successful execution of `/prefs import-claude global`
- verification that imported marketplace agent directories were **not** reintroduced into `settings.packages`
---

View file

@@ -90,7 +90,7 @@ Setting `prefer_skills: []` does **not** disable skill discovery — it just mea
| `git.isolation` | `"worktree"` | `"worktree"` |
| `unique_milestone_ids` | `false` | `true` |
Quick setup: `/sf mode` (global) or `/sf mode project` (project-level).
Quick setup: `/mode` (global) or `/mode project` (project-level).
- `always_use_skills`: skills SF should use whenever they are relevant.
@@ -126,7 +126,7 @@ Setting `prefer_skills: []` does **not** disable skill discovery — it just mea
- `idle_timeout_minutes`: minutes of inactivity before the supervisor intervenes (default: 10).
- `hard_timeout_minutes`: minutes before the supervisor forces termination (default: 30).
- `solver_max_iterations`: maximum autonomous solver iterations for one unit before pausing (default: `30000`, min: `1`, max: `100000`).
- `solver_eval_on_autonomous_exit`: automatically run and record the built-in solver eval when `/sf autonomous` exits (default: `true`; set `false` only to disable lifecycle eval evidence).
- `solver_eval_on_autonomous_exit`: automatically run and record the built-in solver eval when `/autonomous` exits (default: `true`; set `false` only to disable lifecycle eval evidence).
- `completion_nudge_after`: tool calls in a complete-slice unit before nudging the agent to call `sf_slice_complete` (default: 10; set `0` to disable).
- `runaway_guard_enabled`: enable active-loop diagnosis for long-running units (default: `true`).
- `runaway_tool_call_warning`: unit tool calls before a runaway warning (default: `60`; set `0` to disable this signal).
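The supervisor and runaway-guard settings above can be pictured as one defaults object. This sketch is illustrative only: the field names and defaults come from the list, the enclosing preference-file format is not shown in this diff, and `clampSolverIterations` is a hypothetical helper demonstrating the documented `[1, 100000]` range for `solver_max_iterations`.

```javascript
// Documented defaults from the preference list above (illustrative shape only).
const supervisorDefaults = {
  idle_timeout_minutes: 10,      // supervisor intervenes after this much inactivity
  hard_timeout_minutes: 30,      // supervisor forces termination after this long
  solver_max_iterations: 30000,  // documented bounds: min 1, max 100000
  completion_nudge_after: 10,    // 0 disables the nudge
  runaway_guard_enabled: true,
  runaway_tool_call_warning: 60, // 0 disables this signal
};

// Hypothetical loader helper enforcing the documented [1, 100000] bounds.
function clampSolverIterations(value) {
  return Math.min(100000, Math.max(1, Math.trunc(value)));
}
```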
@@ -177,8 +177,8 @@ Setting `prefer_skills: []` does **not** disable skill discovery — it just mea
- `exclude_patterns`: string[] — extra file or directory patterns to omit from CODEBASE.md.
- `max_files`: number — maximum files to include in CODEBASE.md. Default: `500`.
- `collapse_threshold`: number — files-per-directory threshold before collapsing a directory summary. Default: `20`.
- `indexer_backend`: `"sift"` or `"none"` — codebase-indexer backend used for prompt guidance and `/sf codebase indexer status`. Default: `"sift"`.
- `/sf codebase indexer status` reports Sift status. Install `rupurt/sift` on `PATH` or set `SIFT_PATH`.
- `indexer_backend`: `"sift"` or `"none"` — codebase-indexer backend used for prompt guidance and `/codebase indexer status`. Default: `"sift"`.
- `/codebase indexer status` reports Sift status. Install `rupurt/sift` on `PATH` or set `SIFT_PATH`.
- `remote_questions`: route interactive questions to Slack/Discord for machine-surface autonomous runs. Keys:
- `channel`: `"slack"` or `"discord"` — channel type.
@@ -834,7 +834,7 @@ This team-mode configuration:
### Doctor Checks
Run `/sf doctor` to validate your config:
Run `/doctor` to validate your config:
- **Error:** `context_compact_at` > `context_hard_limit` (illogical; compact must happen before hitting hard limit).
- **Error:** Invalid `worktree_mode` value.
@@ -844,7 +844,7 @@ Run `/sf doctor` to validate your config:
- **Warning:** Unrecognized phase name in `unit_timeout_by_phase` or `max_agents_by_phase`.
- **Warning:** Phase timeout < 60 seconds or agent count out of range [1, 16].
Run `/sf doctor --fix` to auto-correct fixable errors (e.g., `context_compact_at` > `context_hard_limit`).
Run `/doctor --fix` to auto-correct fixable errors (e.g., `context_compact_at` > `context_hard_limit`).
---
@@ -964,10 +964,10 @@ OPENAI_API_KEY=vault://secret/openai/prod#api_key
### Troubleshooting
Run `/sf doctor` to check Vault setup:
Run `/doctor` to check Vault setup:
```bash
/sf doctor
/doctor
```
**Common Issues:**
@@ -1080,7 +1080,7 @@ I recommend reassessing the problem statement or constraints.
### Doctor Check
Run `/sf doctor` to validate turn_status marker coverage:
Run `/doctor` to validate turn_status marker coverage:
- **Warning:** Executive prompts missing turn_status marker templates. Agents won't be able to signal `blocked` or `giving_up` state.


@@ -634,7 +634,7 @@ export function runEnvironmentChecks(basePath) {
}
/**
* Run environment checks with git remote check included.
* Use this for explicit /sf doctor invocations, not pre-dispatch gates.
* Use this for explicit /doctor invocations, not pre-dispatch gates.
*/
export function runFullEnvironmentChecks(basePath) {
const results = runEnvironmentChecks(basePath);
@@ -644,7 +644,7 @@ export function runFullEnvironmentChecks(basePath) {
}
/**
* Run slow opt-in checks (build and/or test).
* These are never run on the pre-dispatch gate only on explicit /sf doctor --build/--test.
* These are never run on the pre-dispatch gate, only on explicit /doctor --build/--test.
*/
export function runSlowEnvironmentChecks(basePath, options) {
const results = [];


@@ -99,7 +99,7 @@ export function formatDoctorIssuesForPrompt(issues) {
}
/**
* Serialize a doctor report to JSON suitable for CI/tooling integration.
* Usage: /sf doctor --json
* Usage: /doctor --json
*/
export function formatDoctorReportJson(report) {
return JSON.stringify(


@@ -57,7 +57,7 @@ export async function checkGlobalHealth(issues, fixesApplied, shouldFix) {
code: "orphaned_project_state",
scope: "project",
unitId: "global",
message: `${orphaned.length} orphaned SF project state director${orphaned.length === 1 ? "y" : "ies"} in ${projectsDir} whose git root no longer exists: ${labels}${overflow}${unknownNote}. Run /sf cleanup projects to audit or /sf cleanup projects --fix to reclaim disk space.`,
message: `${orphaned.length} orphaned SF project state director${orphaned.length === 1 ? "y" : "ies"} in ${projectsDir} whose git root no longer exists: ${labels}${overflow}${unknownNote}. Run /cleanup projects to audit or /cleanup projects --fix to reclaim disk space.`,
file: projectsDir,
fixable: true,
});


@@ -247,7 +247,7 @@ export async function preDispatchHealthGate(basePath) {
);
} catch {
issues.push(
`Corrupt git state: ${blockers.join(", ")}. Run /sf doctor fix.`,
`Corrupt git state: ${blockers.join(", ")}. Run /doctor fix.`,
);
}
}
@@ -298,7 +298,7 @@ export async function preDispatchHealthGate(basePath) {
resolution.status === "missing"
) {
issues.push(
`${resolution.reason} Restore the branch or update the integration branch before dispatching. Run /sf doctor for details.`,
`${resolution.reason} Restore the branch or update the integration branch before dispatching. Run /doctor for details.`,
);
}
}
@@ -367,7 +367,7 @@ export async function preDispatchHealthGate(basePath) {
if (issues.length > 0) {
return {
proceed: false,
reason: `Pre-dispatch health check failed:\n${issues.map((i) => ` - ${i}`).join("\n")}\nRun /sf doctor fix to resolve.`,
reason: `Pre-dispatch health check failed:\n${issues.map((i) => ` - ${i}`).join("\n")}\nRun /doctor fix to resolve.`,
issues,
fixesApplied,
};


@@ -258,8 +258,8 @@ function checkLlmProviders() {
providerId === "anthropic-vertex"
? "Set ANTHROPIC_VERTEX_PROJECT_ID and authenticate with Google ADC"
: info?.hasOAuth
? `Run /sf keys to authenticate`
: `Set ${envVar} or run /sf keys`,
? `Run /keys to authenticate`
: `Set ${envVar} or run /keys`,
required: true,
});
} else if (lookup.backedOff) {
@@ -315,8 +315,8 @@ function checkRemoteQuestionsProvider() {
? `${label} — token not found (remote questions auto-resolve on timeout)`
: `${label} — channel configured but token not found`,
detail: info?.envVar
? `Set ${info.envVar} or run /sf keys`
: `Run /sf keys to configure`,
? `Set ${info.envVar} or run /keys`
: `Run /keys to configure`,
required: !autoResolvable,
};
}

View file

@@ -639,7 +639,7 @@ export async function checkRuntimeHealth(
code: "metrics_ledger_bloat",
scope: "project",
unitId: "project",
message: `metrics.json has ${parsed.units.length} unit entries (${fileSizeMB}MB) — threshold is ${BLOAT_UNITS_THRESHOLD}. Run /sf doctor --fix to prune to the newest 1500 entries.`,
message: `metrics.json has ${parsed.units.length} unit entries (${fileSizeMB}MB) — threshold is ${BLOAT_UNITS_THRESHOLD}. Run /doctor --fix to prune to the newest 1500 entries.`,
file: ".sf/metrics.json",
fixable: true,
});
@@ -709,18 +709,18 @@ export async function checkRuntimeHealth(
// Non-fatal — large file scan failed
}
// ── Snapshot ref bloat ────────────────────────────────────────────────
// refs/sf/snapshots/ accumulate over time. Prune to newest 5 per label
// refs/next/snapshots/ accumulate over time. Prune to newest 5 per label
// when total count exceeds threshold.
try {
if (nativeIsRepo(basePath)) {
const refs = nativeForEachRef(basePath, "refs/sf/snapshots/");
const refs = nativeForEachRef(basePath, "refs/next/snapshots/");
if (refs.length > 50) {
issues.push({
severity: "warning",
code: "snapshot_ref_bloat",
scope: "project",
unitId: "project",
message: `${refs.length} snapshot refs found under refs/sf/snapshots/ — pruning to newest 5 per label will reclaim git storage`,
message: `${refs.length} snapshot refs found under refs/next/snapshots/ — pruning to newest 5 per label will reclaim git storage`,
fixable: true,
});
if (shouldFix("snapshot_ref_bloat")) {
@@ -804,7 +804,7 @@ function formatBucketCountParts(counts) {
*
* Returns `null` when there is nothing actionable (everything is current or
* intentionally customised). Otherwise returns a single warning summarising the
* bucket counts. The phrase "Run /sf scaffold sync" is forward-looking
* bucket counts. The phrase "Run /scaffold sync" is forward-looking:
* Phase E adds the command. Phase C runs the silent path automatically on
* every SF startup, so the user does not need to act on most of these.
*/
@@ -823,8 +823,8 @@ export function checkScaffoldFreshness(basePath) {
const summary = parts.join(", ");
const guidance =
pendingCount > 0
? `Run /sf scaffold sync to refresh ${pendingCount} pending docs`
: "Run /sf scaffold sync to inspect drift";
? `Run /scaffold sync to refresh ${pendingCount} pending docs`
: "Run /scaffold sync to inspect drift";
return {
severity: "warning",
code: "scaffold_drift",

Some files were not shown because too many files have changed in this diff.