refactor: update log prefixes and string values from gsd- to sf- namespace

Updates channel prefixes, log messages, comments, and configuration values across daemon, mcp-server, and related packages to complete the rebrand from gsd to sf-run naming.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit is contained in:

parent 2e6b2b8fd1
commit 6b0ac484ba

671 changed files with 8295 additions and 5356 deletions

bin/sf-from-source (new executable file, 24 lines)
@@ -0,0 +1,24 @@
#!/usr/bin/env bash
#
# sf-from-source — run SF directly from this source checkout via bun.
#
# Purpose: every local commit in this repo (e.g. the #4251 fix) is live
# immediately without reinstalling the bun-packaged sf-run. Subagents can
# spawn sf by pointing SF_BIN_PATH at this script instead of dist/loader.js.
#
# Contract:
# - Executable shim: spawn() / exec() can launch it directly.
# - Exports SF_BIN_PATH before handing off to loader.ts so loader.ts's
#   `SF_BIN_PATH ||= process.argv[1]` branch preserves the shim path
#   instead of clobbering it with the .ts loader path (which is not
#   directly executable by child_process.spawn).
#
# Requirements: bun on PATH, node_modules populated (`bun install` once).
set -euo pipefail

SCRIPT_DIR=$(cd -- "$(dirname -- "$(readlink -f "${BASH_SOURCE[0]}")")" &>/dev/null && pwd)
SF_SOURCE_ROOT=$(cd -- "$SCRIPT_DIR/.." &>/dev/null && pwd)

export SF_BIN_PATH="$SCRIPT_DIR/sf-from-source"

exec bun run "$SF_SOURCE_ROOT/src/loader.ts" "$@"
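The SF_BIN_PATH contract above is what lets a subagent launch sf without caring whether it is the bun-packaged binary or this source shim. A minimal sketch of the consuming side; the bare `sf` fallback is an illustrative assumption, not part of this repo:

```ts
import { spawn } from 'node:child_process';

// SF_BIN_PATH is the contract exported by bin/sf-from-source; falling back
// to a bare `sf` on PATH is an assumption for illustration.
const sfBin = process.env['SF_BIN_PATH'] ?? 'sf';

// The shim is directly executable, so it can be handed straight to spawn()
// with no `bun` or `node` interpreter argument.
const child = spawn(sfBin, ['--version'], { stdio: 'inherit' });
child.on('exit', (code) => {
  process.exitCode = code ?? 1;
});
```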
docs/dev/ADR-008-sf-tools-over-mcp-for-provider-parity.md (new file, 240 lines)

@@ -0,0 +1,240 @@
# ADR-008: Expose SF Workflow Tools Over MCP for Provider Parity

**Status:** Proposed
**Date:** 2026-04-09
**Deciders:** Jeremy McSpadden
**Related:** ADR-004 (capability-aware model routing), ADR-007 (model catalog split and provider API encapsulation), `src/resources/extensions/sf/bootstrap/db-tools.ts`, `src/resources/extensions/claude-code-cli/stream-adapter.ts`, `packages/mcp-server/src/server.ts`

## Context

SF currently has two different tool surfaces:

1. **In-process extension tools** registered directly into the runtime via `pi.registerTool(...)`.
2. **An external MCP server** that exposes session orchestration and read-only project inspection.

This split is now creating a real provider compatibility problem.

### What exists today

The core SF workflow tools are internal extension tools. Examples include:

- `gsd_summary_save`
- `gsd_plan_milestone`
- `gsd_plan_slice`
- `gsd_plan_task`
- `gsd_task_complete` / `gsd_complete_task`
- `gsd_slice_complete`
- `gsd_complete_milestone`
- `gsd_validate_milestone`
- `gsd_replan_slice`
- `gsd_reassess_roadmap`

These are registered in `src/resources/extensions/sf/bootstrap/db-tools.ts` and related bootstrap files. SF prompts assume these tools are available during discuss, plan, and execute flows.

Separately, `packages/mcp-server/src/server.ts` exposes a different tool surface:

- session control: `gsd_execute`, `gsd_status`, `gsd_result`, `gsd_cancel`, `gsd_query`, `gsd_resolve_blocker`
- read-only inspection: `gsd_progress`, `gsd_roadmap`, `gsd_history`, `gsd_doctor`, `gsd_captures`, `gsd_knowledge`

That MCP server is useful, but it is **not** a transport for the internal workflow/mutation tools.

### The current failure mode

The Claude Code CLI provider uses the Anthropic Agent SDK through `src/resources/extensions/claude-code-cli/stream-adapter.ts`. That adapter starts a Claude SDK session, but it does not forward the internal SF tool registry into the SDK session, nor does it attach an SF MCP server for those tools.

As a result:

- prompts tell the model to call tools like `gsd_complete_task`
- the tools exist in SF
- but Claude Code sessions do not actually receive those tools

This produces a contract mismatch: the model is required to use tools that are unavailable in that provider path.

### Why this matters

This is not a one-off Claude Code bug. It reveals a deeper architectural issue:

- SF’s core workflow contract is transport-specific
- prompt authors assume “internal extension tool availability”
- provider integrations do not all share the same execution surface

If SF wants provider parity, its workflow tools need a transport-neutral exposure model.

## Decision

**Expose the SF workflow tool contract over MCP as a first-class transport, and make MCP the compatibility layer for providers that cannot directly access the in-process SF tool registry.**

This means:

1. SF will keep its existing in-process tool registration for native runtime use.
2. SF will add an MCP execution surface for the same workflow tools.
3. Both surfaces must call the same underlying business logic.
4. Provider integrations such as Claude Code will use the MCP surface when they cannot access native in-process tools directly.

The decision is explicitly **not** to replace the native tool system with MCP everywhere. MCP is the parity and portability layer, not the only runtime path.

## Decision Details

### 1. One handler layer, multiple transports

SF tool behavior must not be implemented twice.

The transport-neutral business logic for workflow tools should be shared by:

- native extension tool registration (`pi.registerTool(...)`)
- MCP server tool registration

The MCP server should wrap the same handlers used by `db-tools.ts`, `query-tools.ts`, and related modules. This avoids logic drift and keeps validation, DB writes, file rendering, and recovery behavior consistent.
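A sketch of that shape, assuming hypothetical names (`completeTaskCore`, `toMcpContent`) and schematic registration signatures; the real handler signatures live in `db-tools.ts` and will differ:

```ts
// Schematic stand-ins for the two registries; their real APIs differ.
declare const pi: {
  registerTool(name: string, handler: (input: any) => Promise<unknown>): void;
};
declare const mcpServer: {
  tool(name: string, handler: (input: any) => Promise<unknown>): void;
};

// Transport-neutral core: validation, DB writes, file rendering, recovery.
// One implementation, with no knowledge of how it was invoked.
async function completeTaskCore(input: { taskId: string; summary: string }) {
  if (!input.taskId) throw new Error('taskId is required');
  // ...DB update and artifact rendering would happen here...
  return { ok: true as const, taskId: input.taskId };
}

// Hypothetical adapter from a core result to MCP tool-call content.
function toMcpContent(result: unknown) {
  return { content: [{ type: 'text' as const, text: JSON.stringify(result) }] };
}

// Native surface: in-process registration.
pi.registerTool('gsd_task_complete', (input) => completeTaskCore(input));

// MCP surface: wraps the exact same core, so the transports cannot drift.
mcpServer.tool('gsd_task_complete', async (input) =>
  toMcpContent(await completeTaskCore(input)),
);
```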
### 2. Add a workflow-tool MCP surface

SF will expose the workflow tools required for the discuss, plan, execute, and complete flows over MCP.

Initial minimum set:

- `gsd_summary_save`
- `gsd_decision_save`
- `gsd_plan_milestone`
- `gsd_plan_slice`
- `gsd_plan_task`
- `gsd_task_complete`
- `gsd_slice_complete`
- `gsd_complete_milestone`
- `gsd_validate_milestone`
- `gsd_replan_slice`
- `gsd_reassess_roadmap`
- `gsd_save_gate_result`
- selected read/query tools such as `gsd_milestone_status`

Aliases should be treated conservatively. MCP should prefer canonical names unless compatibility requires exposing aliases.

### 3. Preserve safety semantics

The current SF safety model includes write gates, discussion gates, queue-mode restrictions, and state integrity guarantees.

Those guarantees must continue to apply when tools are invoked over MCP. In particular (a sketch of one enforcement approach follows the list):

- MCP must not create a path that bypasses write gating
- MCP mutations must preserve the same DB/file/state invariants as native tools
- provider-specific fallback behavior must not allow manual summary writing in place of canonical completion tools
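One way to satisfy all three bullets is to enforce the gates inside the shared handler layer rather than at either transport, so there is no MCP-only path to harden separately. A sketch with stubbed gate checks (SF's real gate names and state checks will differ):

```ts
type InvocationCtx = { transport: 'native' | 'mcp'; sessionId: string };

// Stub gate checks: illustrative stand-ins for SF's write/discussion gates.
async function assertWriteGateOpen(ctx: InvocationCtx): Promise<void> {
  if (!ctx.sessionId) throw new Error('write gate: no active session');
}
async function assertNotInDiscussionGate(_ctx: InvocationCtx): Promise<void> {
  // A real implementation would consult persisted session state here.
}

// Every mutation runs through this wrapper regardless of transport, so an
// MCP call cannot bypass write gating or break DB/file/state invariants.
async function guardedMutation<T>(
  ctx: InvocationCtx,
  mutate: () => Promise<T>,
): Promise<T> {
  await assertWriteGateOpen(ctx);
  await assertNotInDiscussionGate(ctx);
  return mutate();
}
```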
### 4. Make provider capability checks explicit

Before dispatching a workflow that requires SF workflow tools, SF should check whether the selected provider/session can access the required tool surface.

If a provider cannot access either:

- native in-process SF tools, or
- the SF MCP workflow tool surface

then SF must fail early with a clear compatibility error rather than allowing execution to continue in a degraded, state-breaking mode.
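A sketch of what that early check could look like; `ProviderCapabilities` and the surface names are illustrative, not types from this repo:

```ts
type ToolSurface = 'native' | 'mcp';

interface ProviderCapabilities {
  // Which SF workflow-tool surfaces this provider/session can actually reach.
  toolSurfaces: ReadonlySet<ToolSurface>;
}

function assertWorkflowToolsReachable(
  provider: string,
  caps: ProviderCapabilities,
): void {
  if (caps.toolSurfaces.has('native') || caps.toolSurfaces.has('mcp')) return;
  // Fail early and precisely instead of continuing in a state-breaking mode.
  throw new Error(
    `Provider "${provider}" can reach neither native SF tools nor the SF MCP ` +
      `workflow surface; refusing to dispatch a tool-dependent workflow.`,
  );
}
```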
### 5. Keep the existing session/read MCP server

The existing MCP server in `packages/mcp-server` remains valid. It serves a different purpose:

- remote session orchestration
- status/result polling
- filesystem-backed project inspection

The new workflow-tool MCP surface is complementary, not a replacement.

## Alternatives Considered

### Alternative A: Reroute away from Claude Code whenever tool-backed execution is needed

This would fix the immediate failure for multi-provider users, but it does not solve provider parity. It also fails completely for users who only have Claude Code configured.

**Rejected** because it treats the symptom, not the architectural gap.

### Alternative B: Hard-fail Claude Code and require another provider

This is a valid short-term guardrail and may still be used before MCP support is complete.

**Rejected as the long-term architecture** because it permanently excludes a supported provider from first-class SF execution.

### Alternative C: Inject the internal SF tool registry directly into the Claude Agent SDK without MCP

This would tightly couple SF’s internal extension runtime to a provider-specific integration path. It would not generalize well to other providers or external tool clients.

**Rejected** because it creates a provider-specific bridge instead of a transport-neutral contract.

### Alternative D: Replace native SF tools entirely with MCP

This would simplify the conceptual model, but it would force all runtimes through an external protocol boundary even when the native in-process path is faster and already works well.

**Rejected** because MCP is needed for portability, not because the native tool system is flawed.

## Consequences

### Positive

1. **Provider parity improves.** Providers that can consume MCP tools can participate in full SF workflow execution.
2. **The workflow contract becomes transport-neutral.** Prompts can rely on capabilities rather than a specific runtime implementation detail.
3. **One compatibility story for external clients.** Claude Code, Cursor, and other MCP-capable clients can use the same workflow tool surface.
4. **Better long-term architecture.** Internal tools and external transports converge on shared handlers instead of diverging implementations.

### Negative

1. **Larger surface area to secure and test.** Mutation tools over MCP are higher risk than read-only inspection tools.
2. **Migration complexity.** Tool registration, gating, and handler extraction must be refactored carefully.
3. **Two transport paths must remain aligned.** Native and MCP invocation semantics must stay behaviorally identical.

### Neutral / Tradeoff

The system will now support:

- native in-process tool execution when available
- MCP-backed tool execution when native access is unavailable

That is more complex than a single-path system, but it is the cost of provider portability without sacrificing native runtime quality.

## Migration Plan

### Phase 1: Extract shared handlers

Refactor workflow tools so MCP and native registration can call the same transport-neutral functions.

Priority targets:

- `gsd_summary_save`
- `gsd_task_complete`
- `gsd_plan_milestone`
- `gsd_plan_slice`
- `gsd_plan_task`

### Phase 2: Stand up the workflow-tool MCP server

Add a new MCP surface for workflow tool execution. This may extend the existing MCP package or live as a sibling package, but it must be clearly separated from the current session/read API.

### Phase 3: Port safety enforcement

Move or centralize write gates and related policy checks so MCP mutations cannot bypass the existing safety model.

### Phase 4: Attach MCP workflow tools to Claude Code sessions

Update the Claude Code provider integration to pass an SF-managed `mcpServers` configuration into the Claude Agent SDK session when required.
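Schematically, that configuration could look like the sketch below. The `mcpServers` entry mirrors the Claude-style MCP server config shape (command, args, env); the exact Agent SDK option names and the server entry point are assumptions to verify against the SDK documentation:

```ts
// Hypothetical: how SF might hand its workflow-tool MCP server to a
// Claude Code session. Paths, env vars, and option names are illustrative.
const sfWorkflowMcp = {
  command: 'node',
  args: ['/path/to/sf-workflow-mcp/dist/cli.js'],
  env: { SF_PROJECT_DIR: '/home/user/my-project' },
};

const sessionOptions = {
  mcpServers: { 'sf-workflow': sfWorkflowMcp },
  // ...other Agent SDK session options...
};
```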
### Phase 5: Add provider capability gating

Before tool-dependent flows begin, verify that the active provider can access the required SF workflow tools via either native registration or MCP.

### Phase 6: Update prompts and docs

Prompt contracts should remain strict about using canonical SF completion/planning tools, but documentation and runtime messaging must no longer assume that only native in-process tool registration satisfies that contract.

## Validation

Success is defined by all of the following:

1. A Claude Code-backed execution session can complete a task using canonical SF workflow tools without manual summary writing.
2. Native provider behavior remains unchanged.
3. MCP-invoked workflow tools produce the same DB updates, rendered artifacts, and state transitions as native tool calls.
4. Write-gate and discussion-gate protections still hold under MCP invocation.
5. When required capabilities are unavailable, SF fails early with a precise compatibility error.

## Scope Notes

This ADR establishes the architectural direction. It does **not** require full MCP exposure of every historical alias or every auxiliary tool in the first implementation.

The first implementation should prioritize the minimum workflow tool set needed to make discuss/plan/execute/complete flows work safely for MCP-capable providers.
@@ -21,7 +21,7 @@ import type { Logger } from './logger.js';
 
 const DEFAULT_CATEGORY_NAME = 'SF Projects';
 const ARCHIVE_CATEGORY_NAME = 'SF Archive';
-const CHANNEL_PREFIX = 'gsd-';
+const CHANNEL_PREFIX = 'sf-';
 const MAX_CHANNEL_NAME_LENGTH = 100; // Discord's limit
 
 // ---------------------------------------------------------------------------
@@ -36,10 +36,10 @@ const MAX_CHANNEL_NAME_LENGTH = 100; // Discord's limit
  * - Replaces non-alphanumeric (except hyphens) with hyphens
  * - Collapses consecutive hyphens
  * - Trims leading/trailing hyphens
- * - Prefixes with 'gsd-'
+ * - Prefixes with 'sf-'
  * - Caps total length at 100 chars (Discord limit)
  *
- * Returns 'gsd-unnamed' for empty/whitespace-only inputs.
+ * Returns 'sf-unnamed' for empty/whitespace-only inputs.
  */
 export function sanitizeChannelName(projectDir: string): string {
   // Extract basename — handle both forward and back slashes
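For reference, a minimal TypeScript reconstruction of the sanitization rules listed in that docblock: an illustrative sketch under the new sf- prefix, not the function body from this file.

```ts
function sanitizeChannelNameSketch(projectDir: string): string {
  // Basename — handle both forward and back slashes.
  const basename = projectDir.split(/[\\/]/).filter(Boolean).pop() ?? '';
  const name = basename
    .toLowerCase()
    .replace(/[^a-z0-9-]+/g, '-') // non-alphanumerics (except hyphens) → hyphens
    .replace(/-{2,}/g, '-') // collapse consecutive hyphens
    .replace(/^-+|-+$/g, ''); // trim leading/trailing hyphens
  if (!name) return 'sf-unnamed';
  return `sf-${name}`.slice(0, 100); // cap at Discord's 100-char limit
}
```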
@@ -51,7 +51,7 @@ export function sanitizeChannelName(projectDir: string): string {
 
   // Fallback for empty basename
   if (!basename) {
-    return 'gsd-unnamed';
+    return 'sf-unnamed';
   }
 
   // Lowercase

@@ -68,7 +68,7 @@ export function sanitizeChannelName(projectDir: string): string {
 
   // Fallback if nothing remains after sanitization
   if (!name) {
-    return 'gsd-unnamed';
+    return 'sf-unnamed';
   }
 
   // Prefix
@@ -7,10 +7,10 @@ import { Logger } from './logger.js';
 import { Daemon } from './daemon.js';
 import { install, uninstall, status } from './launchd.js';
 
-const USAGE = `Usage: gsd-daemon [options]
+const USAGE = `Usage: sf-daemon [options]
 
 Options:
-  --config <path>   Path to YAML config file (default: ~/.gsd/daemon.yaml)
+  --config <path>   Path to YAML config file (default: ~/.sf/daemon.yaml)
   --verbose         Print log entries to stderr in addition to the log file
   --install         Install the launchd LaunchAgent (auto-starts on login)
   --uninstall       Uninstall the launchd LaunchAgent

@@ -48,27 +48,27 @@ async function main(): Promise<void> {
       scriptPath,
       configPath,
     });
-    process.stdout.write('gsd-daemon: launchd agent installed and loaded.\n');
+    process.stdout.write('sf-daemon: launchd agent installed and loaded.\n');
    process.exit(0);
  }

  if (values.uninstall) {
    uninstall();
-    process.stdout.write('gsd-daemon: launchd agent uninstalled.\n');
+    process.stdout.write('sf-daemon: launchd agent uninstalled.\n');
    process.exit(0);
  }

  if (values.status) {
    const result = status();
    if (!result.registered) {
-      process.stdout.write('gsd-daemon: not registered with launchd.\n');
+      process.stdout.write('sf-daemon: not registered with launchd.\n');
    } else if (result.pid != null) {
      process.stdout.write(
-        `gsd-daemon: running (PID ${result.pid}, last exit status: ${result.lastExitStatus ?? 'n/a'})\n`,
+        `sf-daemon: running (PID ${result.pid}, last exit status: ${result.lastExitStatus ?? 'n/a'})\n`,
      );
    } else {
      process.stdout.write(
-        `gsd-daemon: registered but not running (last exit status: ${result.lastExitStatus ?? 'n/a'})\n`,
+        `sf-daemon: registered but not running (last exit status: ${result.lastExitStatus ?? 'n/a'})\n`,
      );
    }
    process.exit(0);

@@ -91,6 +91,6 @@ async function main(): Promise<void> {
 
 main().catch((err: unknown) => {
   const msg = err instanceof Error ? err.message : String(err);
-  process.stderr.write(`gsd-daemon: fatal: ${msg}\n`);
+  process.stderr.write(`sf-daemon: fatal: ${msg}\n`);
   process.exit(1);
 });
@@ -90,7 +90,7 @@ export async function registerGuildCommands(
 // ---------------------------------------------------------------------------
 
 /**
- * Format session list for /gsd-status reply.
+ * Format session list for /sf-status reply.
  * Shows projectName, status, duration, and cost for each session.
  * Returns 'No active sessions.' if the array is empty.
  */
@@ -20,7 +20,7 @@ function defaults(): DaemonConfig {
     discord: undefined,
     projects: { scan_roots: [] },
     log: {
-      file: resolve(homedir(), '.gsd', 'daemon.log'),
+      file: resolve(homedir(), '.sf', 'daemon.log'),
       level: 'info',
       max_size_mb: 50,
     },

@@ -29,13 +29,13 @@ function defaults(): DaemonConfig {
 
 /**
  * Resolve the config file path.
- * Priority: explicit CLI arg → SF_DAEMON_CONFIG env → ~/.gsd/daemon.yaml
+ * Priority: explicit CLI arg → SF_DAEMON_CONFIG env → ~/.sf/daemon.yaml
  */
 export function resolveConfigPath(cliPath?: string): string {
   if (cliPath) return expandTilde(cliPath);
   const envPath = process.env['SF_DAEMON_CONFIG'];
   if (envPath) return expandTilde(envPath);
-  return resolve(homedir(), '.gsd', 'daemon.yaml');
+  return resolve(homedir(), '.sf', 'daemon.yaml');
 }
 
 /**
@@ -53,12 +53,12 @@ describe('resolveConfigPath', () => {
     }
   });
 
-  it('defaults to ~/.gsd/daemon.yaml', () => {
+  it('defaults to ~/.sf/daemon.yaml', () => {
     const prev = process.env['SF_DAEMON_CONFIG'];
     try {
       delete process.env['SF_DAEMON_CONFIG'];
       const p = resolveConfigPath();
-      assert.equal(p, join(homedir(), '.gsd', 'daemon.yaml'));
+      assert.equal(p, join(homedir(), '.sf', 'daemon.yaml'));
     } finally {
       if (prev !== undefined) process.env['SF_DAEMON_CONFIG'] = prev;
     }

@@ -529,7 +529,7 @@ describe('CLI integration', () => {
       [join(__dirname, 'cli.js'), '--help'],
       { encoding: 'utf-8', timeout: 5000 },
     );
-    assert.ok(result.includes('Usage: gsd-daemon'));
+    assert.ok(result.includes('Usage: sf-daemon'));
     assert.ok(result.includes('--config'));
     assert.ok(result.includes('--verbose'));
   });
@@ -230,52 +230,52 @@ describe('Daemon + DiscordBot wiring', () => {
 // ---------- sanitizeChannelName ----------
 
 describe('sanitizeChannelName', () => {
-  it('converts basic path to gsd-prefixed name', () => {
-    assert.equal(sanitizeChannelName('/home/user/my-project'), 'gsd-my-project');
+  it('converts basic path to sf-prefixed name', () => {
+    assert.equal(sanitizeChannelName('/home/user/my-project'), 'sf-my-project');
   });
 
   it('converts path with special characters to hyphens', () => {
-    assert.equal(sanitizeChannelName('/home/user/My_Cool.Project!v2'), 'gsd-my-cool-project-v2');
+    assert.equal(sanitizeChannelName('/home/user/My_Cool.Project!v2'), 'sf-my-cool-project-v2');
   });
 
   it('truncates very long names to 100 chars', () => {
     const longName = 'a'.repeat(200);
     const result = sanitizeChannelName(`/home/${longName}`);
     assert.ok(result.length <= 100, `Expected <= 100 chars, got ${result.length}`);
-    assert.ok(result.startsWith('gsd-'));
+    assert.ok(result.startsWith('sf-'));
   });
 
   it('cleans leading/trailing dots and underscores', () => {
-    assert.equal(sanitizeChannelName('/home/...___project___...'), 'gsd-project');
+    assert.equal(sanitizeChannelName('/home/...___project___...'), 'sf-project');
   });
 
-  it('returns gsd-unnamed for empty basename', () => {
-    assert.equal(sanitizeChannelName(''), 'gsd-unnamed');
-    assert.equal(sanitizeChannelName('/'), 'gsd-unnamed');
+  it('returns sf-unnamed for empty basename', () => {
+    assert.equal(sanitizeChannelName(''), 'sf-unnamed');
+    assert.equal(sanitizeChannelName('/'), 'sf-unnamed');
   });
 
-  it('returns gsd-unnamed for basename with only special chars', () => {
-    assert.equal(sanitizeChannelName('/home/!!!'), 'gsd-unnamed');
+  it('returns sf-unnamed for basename with only special chars', () => {
+    assert.equal(sanitizeChannelName('/home/!!!'), 'sf-unnamed');
   });
 
   it('collapses consecutive hyphens', () => {
-    assert.equal(sanitizeChannelName('/home/a---b---c'), 'gsd-a-b-c');
+    assert.equal(sanitizeChannelName('/home/a---b---c'), 'sf-a-b-c');
   });
 
   it('handles Windows-style backslash paths', () => {
-    assert.equal(sanitizeChannelName('C:\\Users\\lex\\my-project'), 'gsd-my-project');
+    assert.equal(sanitizeChannelName('C:\\Users\\lex\\my-project'), 'sf-my-project');
   });
 
-  it('handles name at exact prefix + 96 chars = 100 char limit', () => {
-    // gsd- is 4 chars, so a 96-char basename should produce exactly 100
-    const name96 = 'a'.repeat(96);
-    const result = sanitizeChannelName(`/home/${name96}`);
+  it('handles name at exact prefix + 97 chars = 100 char limit', () => {
+    // sf- is 3 chars, so a 97-char basename should produce exactly 100
+    const name97 = 'a'.repeat(97);
+    const result = sanitizeChannelName(`/home/${name97}`);
     assert.equal(result.length, 100);
-    assert.equal(result, `gsd-${'a'.repeat(96)}`);
+    assert.equal(result, `sf-${'a'.repeat(97)}`);
   });
 
   it('handles whitespace-only basename', () => {
-    assert.equal(sanitizeChannelName('/home/ '), 'gsd-unnamed');
+    assert.equal(sanitizeChannelName('/home/ '), 'sf-unnamed');
   });
 });
@@ -383,7 +383,7 @@ describe('ChannelManager', () => {
     const mgr = new ChannelManager({ guild: guild as any, logger: logger as any });
 
     const channel = await mgr.createProjectChannel('/home/user/my-project');
-    assert.equal(channel.name, 'gsd-my-project');
+    assert.equal(channel.name, 'sf-my-project');
     assert.equal(channel.type, ChannelType.GuildText);
     // Category was created first (chan-1), then channel (chan-2)
     assert.equal(channel.parentId, 'chan-1');
@@ -443,10 +443,10 @@ describe('buildCommands', () => {
     const commands = buildCommands();
     assert.equal(commands.length, 4);
     const names = commands.map((c) => c.name);
-    assert.ok(names.includes('gsd-status'), 'should include gsd-status');
-    assert.ok(names.includes('gsd-start'), 'should include gsd-start');
-    assert.ok(names.includes('gsd-stop'), 'should include gsd-stop');
-    assert.ok(names.includes('gsd-verbose'), 'should include gsd-verbose');
+    assert.ok(names.includes('sf-status'), 'should include sf-status');
+    assert.ok(names.includes('sf-start'), 'should include sf-start');
+    assert.ok(names.includes('sf-stop'), 'should include sf-stop');
+    assert.ok(names.includes('sf-verbose'), 'should include sf-verbose');
   });
 
   it('each command has a description', () => {
@@ -551,8 +551,8 @@ describe('command dispatch', () => {
   // The command routing logic is tested indirectly through integration of the
   // pure helpers (buildCommands, formatSessionStatus, isAuthorized).
 
-  it('gsd-status with no sessions produces empty message', () => {
-    // Tests the formatSessionStatus path that /gsd-status calls
+  it('sf-status with no sessions produces empty message', () => {
+    // Tests the formatSessionStatus path that /sf-status calls
     const result = formatSessionStatus([]);
     assert.equal(result, 'No active sessions.');
   });

@@ -560,7 +560,7 @@ describe('command dispatch', () => {
   it('unknown command name is not in buildCommands list', () => {
     const commands = buildCommands();
     const names = commands.map((c) => c.name);
-    assert.ok(!names.includes('gsd-unknown'), 'unknown should not be in command list');
+    assert.ok(!names.includes('sf-unknown'), 'unknown should not be in command list');
   });
 
   it('auth guard rejects non-owner on interaction', () => {
@@ -733,14 +733,14 @@ describe('Daemon orchestrator wiring', () => {
   });
 });
 
-// ---------- /gsd-start and /gsd-stop logic paths ----------
+// ---------- /sf-start and /sf-stop logic paths ----------
 
-describe('/gsd-start and /gsd-stop logic', () => {
+describe('/sf-start and /sf-stop logic', () => {
   // These test the observable logic paths exercised by the handlers.
   // Since handleGsdStart/handleGsdStop are private, we test the data layer
   // they depend on — project scanning, session listing, and edge cases.
 
-  it('/gsd-start: scanForProjects returning 0 projects', async () => {
+  it('/sf-start: scanForProjects returning 0 projects', async () => {
     // Simulates the "no projects" path
     const { scanForProjects } = await import('./project-scanner.js');
     // With no scan roots, should return empty

@@ -748,7 +748,7 @@ describe('/gsd-start and /gsd-stop logic', () => {
     assert.equal(projects.length, 0);
   });
 
-  it('/gsd-stop: getAllSessions returns empty when no sessions active', async () => {
+  it('/sf-stop: getAllSessions returns empty when no sessions active', async () => {
     const { SessionManager } = await import('./session-manager.js');
     const dir = tmpDir();
     cleanupDirs.push(dir);

@@ -760,7 +760,7 @@ describe('/gsd-start and /gsd-stop logic', () => {
     await logger.close();
   });
 
-  it('/gsd-stop: filters to active sessions only', () => {
+  it('/sf-stop: filters to active sessions only', () => {
     // Simulate the filter logic used in handleGsdStop
     const allSessions: Partial<ManagedSession>[] = [
       { sessionId: 's1', status: 'running', projectName: 'alpha' },

@@ -777,7 +777,7 @@ describe('/gsd-start and /gsd-stop logic', () => {
     assert.deepEqual(active.map((s) => s.projectName), ['alpha', 'gamma', 'epsilon']);
   });
 
-  it('/gsd-start: >25 projects are truncated for select menu', () => {
+  it('/sf-start: >25 projects are truncated for select menu', () => {
     // Simulate the truncation logic
     const projects = Array.from({ length: 30 }, (_, i) => ({
       name: `project-${i}`,
@@ -256,7 +256,7 @@ export class DiscordBot {
   }
 
   /**
-   * Set the EventBridge reference so the bot can dispatch /gsd-verbose commands.
+   * Set the EventBridge reference so the bot can dispatch /sf-verbose commands.
    * Called by Daemon after creating the EventBridge.
    */
   setEventBridge(bridge: EventBridge): void {

@@ -286,34 +286,34 @@ export class DiscordBot {
     this.logger.info('command handled', { commandName, userId: interaction.user.id });
 
     switch (commandName) {
-      case 'gsd-status': {
+      case 'sf-status': {
        const sessions = this.sessionManager.getAllSessions();
        const content = formatSessionStatus(sessions);
        interaction.reply({ content, ephemeral: true }).catch((err) => {
-          this.logger.warn('gsd-status reply failed', {
+          this.logger.warn('sf-status reply failed', {
            error: err instanceof Error ? err.message : String(err),
          });
        });
        break;
      }
-      case 'gsd-start':
+      case 'sf-start':
        this.handleGsdStart(interaction).catch((err) => {
-          this.logger.warn('gsd-start handler error', {
+          this.logger.warn('sf-start handler error', {
            error: err instanceof Error ? err.message : String(err),
          });
        });
        break;
-      case 'gsd-stop':
+      case 'sf-stop':
        this.handleGsdStop(interaction).catch((err) => {
-          this.logger.warn('gsd-stop handler error', {
+          this.logger.warn('sf-stop handler error', {
            error: err instanceof Error ? err.message : String(err),
          });
        });
        break;
-      case 'gsd-verbose': {
+      case 'sf-verbose': {
        if (!this.eventBridge) {
          interaction.reply({ content: 'Event bridge not available.', ephemeral: true }).catch((err) => {
-            this.logger.warn('gsd-verbose reply failed', {
+            this.logger.warn('sf-verbose reply failed', {
              error: err instanceof Error ? err.message : String(err),
            });
          });

@@ -323,7 +323,7 @@ export class DiscordBot {
       const channelId = interaction.channelId;
       this.eventBridge.getVerbosityManager().setLevel(channelId, level);
       interaction.reply({ content: `Verbosity set to **${level}** for this channel.`, ephemeral: true }).catch((err) => {
-        this.logger.warn('gsd-verbose reply failed', {
+        this.logger.warn('sf-verbose reply failed', {
          error: err instanceof Error ? err.message : String(err),
        });
      });

@@ -340,12 +340,12 @@ export class DiscordBot {
   }
 
   // ---------------------------------------------------------------------------
-  // Private: /gsd-start handler
+  // Private: /sf-start handler
   // ---------------------------------------------------------------------------
 
   private async handleGsdStart(interaction: import('discord.js').ChatInputCommandInteraction): Promise<void> {
     await interaction.deferReply({ ephemeral: true });
-    this.logger.info('gsd-start: scanning projects');
+    this.logger.info('sf-start: scanning projects');
 
     if (!this.scanProjects) {
       await interaction.editReply({ content: 'Project scanning not available.' });

@@ -356,7 +356,7 @@ export class DiscordBot {
     try {
       projects = await this.scanProjects();
     } catch (err) {
-      this.logger.error('gsd-start: scan failed', {
+      this.logger.error('sf-start: scan failed', {
        error: err instanceof Error ? err.message : String(err),
      });
      await interaction.editReply({ content: 'Failed to scan for projects.' });

@@ -371,7 +371,7 @@ export class DiscordBot {
     // Discord select menus support max 25 options
     const truncated = projects.slice(0, 25);
     const select = new StringSelectMenuBuilder()
-      .setCustomId('gsd-start-select')
+      .setCustomId('sf-start-select')
       .setPlaceholder('Select a project to start')
       .addOptions(
         truncated.map((p) => ({

@@ -395,7 +395,7 @@ export class DiscordBot {
     }) as StringSelectMenuInteraction;
 
     const projectPath = collected.values[0];
-    this.logger.info('gsd-start: project selected', { projectPath });
+    this.logger.info('sf-start: project selected', { projectPath });
 
     // Defer the update immediately — startSession can take 10-30s to spawn the SF process,
     // and Discord's component interaction token expires in 3 seconds without deferral.

@@ -409,7 +409,7 @@ export class DiscordBot {
       });
     } catch (err) {
       const errMsg = err instanceof Error ? err.message : String(err);
-      this.logger.error('gsd-start: startSession failed', { error: errMsg, projectPath });
+      this.logger.error('sf-start: startSession failed', { error: errMsg, projectPath });
       await interaction.editReply({
         content: `❌ Failed to start session: ${errMsg}`,
         components: [],

@@ -417,18 +417,18 @@ export class DiscordBot {
       }
     } catch {
       // Timeout or other collector error
-      this.logger.info('gsd-start: selection timed out');
+      this.logger.info('sf-start: selection timed out');
       await interaction.editReply({ content: 'Selection timed out.', components: [] });
     }
   }
 
   // ---------------------------------------------------------------------------
-  // Private: /gsd-stop handler
+  // Private: /sf-stop handler
   // ---------------------------------------------------------------------------
 
   private async handleGsdStop(interaction: import('discord.js').ChatInputCommandInteraction): Promise<void> {
     await interaction.deferReply({ ephemeral: true });
-    this.logger.info('gsd-stop: listing sessions');
+    this.logger.info('sf-stop: listing sessions');
 
     const allSessions = this.sessionManager.getAllSessions();
     const activeSessions = allSessions.filter(

@@ -443,7 +443,7 @@ export class DiscordBot {
     // Discord select menus support max 25 options
     const truncated = activeSessions.slice(0, 25);
     const select = new StringSelectMenuBuilder()
-      .setCustomId('gsd-stop-select')
+      .setCustomId('sf-stop-select')
       .setPlaceholder('Select a session to stop')
       .addOptions(
         truncated.map((s) => ({

@@ -466,7 +466,7 @@ export class DiscordBot {
     }) as StringSelectMenuInteraction;
 
     const sessionId = collected.values[0];
-    this.logger.info('gsd-stop: session selected', { sessionId });
+    this.logger.info('sf-stop: session selected', { sessionId });
 
     try {
       await this.sessionManager.cancelSession(sessionId);

@@ -476,7 +476,7 @@ export class DiscordBot {
       });
     } catch (err) {
       const errMsg = err instanceof Error ? err.message : String(err);
-      this.logger.error('gsd-stop: cancelSession failed', { error: errMsg, sessionId });
+      this.logger.error('sf-stop: cancelSession failed', { error: errMsg, sessionId });
       await collected.update({
         content: `❌ Failed to stop session: ${errMsg}`,
         components: [],

@@ -484,7 +484,7 @@ export class DiscordBot {
       }
     } catch {
       // Timeout or other collector error
-      this.logger.info('gsd-stop: selection timed out');
+      this.logger.info('sf-stop: selection timed out');
       await interaction.editReply({ content: 'Selection timed out.', components: [] });
     }
   }
@@ -31,8 +31,8 @@ afterEach(() => {
 function basePlistOpts(overrides?: Partial<PlistOptions>): PlistOptions {
   return {
     nodePath: '/usr/local/bin/node',
-    scriptPath: '/usr/local/lib/gsd-daemon/dist/cli.js',
-    configPath: join(homedir(), '.gsd', 'daemon.yaml'),
+    scriptPath: '/usr/local/lib/sf-daemon/dist/cli.js',
+    configPath: join(homedir(), '.sf', 'daemon.yaml'),
     ...overrides,
   };
 }

@@ -69,9 +69,9 @@ describe('generatePlist', () => {
     assert.ok(xml.includes('</plist>'));
   });
 
-  it('includes label com.gsd.daemon', () => {
+  it('includes label com.sf.daemon', () => {
     const xml = generatePlist(basePlistOpts());
-    assert.ok(xml.includes('<string>com.gsd.daemon</string>'));
+    assert.ok(xml.includes('<string>com.sf.daemon</string>'));
   });
 
   it('uses the absolute node path from opts', () => {

@@ -149,8 +149,8 @@ describe('generatePlist', () => {
 // ---------- getPlistPath ----------
 
 describe('getPlistPath', () => {
-  it('returns ~/Library/LaunchAgents/com.gsd.daemon.plist', () => {
-    const expected = join(homedir(), 'Library', 'LaunchAgents', 'com.gsd.daemon.plist');
+  it('returns ~/Library/LaunchAgents/com.sf.daemon.plist', () => {
+    const expected = join(homedir(), 'Library', 'LaunchAgents', 'com.sf.daemon.plist');
     assert.equal(getPlistPath(), expected);
   });
 });

@@ -193,7 +193,7 @@ describe('install', () => {
     // (install is a thin wrapper around generatePlist + writeFile + launchctl)
     const xml = generatePlist(basePlistOpts());
     assert.ok(xml.includes('<key>Label</key>'));
-    assert.ok(xml.includes('<string>com.gsd.daemon</string>'));
+    assert.ok(xml.includes('<string>com.sf.daemon</string>'));
   });
 
   it('handles idempotent install (unloads first if plist exists)', () => {

@@ -266,7 +266,7 @@ describe('uninstall', () => {
 describe('status', () => {
   it('parses running daemon output (PID present)', () => {
     const mockRun: RunCommandFn = (_cmd: string) => {
-      return '{\n\t"PID" = 1234;\n\t"Label" = "com.gsd.daemon";\n}\nPID\tStatus\tLabel\n1234\t0\tcom.gsd.daemon\n';
+      return '{\n\t"PID" = 1234;\n\t"Label" = "com.sf.daemon";\n}\nPID\tStatus\tLabel\n1234\t0\tcom.sf.daemon\n';
     };
 
     const result = status(mockRun);

@@ -277,7 +277,7 @@ describe('status', () => {
 
   it('parses stopped daemon output (no PID)', () => {
     const mockRun: RunCommandFn = (_cmd: string) => {
-      return 'PID\tStatus\tLabel\n-\t78\tcom.gsd.daemon\n';
+      return 'PID\tStatus\tLabel\n-\t78\tcom.sf.daemon\n';
     };
 
     const result = status(mockRun);

@@ -288,7 +288,7 @@ describe('status', () => {
 
   it('returns not-registered when launchctl list fails', () => {
     const mockRun: RunCommandFn = (_cmd: string) => {
-      throw new Error('Could not find service "com.gsd.daemon" in domain for port');
+      throw new Error('Could not find service "com.sf.daemon" in domain for port');
     };
 
     const result = status(mockRun);

@@ -299,7 +299,7 @@ describe('status', () => {
 
   it('returns structured result with all fields', () => {
     const mockRun: RunCommandFn = (_cmd: string) => {
-      return 'PID\tStatus\tLabel\n5678\t0\tcom.gsd.daemon\n';
+      return 'PID\tStatus\tLabel\n5678\t0\tcom.sf.daemon\n';
     };
 
     const result = status(mockRun);

@@ -311,10 +311,10 @@ describe('status', () => {
   it('parses JSON-style dict output (newer macOS)', () => {
     const mockRun: RunCommandFn = (_cmd: string) => {
       return `{
-\t"StandardOutPath" = "/Users/me/.gsd/daemon-stdout.log";
+\t"StandardOutPath" = "/Users/me/.sf/daemon-stdout.log";
 \t"LimitLoadToSessionType" = "Aqua";
-\t"StandardErrorPath" = "/Users/me/.gsd/daemon-stderr.log";
-\t"Label" = "com.gsd.daemon";
+\t"StandardErrorPath" = "/Users/me/.sf/daemon-stderr.log";
+\t"Label" = "com.sf.daemon";
 \t"OnDemand" = true;
 \t"LastExitStatus" = 0;
 \t"PID" = 23802;

@@ -331,7 +331,7 @@ describe('status', () => {
   it('parses JSON-style dict output when daemon stopped (no PID key)', () => {
     const mockRun: RunCommandFn = (_cmd: string) => {
       return `{
-\t"Label" = "com.gsd.daemon";
+\t"Label" = "com.sf.daemon";
 \t"LastExitStatus" = 1;
 \t"OnDemand" = true;
 };`;
@@ -34,7 +34,7 @@ export type RunCommandFn = (cmd: string) => string;
 
 // --------------- constants ---------------
 
-const LABEL = 'com.gsd.daemon';
+const LABEL = 'com.sf.daemon';
 const PLIST_FILENAME = `${LABEL}.plist`;
 
 // --------------- helpers ---------------

@@ -71,8 +71,8 @@ function buildEnvPath(nodePath: string): string {
 export function generatePlist(opts: PlistOptions): string {
   const home = homedir();
   const workDir = opts.workingDirectory ?? home;
-  const stdoutPath = opts.stdoutPath ?? resolve(home, '.gsd', 'daemon-stdout.log');
-  const stderrPath = opts.stderrPath ?? resolve(home, '.gsd', 'daemon-stderr.log');
+  const stdoutPath = opts.stdoutPath ?? resolve(home, '.sf', 'daemon-stdout.log');
+  const stderrPath = opts.stderrPath ?? resolve(home, '.sf', 'daemon-stderr.log');
   const envPath = buildEnvPath(opts.nodePath);
 
   // Forward ANTHROPIC_API_KEY so the orchestrator LLM can authenticate.
@@ -1,5 +1,5 @@
 /**
- * Tests for Orchestrator — LLM agent for #gsd-control channel.
+ * Tests for Orchestrator — LLM agent for #sf-control channel.
  *
  * Uses a MockAnthropicClient that simulates messages.create() responses,
  * allowing tool execution and conversation flow testing without real API calls.

@@ -226,7 +226,7 @@ function makeOrchestrator(opts?: {
   if (opts?.sessions) sessionManager.sessions = opts.sessions;
 
   const projects: ProjectInfo[] = opts?.projects ?? [
-    { name: 'alpha', path: '/home/user/alpha', markers: ['git', 'node', 'gsd'], lastModified: Date.now() },
+    { name: 'alpha', path: '/home/user/alpha', markers: ['git', 'node', 'sf'], lastModified: Date.now() },
     { name: 'bravo', path: '/home/user/bravo', markers: ['git', 'rust'], lastModified: Date.now() },
   ];
 

@@ -568,7 +568,7 @@ describe('Orchestrator', () => {
     const mockClient = new MockAnthropicClient(
       MockAnthropicClient.toolThenTextHandler(
         'start_session',
-        { projectPath: '/p', command: '/gsd quick fix tests' },
+        { projectPath: '/p', command: '/sf quick fix tests' },
         'Started',
       ),
     );

@@ -577,7 +577,7 @@ describe('Orchestrator', () => {
     await orchestrator.handleMessage(msg);
 
     assert.equal(sessionManager.startSessionCalls.length, 1);
-    assert.equal(sessionManager.startSessionCalls[0]!.command, '/gsd quick fix tests');
+    assert.equal(sessionManager.startSessionCalls[0]!.command, '/sf quick fix tests');
   });
 });
@@ -1,5 +1,5 @@
 /**
- * Orchestrator — LLM-powered agent for the #gsd-control Discord channel.
+ * Orchestrator — LLM-powered agent for the #sf-control Discord channel.
  *
  * Receives Discord messages, maintains conversation history, calls the
  * Anthropic messages API with 5 tool definitions (list_projects, start_session,

@@ -35,7 +35,7 @@ function resolveAnthropicApiKey(): string {
   const apiKey = process.env.ANTHROPIC_API_KEY;
   if (!apiKey) {
     throw new Error(
-      'ANTHROPIC_API_KEY is required. Set it in your environment or run `gsd config`.',
+      'ANTHROPIC_API_KEY is required. Set it in your environment or run `sf config`.',
     );
   }
   return apiKey;

@@ -84,7 +84,7 @@ Response guidelines:
 const TOOLS: Tool[] = [
   {
     name: 'list_projects',
-    description: 'List all detected projects across configured scan roots. Returns project names, paths, and detected markers (git, node, gsd, etc.).',
+    description: 'List all detected projects across configured scan roots. Returns project names, paths, and detected markers (git, node, sf, etc.).',
     input_schema: {
       type: 'object' as const,
       properties: {},

@@ -93,12 +93,12 @@ const TOOLS: Tool[] = [
   },
   {
     name: 'start_session',
-    description: 'Start a new SF auto-mode session for a project. Provide the absolute project path. Optionally provide a command to run instead of the default "/gsd auto".',
+    description: 'Start a new SF auto-mode session for a project. Provide the absolute project path. Optionally provide a command to run instead of the default "/sf auto".',
     input_schema: {
       type: 'object' as const,
       properties: {
         projectPath: { type: 'string', description: 'Absolute path to the project directory' },
-        command: { type: 'string', description: 'Optional command to send instead of "/gsd auto"' },
+        command: { type: 'string', description: 'Optional command to send instead of "/sf auto"' },
       },
       required: ['projectPath'],
     },
@@ -31,7 +31,7 @@ function createProject(root: string, name: string, markers: string[]): string {
   for (const marker of markers) {
     const markerPath = join(projDir, marker);
     if (marker.startsWith('.') && !marker.slice(1).includes('.')) {
-      // Likely a directory marker (.git, .gsd)
+      // Likely a directory marker (.git, .sf)
       mkdirSync(markerPath, { recursive: true });
     } else {
       // File marker (package.json, Cargo.toml, etc.)
@@ -91,7 +91,7 @@ describe('scanForProjects', () => {
     const root = tmpDir();
     cleanupDirs.push(root);
 
-    createProject(root, 'full-stack', ['.git', 'package.json', '.gsd']);
+    createProject(root, 'full-stack', ['.git', 'package.json', '.sf']);
 
     const results = await scanForProjects([root]);
 

@@ -99,7 +99,7 @@ describe('scanForProjects', () => {
     assert.equal(results[0]!.markers.length, 3);
     assert.ok(results[0]!.markers.includes('git'));
     assert.ok(results[0]!.markers.includes('node'));
-    assert.ok(results[0]!.markers.includes('gsd'));
+    assert.ok(results[0]!.markers.includes('sf'));
   });
 
   it('returns results sorted alphabetically by name', async () => {

@@ -181,7 +181,7 @@ describe('scanForProjects', () => {
 
     createProject(root, 'git-proj', ['.git']);
     createProject(root, 'node-proj', ['package.json']);
-    createProject(root, 'gsd-proj', ['.gsd']);
+    createProject(root, 'sf-proj', ['.sf']);
     createProject(root, 'rust-proj', ['Cargo.toml']);
     createProject(root, 'python-proj', ['pyproject.toml']);
     createProject(root, 'go-proj', ['go.mod']);

@@ -193,7 +193,7 @@ describe('scanForProjects', () => {
     const byName = new Map(results.map(r => [r.name, r]));
     assert.deepEqual(byName.get('git-proj')!.markers, ['git']);
     assert.deepEqual(byName.get('node-proj')!.markers, ['node']);
-    assert.deepEqual(byName.get('gsd-proj')!.markers, ['gsd']);
+    assert.deepEqual(byName.get('sf-proj')!.markers, ['sf']);
     assert.deepEqual(byName.get('rust-proj')!.markers, ['rust']);
     assert.deepEqual(byName.get('python-proj')!.markers, ['python']);
     assert.deepEqual(byName.get('go-proj')!.markers, ['go']);
@@ -14,7 +14,7 @@ import type { ProjectInfo, ProjectMarker } from './types.js';
 const MARKER_MAP: ReadonlyMap<string, ProjectMarker> = new Map([
   ['.git', 'git'],
   ['package.json', 'node'],
-  ['.gsd', 'gsd'],
+  ['.sf', 'sf'],
   ['Cargo.toml', 'rust'],
   ['pyproject.toml', 'python'],
   ['go.mod', 'go'],
@@ -160,7 +160,7 @@ class TestableSessionManager extends SessionManager {
     });
 
     // Kick off auto-mode
-    const command = options.command ?? '/gsd auto';
+    const command = options.command ?? '/sf auto';
     await client.prompt(command);
 
     // Emit lifecycle events (matching parent behavior)

@@ -801,11 +801,11 @@ describe('SessionManager', () => {
   it('sends custom command when provided', async () => {
     const { manager } = createManager();
 
-    await manager.startSession({ projectDir: '/tmp/custom-cmd', command: '/gsd quick fix-typo' });
+    await manager.startSession({ projectDir: '/tmp/custom-cmd', command: '/sf quick fix-typo' });
     const client = manager.lastClient!;
 
-    assert.ok(client.prompted.includes('/gsd quick fix-typo'));
-    assert.ok(!client.prompted.includes('/gsd auto'));
+    assert.ok(client.prompted.includes('/sf quick fix-typo'));
+    assert.ok(!client.prompted.includes('/sf auto'));
   });
 
   // ---- getSessionByDir returns session by directory lookup ----
@@ -71,7 +71,7 @@ export class SessionManager extends EventEmitter {
    *
    * Rejects if a session already exists for this projectDir.
    * Creates an RpcClient, starts the process, performs the v2 init handshake,
-   * wires event tracking, and sends '/gsd auto' to begin execution.
+   * wires event tracking, and sends '/sf auto' to begin execution.
    */
   async startSession(options: StartSessionOptions): Promise<string> {
     const { projectDir } = options;

@@ -140,7 +140,7 @@ export class SessionManager extends EventEmitter {
     });
 
     // Kick off auto-mode
-    const command = options.command ?? '/gsd auto';
+    const command = options.command ?? '/sf auto';
     await client.prompt(command);
 
     this.logger.info('session started', { sessionId: session.sessionId, projectDir: resolvedDir });

@@ -278,21 +278,21 @@ export class SessionManager extends EventEmitter {
    * Resolve the SF CLI path.
    *
    * 1. SF_CLI_PATH env var (highest priority)
-   * 2. `which gsd` → resolve to the actual dist/cli.js
+   * 2. `which sf` → resolve to the actual dist/cli.js
    */
   static resolveCLIPath(): string {
     const envPath = process.env['SF_CLI_PATH'];
     if (envPath) return resolve(envPath);
 
     try {
-      const gsdBin = execSync('which gsd', { encoding: 'utf-8' }).trim();
+      const gsdBin = execSync('which sf', { encoding: 'utf-8' }).trim();
       if (gsdBin) return resolve(gsdBin);
     } catch {
       // which failed
     }
 
     throw new Error(
-      'Cannot find SF CLI. Set SF_CLI_PATH environment variable or ensure `gsd` is in PATH.'
+      'Cannot find SF CLI. Set SF_CLI_PATH environment variable or ensure `sf` is in PATH.'
     );
   }
@@ -137,7 +137,7 @@ export interface CostAccumulator {
 // ---------------------------------------------------------------------------
 
 /** Marker types detectable by the project scanner */
-export type ProjectMarker = 'git' | 'node' | 'gsd' | 'rust' | 'python' | 'go';
+export type ProjectMarker = 'git' | 'node' | 'sf' | 'rust' | 'python' | 'go';
 
 export interface ProjectInfo {
   /** Directory name (basename) */

@@ -161,7 +161,7 @@ export interface StartSessionOptions {
   /** Absolute path to the project directory */
   projectDir: string;
 
-  /** Command to send after '/gsd auto' (default: none) */
+  /** Command to send after '/sf auto' (default: none) */
   command?: string;
 
   /** Model ID override */
@@ -29,7 +29,7 @@ async function main(): Promise<void> {
   async function cleanup(): Promise<void> {
     if (cleaningUp) return;
     cleaningUp = true;
-    process.stderr.write('[gsd-mcp-server] Shutting down...\n');
+    process.stderr.write('[sf-mcp-server] Shutting down...\n');
     try {
       await sessionManager.cleanup();
     } catch {

@@ -52,10 +52,10 @@ async function main(): Promise<void> {
   // Connect and start serving
   try {
     await server.connect(transport);
-    process.stderr.write('[gsd-mcp-server] MCP server started on stdio\n');
+    process.stderr.write('[sf-mcp-server] MCP server started on stdio\n');
   } catch (err) {
     process.stderr.write(
-      `[gsd-mcp-server] Fatal: failed to start — ${err instanceof Error ? err.message : String(err)}\n`
+      `[sf-mcp-server] Fatal: failed to start — ${err instanceof Error ? err.message : String(err)}\n`
     );
     await sessionManager.cleanup();
     process.exit(1);

@@ -64,7 +64,7 @@ async function main(): Promise<void> {
 
 main().catch((err) => {
   process.stderr.write(
-    `[gsd-mcp-server] Fatal: ${err instanceof Error ? err.message : String(err)}\n`
+    `[sf-mcp-server] Fatal: ${err instanceof Error ? err.message : String(err)}\n`
   );
   process.exit(1);
 });

@@ -172,7 +172,7 @@ class TestableSessionManager extends SessionManager {
});

// Kick off auto-mode
-const command = options.command ?? '/gsd auto';
+const command = options.command ?? '/sf auto';
await client.prompt(command);

return session.sessionId;

@@ -227,7 +227,7 @@ describe('SessionManager', () => {
});

it('startSession creates session and returns sessionId', async () => {
-const sessionId = await sm.startSession('/tmp/test-project', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/test-project', { cliPath: '/usr/bin/sf' });
assert.equal(sessionId, 'mock-session-001');

const session = sm.getSession(sessionId);

@@ -236,22 +236,22 @@ describe('SessionManager', () => {
assert.equal(session.projectDir, resolve('/tmp/test-project'));
});

-it('startSession sends /gsd auto by default', async () => {
-await sm.startSession('/tmp/test-prompt', { cliPath: '/usr/bin/gsd' });
+it('startSession sends /sf auto by default', async () => {
+await sm.startSession('/tmp/test-prompt', { cliPath: '/usr/bin/sf' });
assert.ok(sm.lastClient);
-assert.deepEqual(sm.lastClient.prompted, ['/gsd auto']);
+assert.deepEqual(sm.lastClient.prompted, ['/sf auto']);
});

it('startSession sends custom command when provided', async () => {
-await sm.startSession('/tmp/test-cmd', { cliPath: '/usr/bin/gsd', command: '/gsd auto --resume' });
+await sm.startSession('/tmp/test-cmd', { cliPath: '/usr/bin/sf', command: '/sf auto --resume' });
assert.ok(sm.lastClient);
-assert.deepEqual(sm.lastClient.prompted, ['/gsd auto --resume']);
+assert.deepEqual(sm.lastClient.prompted, ['/sf auto --resume']);
});

it('startSession rejects duplicate projectDir', async () => {
-await sm.startSession('/tmp/dup-test', { cliPath: '/usr/bin/gsd' });
+await sm.startSession('/tmp/dup-test', { cliPath: '/usr/bin/sf' });
await assert.rejects(
-() => sm.startSession('/tmp/dup-test', { cliPath: '/usr/bin/gsd' }),
+() => sm.startSession('/tmp/dup-test', { cliPath: '/usr/bin/sf' }),
(err: Error) => {
assert.ok(err.message.includes('Session already active'));
return true;

@@ -261,7 +261,7 @@ describe('SessionManager', () => {

it('startSession rejects empty projectDir', async () => {
await assert.rejects(
-() => sm.startSession('', { cliPath: '/usr/bin/gsd' }),
+() => sm.startSession('', { cliPath: '/usr/bin/sf' }),
(err: Error) => {
assert.ok(err.message.includes('projectDir is required'));
return true;

@@ -273,7 +273,7 @@ describe('SessionManager', () => {
sm.nextStartError = new Error('spawn failed');

await assert.rejects(
-() => sm.startSession('/tmp/fail-start', { cliPath: '/usr/bin/gsd' }),
+() => sm.startSession('/tmp/fail-start', { cliPath: '/usr/bin/sf' }),
(err: Error) => {
assert.ok(err.message.includes('Failed to start session'));
assert.ok(err.message.includes('spawn failed'));

@@ -286,7 +286,7 @@ describe('SessionManager', () => {
sm.nextInitError = new Error('handshake failed');

await assert.rejects(
-() => sm.startSession('/tmp/fail-init', { cliPath: '/usr/bin/gsd' }),
+() => sm.startSession('/tmp/fail-init', { cliPath: '/usr/bin/sf' }),
(err: Error) => {
assert.ok(err.message.includes('Failed to start session'));
assert.ok(err.message.includes('handshake failed'));

@@ -301,14 +301,14 @@ describe('SessionManager', () => {
});

it('getSessionByDir returns session for known dir', async () => {
-await sm.startSession('/tmp/by-dir', { cliPath: '/usr/bin/gsd' });
+await sm.startSession('/tmp/by-dir', { cliPath: '/usr/bin/sf' });
const session = sm.getSessionByDir('/tmp/by-dir');
assert.ok(session);
assert.equal(session.sessionId, 'mock-session-001');
});

it('resolveBlocker errors when no pending blocker', async () => {
-const sessionId = await sm.startSession('/tmp/no-blocker', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/no-blocker', { cliPath: '/usr/bin/sf' });
await assert.rejects(
() => sm.resolveBlocker(sessionId, 'some response'),
(err: Error) => {

@@ -329,7 +329,7 @@ describe('SessionManager', () => {
});

it('resolveBlocker clears pendingBlocker and sends UI response', async () => {
-const sessionId = await sm.startSession('/tmp/blocker-resolve', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/blocker-resolve', { cliPath: '/usr/bin/sf' });
const client = sm.lastClient!;

// Simulate a blocking UI request event

@@ -354,7 +354,7 @@ describe('SessionManager', () => {
});

it('cancelSession calls abort + stop on client', async () => {
-const sessionId = await sm.startSession('/tmp/cancel-test', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/cancel-test', { cliPath: '/usr/bin/sf' });
const client = sm.lastClient!;

await sm.cancelSession(sessionId);

@@ -377,8 +377,8 @@ describe('SessionManager', () => {
});

it('cleanup stops all active sessions', async () => {
-await sm.startSession('/tmp/cleanup-1', { cliPath: '/usr/bin/gsd' });
-await sm.startSession('/tmp/cleanup-2', { cliPath: '/usr/bin/gsd' });
+await sm.startSession('/tmp/cleanup-1', { cliPath: '/usr/bin/sf' });
+await sm.startSession('/tmp/cleanup-2', { cliPath: '/usr/bin/sf' });

assert.equal(sm.allClients.length, 2);


@@ -390,7 +390,7 @@ describe('SessionManager', () => {
});

it('event ring buffer caps at MAX_EVENTS', async () => {
-const sessionId = await sm.startSession('/tmp/ring-buffer', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/ring-buffer', { cliPath: '/usr/bin/sf' });
const client = sm.lastClient!;

for (let i = 0; i < MAX_EVENTS + 20; i++) {

@@ -404,7 +404,7 @@ describe('SessionManager', () => {
});

it('blocker detection: non-fire-and-forget extension_ui_request sets pendingBlocker', async () => {
-const sessionId = await sm.startSession('/tmp/blocker-detect', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/blocker-detect', { cliPath: '/usr/bin/sf' });
const client = sm.lastClient!;

// 'select' is not in FIRE_AND_FORGET_METHODS

@@ -423,7 +423,7 @@ describe('SessionManager', () => {
});

it('fire-and-forget methods do not set pendingBlocker', async () => {
-const sessionId = await sm.startSession('/tmp/fire-forget', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/fire-forget', { cliPath: '/usr/bin/sf' });
const client = sm.lastClient!;

// 'notify' is fire-and-forget — on its own (no terminal prefix) should not block

@@ -440,7 +440,7 @@ describe('SessionManager', () => {
});

it('terminal detection: auto-mode stopped sets status to completed', async () => {
-const sessionId = await sm.startSession('/tmp/terminal', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/terminal', { cliPath: '/usr/bin/sf' });
const client = sm.lastClient!;

client.emitEvent({

@@ -455,7 +455,7 @@ describe('SessionManager', () => {
});

it('terminal detection with blocked: message sets status to blocked', async () => {
-const sessionId = await sm.startSession('/tmp/terminal-blocked', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/terminal-blocked', { cliPath: '/usr/bin/sf' });
const client = sm.lastClient!;

client.emitEvent({

@@ -471,7 +471,7 @@ describe('SessionManager', () => {
});

it('cost tracking: cumulative-max from cost_update events', async () => {
-const sessionId = await sm.startSession('/tmp/cost-track', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/cost-track', { cliPath: '/usr/bin/sf' });
const client = sm.lastClient!;

client.emitEvent({

@@ -495,7 +495,7 @@ describe('SessionManager', () => {
});

it('getResult returns HeadlessJsonResult-shaped object', async () => {
-const sessionId = await sm.startSession('/tmp/result-shape', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/result-shape', { cliPath: '/usr/bin/sf' });
const result = sm.getResult(sessionId);

assert.equal(result.sessionId, sessionId);

@@ -539,9 +539,9 @@ describe('SessionManager.resolveCLIPath', () => {
});

it('SF_CLI_PATH env var takes precedence', () => {
-process.env['SF_CLI_PATH'] = '/custom/path/to/gsd';
+process.env['SF_CLI_PATH'] = '/custom/path/to/sf';
const result = SessionManager.resolveCLIPath();
-assert.equal(result, resolve('/custom/path/to/gsd'));
+assert.equal(result, resolve('/custom/path/to/sf'));
});

it('throws when SF_CLI_PATH not set and which fails', () => {

@@ -585,13 +585,13 @@ describe('createMcpServer tool registration', () => {
});

it('gsd_execute flow returns sessionId on success', async () => {
-const sessionId = await sm.startSession('/tmp/tool-exec', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/tool-exec', { cliPath: '/usr/bin/sf' });
assert.equal(typeof sessionId, 'string');
assert.ok(sessionId.length > 0);
});

it('gsd_status flow returns correct shape', async () => {
-const sessionId = await sm.startSession('/tmp/tool-status', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/tool-status', { cliPath: '/usr/bin/sf' });
const session = sm.getSession(sessionId)!;

assert.equal(typeof session.status, 'string');

@@ -601,7 +601,7 @@ describe('createMcpServer tool registration', () => {
});

it('gsd_resolve_blocker flow returns error when no blocker', async () => {
-const sessionId = await sm.startSession('/tmp/tool-resolve', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/tool-resolve', { cliPath: '/usr/bin/sf' });
await assert.rejects(
() => sm.resolveBlocker(sessionId, 'fix'),
(err: Error) => {

@@ -612,7 +612,7 @@ describe('createMcpServer tool registration', () => {
});

it('gsd_result flow returns HeadlessJsonResult shape', async () => {
-const sessionId = await sm.startSession('/tmp/tool-result', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/tool-result', { cliPath: '/usr/bin/sf' });
const result = sm.getResult(sessionId);

assert.ok('sessionId' in result);

@@ -626,7 +626,7 @@ describe('createMcpServer tool registration', () => {
});

it('gsd_cancel flow marks session as cancelled', async () => {
-const sessionId = await sm.startSession('/tmp/tool-cancel', { cliPath: '/usr/bin/gsd' });
+const sessionId = await sm.startSession('/tmp/tool-cancel', { cliPath: '/usr/bin/sf' });
await sm.cancelSession(sessionId);
const session = sm.getSession(sessionId)!;
assert.equal(session.status, 'cancelled');
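
The ring-buffer and cost-tracking tests renamed above pin down two SessionManager behaviors worth spelling out. Here is a minimal sketch; `MAX_EVENTS`, the event shape, and the `totalUsd` field are assumptions standing in for the real mcp-server types.

```ts
// Illustrative only — simplified shapes, not the real SessionManager internals.
const MAX_EVENTS = 500; // assumed cap; the actual constant lives in the server

interface SessionEvent {
  type: string;
  [key: string]: unknown;
}

class EventRing {
  private events: SessionEvent[] = [];

  push(event: SessionEvent): void {
    this.events.push(event);
    // Drop the oldest entries so the buffer never grows past MAX_EVENTS,
    // which is what the 'ring buffer caps at MAX_EVENTS' test asserts.
    if (this.events.length > MAX_EVENTS) {
      this.events.splice(0, this.events.length - MAX_EVENTS);
    }
  }

  get size(): number {
    return this.events.length;
  }
}

// Cumulative-max cost tracking: each cost_update carries a running total and
// the session keeps the highest value seen, so late or repeated updates can
// only raise the figure, never lower it.
function trackCost(currentMax: number, update: { totalUsd: number }): number {
  return Math.max(currentMax, update.totalUsd);
}
```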

@@ -86,8 +86,8 @@ export function readCaptures(
projectDir: string,
filter: 'all' | 'pending' | 'actionable' = 'all',
): CapturesResult {
-const gsd = resolveGsdRoot(projectDir);
-const capturesPath = resolveRootFile(gsd, 'CAPTURES.md');
+const sf = resolveGsdRoot(projectDir);
+const capturesPath = resolveRootFile(sf, 'CAPTURES.md');

if (!existsSync(capturesPath)) {
return { captures: [], counts: { total: 0, pending: 0, resolved: 0, actionable: 0 } };

@@ -62,7 +62,7 @@ function checkProjectLevel(gsdRoot: string, issues: DoctorIssue[]): void {
code: 'missing_state_md',
scope: 'project',
unitId: '',
-message: 'STATE.md is missing — run /gsd status to regenerate',
+message: 'STATE.md is missing — run /sf status to regenerate',
file: statePath,
});
}

@@ -192,7 +192,7 @@ export function runDoctorLite(projectDir: string, scope?: string): DoctorResult
code: 'no_gsd_directory',
scope: 'project',
unitId: '',
-message: 'No .gsd/ directory found — project not initialized',
+message: 'No .sf/ directory found — project not initialized',
}],
counts: { error: 0, warning: 0, info: 1 },
};

@@ -23,7 +23,7 @@ import type { KnowledgeGraph } from './graph.js';
// ---------------------------------------------------------------------------

function tmpProject(): string {
-const dir = join(tmpdir(), `gsd-graph-test-${randomBytes(4).toString('hex')}`);
+const dir = join(tmpdir(), `sf-graph-test-${randomBytes(4).toString('hex')}`);
mkdirSync(dir, { recursive: true });
return dir;
}

@@ -35,7 +35,7 @@ function writeFixture(base: string, relPath: string, content: string): void {
}

function makeProjectWithArtifacts(projectDir: string): void {
-writeFixture(projectDir, '.gsd/STATE.md', [
+writeFixture(projectDir, '.sf/STATE.md', [
'# SF State',
'',
'**Active Milestone:** M001: Auth System',

@@ -51,7 +51,7 @@ function makeProjectWithArtifacts(projectDir: string): void {
'Execute T01 in S01.',
].join('\n'));

-writeFixture(projectDir, '.gsd/KNOWLEDGE.md', [
+writeFixture(projectDir, '.sf/KNOWLEDGE.md', [
'# Project Knowledge',
'',
'## Rules',

@@ -74,7 +74,7 @@ function makeProjectWithArtifacts(projectDir: string): void {
'| L001 | CI tests failed | Env diff | Added setup script | testing |',
].join('\n'));

-writeFixture(projectDir, '.gsd/milestones/M001/M001-ROADMAP.md', [
+writeFixture(projectDir, '.sf/milestones/M001/M001-ROADMAP.md', [
'# M001: Auth System',
'',
'## Vision',

@@ -88,7 +88,7 @@ function makeProjectWithArtifacts(projectDir: string): void {
'| S01 | Login flow | low | — | 🔄 | Users can log in |',
].join('\n'));

-writeFixture(projectDir, '.gsd/milestones/M001/slices/S01/S01-PLAN.md', [
+writeFixture(projectDir, '.sf/milestones/M001/slices/S01/S01-PLAN.md', [
'# S01: Login flow',
'',
'## Tasks',

@@ -103,7 +103,7 @@ function makeProjectWithArtifacts(projectDir: string): void {
// ---------------------------------------------------------------------------

function writeLearningsFixture(projectDir: string, milestoneId: string, content: string): void {
-writeFixture(projectDir, `.gsd/milestones/${milestoneId}/${milestoneId}-LEARNINGS.md`, content);
+writeFixture(projectDir, `.sf/milestones/${milestoneId}/${milestoneId}-LEARNINGS.md`, content);
}

const SAMPLE_LEARNINGS = `---

@@ -174,14 +174,14 @@ describe('buildGraph', () => {
it('skips unparseable artifact and does not throw', async () => {
const badProject = tmpProject();
// Write a corrupt/minimal STATE.md that is technically valid but empty
-writeFixture(badProject, '.gsd/STATE.md', 'not valid gsd state at all \0\0\0');
+writeFixture(badProject, '.sf/STATE.md', 'not valid sf state at all \0\0\0');
// Should not throw
const graph = await buildGraph(badProject);
assert.ok(graph.nodes.length >= 0);
rmSync(badProject, { recursive: true, force: true });
});

-it('returns empty graph for project with no .gsd/ directory', async () => {
+it('returns empty graph for project with no .sf/ directory', async () => {
const emptyProject = tmpProject();
const graph = await buildGraph(emptyProject);
assert.ok(graph.nodes.length >= 0); // no throw

@@ -215,7 +215,7 @@ describe('buildGraph — LEARNINGS.md parsing', () => {
beforeEach(() => {
projectDir = tmpProject();
// Create minimal milestone directory so parseMilestoneFiles finds it
-mkdirSync(join(projectDir, '.gsd', 'milestones', 'M001'), { recursive: true });
+mkdirSync(join(projectDir, '.sf', 'milestones', 'M001'), { recursive: true });
writeLearningsFixture(projectDir, 'M001', SAMPLE_LEARNINGS);
});

@@ -284,7 +284,7 @@ describe('buildGraph — LEARNINGS.md parsing', () => {

it('skips LEARNINGS.md gracefully when file is malformed', async () => {
const badProject = tmpProject();
-mkdirSync(join(badProject, '.gsd', 'milestones', 'M002'), { recursive: true });
+mkdirSync(join(badProject, '.sf', 'milestones', 'M002'), { recursive: true });
writeLearningsFixture(badProject, 'M002', '\0\0\0 not valid yaml or markdown \0\0\0');
// Must not throw
const graph = await buildGraph(badProject);

@@ -295,7 +295,7 @@ describe('buildGraph — LEARNINGS.md parsing', () => {

it('produces no learning nodes when all sections are empty', async () => {
const emptyProject = tmpProject();
-mkdirSync(join(emptyProject, '.gsd', 'milestones', 'M003'), { recursive: true });
+mkdirSync(join(emptyProject, '.sf', 'milestones', 'M003'), { recursive: true });
writeLearningsFixture(emptyProject, 'M003', `---
phase: "M003"
phase_name: "Empty"

@@ -332,7 +332,7 @@ missing_artifacts: []

it('does not crash when LEARNINGS.md is missing entirely', async () => {
const noLearningsProject = tmpProject();
-mkdirSync(join(noLearningsProject, '.gsd', 'milestones', 'M004'), { recursive: true });
+mkdirSync(join(noLearningsProject, '.sf', 'milestones', 'M004'), { recursive: true });
// No LEARNINGS.md file written
const graph = await buildGraph(noLearningsProject);
assert.ok(graph.nodes.length >= 0);

@@ -356,22 +356,22 @@ describe('writeGraph', () => {

after(() => rmSync(projectDir, { recursive: true, force: true }));

-it('creates graph.json in .gsd/graphs/ after writeGraph()', async () => {
-const gsdRoot = join(projectDir, '.gsd');
+it('creates graph.json in .sf/graphs/ after writeGraph()', async () => {
+const gsdRoot = join(projectDir, '.sf');
await writeGraph(gsdRoot, graph);
const graphPath = join(gsdRoot, 'graphs', 'graph.json');
assert.ok(existsSync(graphPath), `Expected ${graphPath} to exist`);
});

it('write is atomic — no temp file remains after writeGraph()', async () => {
-const gsdRoot = join(projectDir, '.gsd');
+const gsdRoot = join(projectDir, '.sf');
await writeGraph(gsdRoot, graph);
const tmpPath = join(gsdRoot, 'graphs', 'graph.tmp.json');
assert.ok(!existsSync(tmpPath), 'Temp file should not exist after successful write');
});

it('written graph.json is valid JSON with nodes and edges', async () => {
-const gsdRoot = join(projectDir, '.gsd');
+const gsdRoot = join(projectDir, '.sf');
await writeGraph(gsdRoot, graph);
const raw = readFileSync(join(gsdRoot, 'graphs', 'graph.json'), 'utf-8');
const parsed = JSON.parse(raw) as KnowledgeGraph;

@@ -401,7 +401,7 @@ describe('graphStatus', () => {

it('returns { exists: true, nodeCount, edgeCount, ageHours } when graph exists', async () => {
makeProjectWithArtifacts(projectDir);
-const gsdRoot = join(projectDir, '.gsd');
+const gsdRoot = join(projectDir, '.sf');
const graph = await buildGraph(projectDir);
await writeGraph(gsdRoot, graph);


@@ -415,7 +415,7 @@ describe('graphStatus', () => {

it('stale = false for a freshly built graph', async () => {
makeProjectWithArtifacts(projectDir);
-const gsdRoot = join(projectDir, '.gsd');
+const gsdRoot = join(projectDir, '.sf');
const graph = await buildGraph(projectDir);
await writeGraph(gsdRoot, graph);


@@ -425,7 +425,7 @@ describe('graphStatus', () => {

it('stale = true for a graph older than 24h (builtAt backdated)', async () => {
makeProjectWithArtifacts(projectDir);
-const gsdRoot = join(projectDir, '.gsd');
+const gsdRoot = join(projectDir, '.sf');
mkdirSync(join(gsdRoot, 'graphs'), { recursive: true });

// Write a graph with a builtAt 25 hours ago

@@ -456,7 +456,7 @@ describe('graphQuery', () => {
before(async () => {
projectDir = tmpProject();
makeProjectWithArtifacts(projectDir);
-const gsdRoot = join(projectDir, '.gsd');
+const gsdRoot = join(projectDir, '.sf');
const graph = await buildGraph(projectDir);
await writeGraph(gsdRoot, graph);
});

@@ -486,7 +486,7 @@ describe('graphQuery', () => {
});

it('budget trims AMBIGUOUS edges first', async () => {
-const gsdRoot = join(projectDir, '.gsd');
+const gsdRoot = join(projectDir, '.sf');
// Write a graph with mixed confidence edges
const mixedGraph: KnowledgeGraph = {
builtAt: new Date().toISOString(),

@@ -523,7 +523,7 @@ describe('graphDiff', () => {
beforeEach(async () => {
projectDir = tmpProject();
makeProjectWithArtifacts(projectDir);
-const gsdRoot = join(projectDir, '.gsd');
+const gsdRoot = join(projectDir, '.sf');
const graph = await buildGraph(projectDir);
await writeGraph(gsdRoot, graph);
});

@@ -531,7 +531,7 @@ describe('graphDiff', () => {
afterEach(() => rmSync(projectDir, { recursive: true, force: true }));

it('returns empty diff when comparing graph to itself (snapshot = current)', async () => {
-const gsdRoot = join(projectDir, '.gsd');
+const gsdRoot = join(projectDir, '.sf');
await writeSnapshot(gsdRoot);
const diff = await graphDiff(projectDir);
assert.ok(Array.isArray(diff.nodes.added));

@@ -542,7 +542,7 @@ describe('graphDiff', () => {
});

it('returns added nodes when a new node appears after snapshot', async () => {
-const gsdRoot = join(projectDir, '.gsd');
+const gsdRoot = join(projectDir, '.sf');
// Take snapshot of the original graph
await writeSnapshot(gsdRoot);


@@ -561,7 +561,7 @@ describe('graphDiff', () => {
});

it('returns removed nodes when a node disappears after snapshot', async () => {
-const gsdRoot = join(projectDir, '.gsd');
+const gsdRoot = join(projectDir, '.sf');
// Create snapshot with a node that won't exist in current graph
const snapshotGraph: KnowledgeGraph = {
builtAt: new Date().toISOString(),

@@ -592,7 +592,7 @@ describe('graphDiff', () => {
});

it('writeSnapshot creates .last-build-snapshot.json with snapshotAt', async () => {
-const gsdRoot = join(projectDir, '.gsd');
+const gsdRoot = join(projectDir, '.sf');
await writeSnapshot(gsdRoot);
const snapshotPath = join(gsdRoot, 'graphs', '.last-build-snapshot.json');
assert.ok(existsSync(snapshotPath));
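
The graphStatus tests above encode the staleness rule: a graph whose `builtAt` timestamp is older than 24 hours reports stale. A minimal sketch of that check, assuming only the `builtAt` ISO string seen in the fixtures:

```ts
const STALE_AFTER_MS = 24 * 60 * 60 * 1000; // the tests' 24h boundary

function isStale(builtAt: string, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - new Date(builtAt).getTime();
  return ageMs > STALE_AFTER_MS;
}

// Backdating builtAt by 25 hours, as the test does, flips the flag:
const backdated = new Date(Date.now() - 25 * 60 * 60 * 1000).toISOString();
isStale(backdated);                // true
isStale(new Date().toISOString()); // false
```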

@@ -4,7 +4,7 @@
/**
* Knowledge Graph for SF projects.
*
-* Parses .gsd/ artifacts (STATE.md, milestone ROADMAPs, slice PLANs,
+* Parses .sf/ artifacts (STATE.md, milestone ROADMAPs, slice PLANs,
* KNOWLEDGE.md) into a graph of nodes and edges. Parse errors in any
* single artifact are caught and never propagate — the artifact is skipped
* and the rest of the graph is returned.

@@ -537,7 +537,7 @@ function parseLearningsSection(
// ---------------------------------------------------------------------------

/**
-* Build a KnowledgeGraph by parsing all .gsd/ artifacts.
+* Build a KnowledgeGraph by parsing all .sf/ artifacts.
*
* Parse errors in any single artifact are caught — the artifact is skipped
* and never causes buildGraph() to throw.

@@ -590,7 +590,7 @@ export async function buildGraph(projectDir: string): Promise<KnowledgeGraph> {
// ---------------------------------------------------------------------------

/**
-* Write the graph to .gsd/graphs/graph.json atomically.
+* Write the graph to .sf/graphs/graph.json atomically.
*
* Writes to graph.tmp.json first, then renames to graph.json.
* Creates the graphs/ directory if it does not exist.
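
The writeGraph docstring above describes the temp-file-then-rename pattern. A standalone sketch of the same idea (not the real implementation, which also carries the KnowledgeGraph type):

```ts
import { mkdirSync, renameSync, writeFileSync } from "node:fs";
import { join } from "node:path";

function writeJsonAtomically(graphsDir: string, data: unknown): void {
  mkdirSync(graphsDir, { recursive: true }); // create graphs/ if missing
  const tmpPath = join(graphsDir, "graph.tmp.json");
  const finalPath = join(graphsDir, "graph.json");
  writeFileSync(tmpPath, JSON.stringify(data, null, 2));
  // rename is atomic on the same filesystem, so a reader sees either the old
  // graph.json or the new one — never a half-written file, and no stray
  // graph.tmp.json survives a successful write.
  renameSync(tmpPath, finalPath);
}
```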

@@ -90,8 +90,8 @@ function parseKnowledgeMarkdown(content: string): KnowledgeEntry[] {
// ---------------------------------------------------------------------------

export function readKnowledge(projectDir: string): KnowledgeResult {
-const gsd = resolveGsdRoot(projectDir);
-const knowledgePath = resolveRootFile(gsd, 'KNOWLEDGE.md');
+const sf = resolveGsdRoot(projectDir);
+const knowledgePath = resolveRootFile(sf, 'KNOWLEDGE.md');

if (!existsSync(knowledgePath)) {
return { entries: [], counts: { rules: 0, patterns: 0, lessons: 0 } };

@@ -72,10 +72,10 @@ function parseMetricsJson(content: string): MetricsUnit[] {
// ---------------------------------------------------------------------------

export function readHistory(projectDir: string, limit?: number): HistoryResult {
-const gsd = resolveGsdRoot(projectDir);
+const sf = resolveGsdRoot(projectDir);

// metrics.json (primary)
-const metricsPath = resolveRootFile(gsd, 'metrics.json');
+const metricsPath = resolveRootFile(sf, 'metrics.json');
let units: MetricsUnit[] = [];

if (existsSync(metricsPath)) {

@@ -1,4 +1,4 @@
-// SF MCP Server — .gsd/ directory resolution
+// SF MCP Server — .sf/ directory resolution
// Copyright (c) 2026 Jeremy McSpadden <jeremy@fluxlabs.net>

import { existsSync, statSync, readdirSync } from 'node:fs';

@@ -6,19 +6,19 @@ import { join, resolve, dirname, basename } from 'node:path';
import { execFileSync } from 'node:child_process';

/**
-* Resolve the .gsd/ root directory for a project.
+* Resolve the .sf/ root directory for a project.
*
* Probes in order:
-* 1. projectDir/.gsd (fast path)
-* 2. git repo root/.gsd
+* 1. projectDir/.sf (fast path)
+* 2. git repo root/.sf
* 3. Walk up from projectDir
-* 4. Fallback: projectDir/.gsd (even if missing — for init)
+* 4. Fallback: projectDir/.sf (even if missing — for init)
*/
export function resolveGsdRoot(projectDir: string): string {
const resolved = resolve(projectDir);

-// Fast path: .gsd/ in the given directory
-const direct = join(resolved, '.gsd');
+// Fast path: .sf/ in the given directory
+const direct = join(resolved, '.sf');
if (existsSync(direct) && statSync(direct).isDirectory()) {
return direct;
}

@@ -30,7 +30,7 @@ export function resolveGsdRoot(projectDir: string): string {
encoding: 'utf-8',
stdio: ['pipe', 'pipe', 'pipe'],
}).trim();
-const gitGsd = join(gitRoot, '.gsd');
+const gitGsd = join(gitRoot, '.sf');
if (existsSync(gitGsd) && statSync(gitGsd).isDirectory()) {
return gitGsd;
}

@@ -41,7 +41,7 @@ export function resolveGsdRoot(projectDir: string): string {
// Walk up from projectDir
let dir = resolved;
while (dir !== dirname(dir)) {
-const candidate = join(dir, '.gsd');
+const candidate = join(dir, '.sf');
if (existsSync(candidate) && statSync(candidate).isDirectory()) {
return candidate;
}

@@ -52,7 +52,7 @@ export function resolveGsdRoot(projectDir: string): string {
return direct;
}

-/** Resolve path to a .gsd/ root file (STATE.md, KNOWLEDGE.md, etc.) */
+/** Resolve path to a .sf/ root file (STATE.md, KNOWLEDGE.md, etc.) */
export function resolveRootFile(gsdRoot: string, name: string): string {
return join(gsdRoot, name);
}
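
The probe order documented in resolveGsdRoot above is easy to misread. Condensed into one function — with the git-root probe omitted for brevity — it looks roughly like this:

```ts
// Sketch only; the real implementation is in the hunks above and also
// consults `git rev-parse --show-toplevel` between the fast path and the walk.
import { existsSync, statSync } from "node:fs";
import { dirname, join, resolve } from "node:path";

function findDotSf(projectDir: string): string {
  const resolved = resolve(projectDir);
  let dir = resolved;
  while (dir !== dirname(dir)) {
    const candidate = join(dir, ".sf");
    if (existsSync(candidate) && statSync(candidate).isDirectory()) {
      return candidate; // first .sf/ found walking upward wins
    }
    dir = dirname(dir); // step up one directory level
  }
  return join(resolved, ".sf"); // fallback even if missing — used by init
}
```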

@@ -20,7 +20,7 @@ import { runDoctorLite } from './doctor-lite.js';
// ---------------------------------------------------------------------------

function tmpProject(): string {
-const dir = join(tmpdir(), `gsd-mcp-test-${randomBytes(4).toString('hex')}`);
+const dir = join(tmpdir(), `sf-mcp-test-${randomBytes(4).toString('hex')}`);
mkdirSync(dir, { recursive: true });
return dir;
}

@@ -41,7 +41,7 @@ describe('readProgress', () => {
before(() => {
projectDir = tmpProject();

-writeFixture(projectDir, '.gsd/STATE.md', `# SF State
+writeFixture(projectDir, '.sf/STATE.md', `# SF State

**Active Milestone:** M002: Auth System
**Active Slice:** S01: Login flow

@@ -64,16 +64,16 @@ Execute T02 in S01 — implement token refresh.
`);

// Create filesystem structure
-const m1 = '.gsd/milestones/M001/slices/S01/tasks';
+const m1 = '.sf/milestones/M001/slices/S01/tasks';
writeFixture(projectDir, `${m1}/T01-PLAN.md`, '# T01');
writeFixture(projectDir, `${m1}/T01-SUMMARY.md`, '# T01 done');

-const m2 = '.gsd/milestones/M002/slices/S01/tasks';
+const m2 = '.sf/milestones/M002/slices/S01/tasks';
writeFixture(projectDir, `${m2}/T01-PLAN.md`, '# T01');
writeFixture(projectDir, `${m2}/T01-SUMMARY.md`, '# T01 done');
writeFixture(projectDir, `${m2}/T02-PLAN.md`, '# T02');

-mkdirSync(join(projectDir, '.gsd/milestones/M003'), { recursive: true });
+mkdirSync(join(projectDir, '.sf/milestones/M003'), { recursive: true });
});

after(() => rmSync(projectDir, { recursive: true, force: true }));

@@ -126,7 +126,7 @@ Execute T02 in S01 — implement token refresh.
assert.ok(result.nextAction.includes('T02'));
});

-it('returns defaults for missing .gsd/', () => {
+it('returns defaults for missing .sf/', () => {
const empty = tmpProject();
const result = readProgress(empty);
assert.equal(result.phase, 'unknown');

@@ -145,8 +145,8 @@ describe('readRoadmap', () => {
before(() => {
projectDir = tmpProject();

-writeFixture(projectDir, '.gsd/milestones/M001/M001-CONTEXT.md', '# M001: Core Setup\n');
-writeFixture(projectDir, '.gsd/milestones/M001/M001-ROADMAP.md', `# M001: Core Setup
+writeFixture(projectDir, '.sf/milestones/M001/M001-CONTEXT.md', '# M001: Core Setup\n');
+writeFixture(projectDir, '.sf/milestones/M001/M001-ROADMAP.md', `# M001: Core Setup

## Vision

@@ -160,27 +160,27 @@ Build the foundation for the project.
| S02 | API endpoints | medium | S01 | 🟫 | REST API live |
`);

-writeFixture(projectDir, '.gsd/milestones/M001/slices/S01/S01-PLAN.md', `# S01: Database schema
+writeFixture(projectDir, '.sf/milestones/M001/slices/S01/S01-PLAN.md', `# S01: Database schema

## Tasks

- [x] **T01: Create migrations** — Set up schema
- [x] **T02: Seed data** — Initial seed
`);
-writeFixture(projectDir, '.gsd/milestones/M001/slices/S01/tasks/T01-PLAN.md', '# T01');
-writeFixture(projectDir, '.gsd/milestones/M001/slices/S01/tasks/T01-SUMMARY.md', '# T01 done');
-writeFixture(projectDir, '.gsd/milestones/M001/slices/S01/tasks/T02-PLAN.md', '# T02');
-writeFixture(projectDir, '.gsd/milestones/M001/slices/S01/tasks/T02-SUMMARY.md', '# T02 done');
+writeFixture(projectDir, '.sf/milestones/M001/slices/S01/tasks/T01-PLAN.md', '# T01');
+writeFixture(projectDir, '.sf/milestones/M001/slices/S01/tasks/T01-SUMMARY.md', '# T01 done');
+writeFixture(projectDir, '.sf/milestones/M001/slices/S01/tasks/T02-PLAN.md', '# T02');
+writeFixture(projectDir, '.sf/milestones/M001/slices/S01/tasks/T02-SUMMARY.md', '# T02 done');

-writeFixture(projectDir, '.gsd/milestones/M001/slices/S02/S02-PLAN.md', `# S02: API endpoints
+writeFixture(projectDir, '.sf/milestones/M001/slices/S02/S02-PLAN.md', `# S02: API endpoints

## Tasks

- [ ] **T01: Auth routes** — Implement auth
- [ ] **T02: User routes** — CRUD users
`);
-writeFixture(projectDir, '.gsd/milestones/M001/slices/S02/tasks/T01-PLAN.md', '# T01');
-writeFixture(projectDir, '.gsd/milestones/M001/slices/S02/tasks/T02-PLAN.md', '# T02');
+writeFixture(projectDir, '.sf/milestones/M001/slices/S02/tasks/T01-PLAN.md', '# T01');
+writeFixture(projectDir, '.sf/milestones/M001/slices/S02/tasks/T02-PLAN.md', '# T02');
});

after(() => rmSync(projectDir, { recursive: true, force: true }));

@@ -235,7 +235,7 @@ describe('readHistory', () => {

before(() => {
projectDir = tmpProject();
-writeFixture(projectDir, '.gsd/metrics.json', JSON.stringify({
+writeFixture(projectDir, '.sf/metrics.json', JSON.stringify({
version: 1,
projectStartedAt: 1700000000000,
units: [

@@ -288,7 +288,7 @@ describe('readHistory', () => {

it('returns empty for missing metrics', () => {
const empty = tmpProject();
-mkdirSync(join(empty, '.gsd'), { recursive: true });
+mkdirSync(join(empty, '.sf'), { recursive: true });
const result = readHistory(empty);
assert.equal(result.entries.length, 0);
assert.equal(result.totals.units, 0);

@@ -305,7 +305,7 @@ describe('readCaptures', () => {

before(() => {
projectDir = tmpProject();
-writeFixture(projectDir, '.gsd/CAPTURES.md', `# Captures
+writeFixture(projectDir, '.sf/CAPTURES.md', `# Captures

### CAP-aaa11111


@@ -365,7 +365,7 @@ describe('readCaptures', () => {

it('returns empty for missing CAPTURES.md', () => {
const empty = tmpProject();
-mkdirSync(join(empty, '.gsd'), { recursive: true });
+mkdirSync(join(empty, '.sf'), { recursive: true });
const result = readCaptures(empty);
assert.equal(result.captures.length, 0);
rmSync(empty, { recursive: true, force: true });

@@ -381,7 +381,7 @@ describe('readKnowledge', () => {

before(() => {
projectDir = tmpProject();
-writeFixture(projectDir, '.gsd/KNOWLEDGE.md', `# Project Knowledge
+writeFixture(projectDir, '.sf/KNOWLEDGE.md', `# Project Knowledge

## Rules


@@ -429,7 +429,7 @@ describe('readKnowledge', () => {

it('returns empty for missing KNOWLEDGE.md', () => {
const empty = tmpProject();
-mkdirSync(join(empty, '.gsd'), { recursive: true });
+mkdirSync(join(empty, '.sf'), { recursive: true });
const result = readKnowledge(empty);
assert.equal(result.entries.length, 0);
rmSync(empty, { recursive: true, force: true });

@@ -447,24 +447,24 @@ describe('runDoctorLite', () => {
projectDir = tmpProject();

// M001: complete milestone (has summary)
-writeFixture(projectDir, '.gsd/PROJECT.md', '# Test Project');
-writeFixture(projectDir, '.gsd/STATE.md', '# SF State');
-writeFixture(projectDir, '.gsd/milestones/M001/M001-CONTEXT.md', '# M001');
-writeFixture(projectDir, '.gsd/milestones/M001/M001-ROADMAP.md', '# Roadmap');
-writeFixture(projectDir, '.gsd/milestones/M001/M001-SUMMARY.md', '# Done');
-writeFixture(projectDir, '.gsd/milestones/M001/slices/S01/S01-PLAN.md', '# Plan');
-writeFixture(projectDir, '.gsd/milestones/M001/slices/S01/tasks/T01-PLAN.md', '# T01');
-writeFixture(projectDir, '.gsd/milestones/M001/slices/S01/tasks/T01-SUMMARY.md', '# T01 done');
+writeFixture(projectDir, '.sf/PROJECT.md', '# Test Project');
+writeFixture(projectDir, '.sf/STATE.md', '# SF State');
+writeFixture(projectDir, '.sf/milestones/M001/M001-CONTEXT.md', '# M001');
+writeFixture(projectDir, '.sf/milestones/M001/M001-ROADMAP.md', '# Roadmap');
+writeFixture(projectDir, '.sf/milestones/M001/M001-SUMMARY.md', '# Done');
+writeFixture(projectDir, '.sf/milestones/M001/slices/S01/S01-PLAN.md', '# Plan');
+writeFixture(projectDir, '.sf/milestones/M001/slices/S01/tasks/T01-PLAN.md', '# T01');
+writeFixture(projectDir, '.sf/milestones/M001/slices/S01/tasks/T01-SUMMARY.md', '# T01 done');

// M002: incomplete — has all tasks done but no SUMMARY
-writeFixture(projectDir, '.gsd/milestones/M002/M002-CONTEXT.md', '# M002');
-writeFixture(projectDir, '.gsd/milestones/M002/M002-ROADMAP.md', '# Roadmap');
-writeFixture(projectDir, '.gsd/milestones/M002/slices/S01/S01-PLAN.md', '# Plan');
-writeFixture(projectDir, '.gsd/milestones/M002/slices/S01/tasks/T01-PLAN.md', '# T01');
-writeFixture(projectDir, '.gsd/milestones/M002/slices/S01/tasks/T01-SUMMARY.md', '# T01 done');
+writeFixture(projectDir, '.sf/milestones/M002/M002-CONTEXT.md', '# M002');
+writeFixture(projectDir, '.sf/milestones/M002/M002-ROADMAP.md', '# Roadmap');
+writeFixture(projectDir, '.sf/milestones/M002/slices/S01/S01-PLAN.md', '# Plan');
+writeFixture(projectDir, '.sf/milestones/M002/slices/S01/tasks/T01-PLAN.md', '# T01');
+writeFixture(projectDir, '.sf/milestones/M002/slices/S01/tasks/T01-SUMMARY.md', '# T01 done');

// M003: empty — no context, no slices
-mkdirSync(join(projectDir, '.gsd/milestones/M003'), { recursive: true });
+mkdirSync(join(projectDir, '.sf/milestones/M003'), { recursive: true });
});

after(() => rmSync(projectDir, { recursive: true, force: true }));

@@ -492,14 +492,14 @@ describe('runDoctorLite', () => {

it('returns ok:true for healthy project', () => {
const healthy = tmpProject();
-writeFixture(healthy, '.gsd/PROJECT.md', '# Project');
-writeFixture(healthy, '.gsd/STATE.md', '# State');
+writeFixture(healthy, '.sf/PROJECT.md', '# Project');
+writeFixture(healthy, '.sf/STATE.md', '# State');
const result = runDoctorLite(healthy);
assert.equal(result.ok, true);
rmSync(healthy, { recursive: true, force: true });
});

-it('handles missing .gsd/ gracefully', () => {
+it('handles missing .sf/ gracefully', () => {
const empty = tmpProject();
const result = runDoctorLite(empty);
assert.equal(result.ok, true);

@@ -182,8 +182,8 @@ function readVision(gsdRoot: string, mid: string): string {
// ---------------------------------------------------------------------------

export function readRoadmap(projectDir: string, filterMilestoneId?: string): RoadmapResult {
-const gsd = resolveGsdRoot(projectDir);
-let milestoneIds = findMilestoneIds(gsd);
+const sf = resolveGsdRoot(projectDir);
+let milestoneIds = findMilestoneIds(sf);

if (filterMilestoneId) {
milestoneIds = milestoneIds.filter((id) => id === filterMilestoneId);

@@ -192,19 +192,19 @@ export function readRoadmap(projectDir: string, filterMilestoneId?: string): Roa
const milestones: MilestoneInfo[] = [];

for (const mid of milestoneIds) {
-const title = readMilestoneTitle(gsd, mid);
-const vision = readVision(gsd, mid);
+const title = readMilestoneTitle(sf, mid);
+const vision = readVision(sf, mid);

-const summaryPath = resolveMilestoneFile(gsd, mid, 'SUMMARY');
+const summaryPath = resolveMilestoneFile(sf, mid, 'SUMMARY');
const hasSummary = summaryPath !== null && existsSync(summaryPath);

-const roadmapPath = resolveMilestoneFile(gsd, mid, 'ROADMAP');
+const roadmapPath = resolveMilestoneFile(sf, mid, 'ROADMAP');
let roadmapSlices: ReturnType<typeof parseRoadmapTable> = [];
if (roadmapPath && existsSync(roadmapPath)) {
roadmapSlices = parseRoadmapTable(readFileSync(roadmapPath, 'utf-8'));
}

-const fsSliceIds = findSliceIds(gsd, mid);
+const fsSliceIds = findSliceIds(sf, mid);
const sliceIdSet = new Set([
...roadmapSlices.map((s) => s.id),
...fsSliceIds,

@@ -213,9 +213,9 @@ export function readRoadmap(projectDir: string, filterMilestoneId?: string): Roa
const slices: SliceInfo[] = [];
for (const sid of Array.from(sliceIdSet).sort()) {
const roadmapEntry = roadmapSlices.find((s) => s.id === sid);
-const taskFiles = findTaskFiles(gsd, mid, sid);
+const taskFiles = findTaskFiles(sf, mid, sid);

-const planPath = resolveSliceFile(gsd, mid, sid, 'PLAN');
+const planPath = resolveSliceFile(sf, mid, sid, 'PLAN');
let planTasks: ReturnType<typeof parseSlicePlanTasks> = [];
if (planPath && existsSync(planPath)) {
planTasks = parseSlicePlanTasks(readFileSync(planPath, 'utf-8'));
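
The readRoadmap hunks above merge two sources of slice ids: rows parsed from the ROADMAP table and slice directories found on disk. The union drives iteration so a slice declared in only one place is still reported. A small sketch of that merge, with `RoadmapSlice` as an assumed shape:

```ts
interface RoadmapSlice { id: string; title: string }

function mergeSliceIds(roadmapSlices: RoadmapSlice[], fsSliceIds: string[]): string[] {
  // Set de-duplicates ids present in both sources; sorting keeps S01, S02…
  // ordering stable regardless of which source contributed the id.
  return Array.from(new Set([...roadmapSlices.map((s) => s.id), ...fsSliceIds])).sort();
}
```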

@@ -158,8 +158,8 @@ function countSlicesAndTasks(gsdRoot: string, milestoneIds: string[]): {
// ---------------------------------------------------------------------------

export function readProgress(projectDir: string): ProgressResult {
-const gsd = resolveGsdRoot(projectDir);
-const statePath = resolveRootFile(gsd, 'STATE.md');
+const sf = resolveGsdRoot(projectDir);
+const statePath = resolveRootFile(sf, 'STATE.md');

// Defaults
const result: ProgressResult = {

@@ -177,10 +177,10 @@ export function readProgress(projectDir: string): ProgressResult {

if (!existsSync(statePath)) {
// No STATE.md — derive from filesystem only
-const milestoneIds = findMilestoneIds(gsd);
+const milestoneIds = findMilestoneIds(sf);
result.milestones.total = milestoneIds.length;
result.milestones.pending = milestoneIds.length;
-const counts = countSlicesAndTasks(gsd, milestoneIds);
+const counts = countSlicesAndTasks(sf, milestoneIds);
result.slices = counts.slices;
result.tasks = counts.tasks;
return result;

@@ -208,14 +208,14 @@ export function readProgress(projectDir: string): ProgressResult {
result.milestones.done - result.milestones.active - result.milestones.parked;
} else {
// Fallback: count directories
-const milestoneIds = findMilestoneIds(gsd);
+const milestoneIds = findMilestoneIds(sf);
result.milestones.total = milestoneIds.length;
result.milestones.pending = milestoneIds.length;
}

// Slice/task counts from filesystem
-const milestoneIds = findMilestoneIds(gsd);
-const counts = countSlicesAndTasks(gsd, milestoneIds);
+const milestoneIds = findMilestoneIds(sf);
+const counts = countSlicesAndTasks(sf, milestoneIds);
result.slices = counts.slices;
result.tasks = counts.tasks;

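
The readProgress hunks above include a fallback worth calling out: with no STATE.md, milestone counts are derived purely from directory listings and every milestone is reported as pending. A sketch of that fallback, where the `/^M\d+$/` filter is an assumption standing in for the real directory scanner:

```ts
import { existsSync, readdirSync } from "node:fs";
import { join } from "node:path";

function countMilestonesFromDisk(sfRoot: string): { total: number; pending: number } {
  const dir = join(sfRoot, "milestones");
  if (!existsSync(dir)) return { total: 0, pending: 0 };
  const ids = readdirSync(dir).filter((name) => /^M\d+$/.test(name));
  // Without STATE.md there is no status information, so everything is pending.
  return { total: ids.length, pending: ids.length };
}
```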

@@ -31,7 +31,7 @@ import { applySecrets, checkExistingEnvKeys, detectDestination } from './env-wri
// ---------------------------------------------------------------------------

const MCP_PKG = '@modelcontextprotocol/sdk';
-const SERVER_NAME = 'gsd';
+const SERVER_NAME = 'sf';
const SERVER_VERSION = '2.53.0';

// ---------------------------------------------------------------------------

@@ -82,7 +82,7 @@ function normalizeQuery(query: string | undefined): QueryCategory {
}

async function readProjectState(projectDir: string, query: string | undefined): Promise<Record<string, unknown>> {
-const gsdDir = join(resolve(projectDir), '.gsd');
+const gsdDir = join(resolve(projectDir), '.sf');
const category = normalizeQuery(query);
const wanted = new Set<ProjectStateField>(QUERY_FIELDS[category]);


@@ -367,7 +367,7 @@ export async function createMcpServer(sessionManager: SessionManager): Promise<{
'Start a SF auto-mode session for a project directory. Returns a sessionId for tracking.',
{
projectDir: z.string().describe('Absolute path to the project directory'),
-command: z.string().optional().describe('Command to send (default: "/gsd auto")'),
+command: z.string().optional().describe('Command to send (default: "/sf auto")'),
model: z.string().optional().describe('Model ID override'),
bare: z.boolean().optional().describe('Run in bare mode (skip user config)'),
},

@@ -689,7 +689,7 @@ export async function createMcpServer(sessionManager: SessionManager): Promise<{
// -----------------------------------------------------------------------
server.tool(
'gsd_progress',
-'Get structured project progress: active milestone/slice/task, phase, completion counts, blockers, and next action. No session required — reads directly from .gsd/ on disk.',
+'Get structured project progress: active milestone/slice/task, phase, completion counts, blockers, and next action. No session required — reads directly from .sf/ on disk.',
{
projectDir: z.string().describe('Absolute path to the project directory'),
},

@@ -748,7 +748,7 @@ export async function createMcpServer(sessionManager: SessionManager): Promise<{
// -----------------------------------------------------------------------
server.tool(
'gsd_doctor',
-'Run a lightweight structural health check on the .gsd/ directory. Checks for missing files, status inconsistencies, and orphaned state. No session required.',
+'Run a lightweight structural health check on the .sf/ directory. Checks for missing files, status inconsistencies, and orphaned state. No session required.',
{
projectDir: z.string().describe('Absolute path to the project directory'),
scope: z.string().optional().describe('Limit checks to a specific milestone (e.g. "M001")'),

@@ -806,7 +806,7 @@ export async function createMcpServer(sessionManager: SessionManager): Promise<{
// gsd_graph — knowledge graph for SF projects
//
// Modes:
-// build   Parse .gsd/ artifacts and write graph.json atomically.
+// build   Parse .sf/ artifacts and write graph.json atomically.
// query   Search the graph for nodes matching a term (BFS, budget-trimmed).
// status  Check whether graph.json exists and whether it is stale (>24h).
// diff    Compare graph.json with the last build snapshot.

@@ -817,8 +817,8 @@ export async function createMcpServer(sessionManager: SessionManager): Promise<{
'Manage the SF project knowledge graph. No session required.',
'',
'Modes:',
-'  build   Parse .gsd/ artifacts (STATE.md, milestone ROADMAPs, slice PLANs,',
-'          KNOWLEDGE.md) and write .gsd/graphs/graph.json atomically.',
+'  build   Parse .sf/ artifacts (STATE.md, milestone ROADMAPs, slice PLANs,',
+'          KNOWLEDGE.md) and write .sf/graphs/graph.json atomically.',
'  query   Search graph nodes by term (BFS from seed matches, budget-trimmed).',
'          Returns matching nodes and reachable edges within the token budget.',
'  status  Show whether graph.json exists, its age, node/edge counts, and',
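
The server.ts hunks above all follow the same registration shape: a tool name, a human-readable description, a zod input schema, and a handler returning MCP text content. A hedged sketch of that shape — the handler body is illustrative, not the real gsd_progress implementation:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

declare const server: McpServer; // provided by createMcpServer in the real code

server.tool(
  "gsd_progress",
  "Get structured project progress. Reads directly from .sf/ on disk.",
  { projectDir: z.string().describe("Absolute path to the project directory") },
  async ({ projectDir }) => {
    // Stand-in payload; the real handler calls readProgress(projectDir).
    const progress = { projectDir, phase: "unknown" };
    return { content: [{ type: "text" as const, text: JSON.stringify(progress) }] };
  },
);
```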

@@ -60,7 +60,7 @@ export class SessionManager {
*
* Rejects if a session already exists for this projectDir.
* Creates an RpcClient, starts the process, performs the v2 init handshake,
-* wires event tracking, and sends '/gsd auto' to begin execution.
+* wires event tracking, and sends '/sf auto' to begin execution.
*/
async startSession(projectDir: string, options: ExecuteOptions = {}): Promise<string> {
if (!projectDir || projectDir.trim() === '') {

@@ -125,7 +125,7 @@ export class SessionManager {
});

// Kick off auto-mode
-const command = options.command ?? '/gsd auto';
+const command = options.command ?? '/sf auto';
await client.prompt(command);

return session.sessionId;

@@ -240,18 +240,18 @@ export class SessionManager {
* Resolve the SF CLI path.
*
* 1. SF_CLI_PATH env var (highest priority)
-* 2. `which gsd` → resolve to the actual dist/cli.js
+* 2. `which sf` → resolve to the actual dist/cli.js
*/
static resolveCLIPath(): string {
// Check env var first
const envPath = process.env['SF_CLI_PATH'];
if (envPath) return resolve(envPath);

-// Fallback: locate `gsd` via which
+// Fallback: locate `sf` via which
try {
-const gsdBin = execSync('which gsd', { encoding: 'utf-8' }).trim();
+const gsdBin = execSync('which sf', { encoding: 'utf-8' }).trim();
if (gsdBin) {
-// gsd bin is typically a symlink to dist/loader.js — return the resolved path
+// sf bin is typically a symlink to dist/loader.js — return the resolved path
return resolve(gsdBin);
}
} catch {

@@ -259,7 +259,7 @@ export class SessionManager {
}

throw new Error(
-'Cannot find SF CLI. Set SF_CLI_PATH environment variable or ensure `gsd` is in PATH.'
+'Cannot find SF CLI. Set SF_CLI_PATH environment variable or ensure `sf` is in PATH.'
);
}

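
Condensed, the resolution order documented in resolveCLIPath above is: SF_CLI_PATH wins, then `which sf`, then a hard error. `execSync` throws when `which` exits non-zero, which is what routes control to the error path. A minimal sketch:

```ts
import { execSync } from "node:child_process";
import { resolve } from "node:path";

function resolveSfCli(): string {
  const envPath = process.env["SF_CLI_PATH"];
  if (envPath) return resolve(envPath); // highest priority

  try {
    const bin = execSync("which sf", { encoding: "utf-8" }).trim();
    if (bin) return resolve(bin); // typically a symlink to dist/loader.js
  } catch {
    // `which` exited non-zero — fall through to the error below.
  }

  throw new Error("Cannot find SF CLI. Set SF_CLI_PATH or ensure `sf` is in PATH.");
}
```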

@@ -8,7 +8,7 @@ import { loadStoredCredentialEnvKeys, resolveAuthPath } from "./tool-credentials

describe("tool credentials", () => {
it("hydrates supported model and tool keys from auth.json", () => {
-const tempRoot = mkdtempSync(join(tmpdir(), "gsd-mcp-auth-"));
+const tempRoot = mkdtempSync(join(tmpdir(), "sf-mcp-auth-"));
const authPath = join(tempRoot, "auth.json");
const env: NodeJS.ProcessEnv = {};


@@ -37,7 +37,7 @@ describe("tool credentials", () => {
});

it("does not overwrite explicit environment variables", () => {
-const tempRoot = mkdtempSync(join(tmpdir(), "gsd-mcp-auth-"));
+const tempRoot = mkdtempSync(join(tmpdir(), "sf-mcp-auth-"));
const authPath = join(tempRoot, "auth.json");
const env: NodeJS.ProcessEnv = {
BRAVE_API_KEY: "already-set",

@@ -59,7 +59,7 @@ describe("tool credentials", () => {
});

it("ignores oauth credentials because they are resolved through auth storage, not env hydration", () => {
-const tempRoot = mkdtempSync(join(tmpdir(), "gsd-mcp-auth-"));
+const tempRoot = mkdtempSync(join(tmpdir(), "sf-mcp-auth-"));
const authPath = join(tempRoot, "auth.json");
const env: NodeJS.ProcessEnv = {};


@@ -79,7 +79,7 @@ describe("tool credentials", () => {
});

it("resolves auth.json from SF_CODING_AGENT_DIR", () => {
-const tempRoot = mkdtempSync(join(tmpdir(), "gsd-mcp-agent-dir-"));
+const tempRoot = mkdtempSync(join(tmpdir(), "sf-mcp-agent-dir-"));
const agentDir = join(tempRoot, "agent");
mkdirSync(agentDir, { recursive: true });


@@ -63,7 +63,7 @@ function getStoredApiKey(data: AuthStorageData, providerId: string): string | un
export function resolveAuthPath(env: NodeJS.ProcessEnv = process.env): string {
const agentDir = env.SF_CODING_AGENT_DIR?.trim();
if (agentDir) return join(expandHome(agentDir), "auth.json");
-return join(homedir(), ".gsd", "agent", "auth.json");
+return join(homedir(), ".sf", "agent", "auth.json");
}

export function loadStoredCredentialEnvKeys(options: {
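
The tool-credentials tests above pin down a non-overwriting hydration rule: keys already present in env win over values loaded from auth.json. A sketch of that rule, where the flat credential map is an assumption for illustration:

```ts
function hydrateEnv(env: NodeJS.ProcessEnv, stored: Record<string, string>): void {
  for (const [key, value] of Object.entries(stored)) {
    // Explicit environment variables always take precedence over auth.json.
    if (env[key] === undefined) {
      env[key] = value;
    }
  }
}

// Usage mirroring the BRAVE_API_KEY test: "already-set" survives hydration.
const env: NodeJS.ProcessEnv = { BRAVE_API_KEY: "already-set" };
hydrateEnv(env, { BRAVE_API_KEY: "from-auth-json" });
// env.BRAVE_API_KEY === "already-set"
```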

@@ -83,7 +83,7 @@ export interface CostAccumulator {
// ---------------------------------------------------------------------------

export interface ExecuteOptions {
-/** Command to send after '/gsd auto' (default: none) */
+/** Command to send after '/sf auto' (default: none) */
command?: string;

/** Model ID override */

@@ -9,8 +9,8 @@ import { _getAdapter, closeDatabase } from "../../../src/resources/extensions/sf
import { registerWorkflowTools, WORKFLOW_TOOL_NAMES } from "./workflow-tools.ts";

function makeTmpBase(): string {
-const base = join(tmpdir(), `gsd-mcp-workflow-${randomUUID()}`);
-mkdirSync(join(base, ".gsd"), { recursive: true });
+const base = join(tmpdir(), `sf-mcp-workflow-${randomUUID()}`);
+mkdirSync(join(base, ".sf"), { recursive: true });
return base;
}


@@ -31,9 +31,9 @@ function writeWriteGateSnapshot(
base: string,
snapshot: { verifiedDepthMilestones?: string[]; activeQueuePhase?: boolean; pendingGateId?: string | null },
): void {
-mkdirSync(join(base, ".gsd", "runtime"), { recursive: true });
+mkdirSync(join(base, ".sf", "runtime"), { recursive: true });
writeFileSync(
-join(base, ".gsd", "runtime", "write-gate-state.json"),
+join(base, ".sf", "runtime", "write-gate-state.json"),
JSON.stringify(
{
verifiedDepthMilestones: snapshot.verifiedDepthMilestones ?? [],

@@ -97,7 +97,7 @@ describe("workflow MCP tools", () => {
assert.match(text, /Saved SUMMARY artifact/);
assert.equal(process.cwd(), originalCwd, "workflow MCP tools should not mutate process.cwd");
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M001", "slices", "S01", "S01-SUMMARY.md")),
+existsSync(join(base, ".sf", "milestones", "M001", "slices", "S01", "S01-SUMMARY.md")),
"summary file should exist on disk",
);
} finally {

@@ -178,9 +178,9 @@ describe("workflow MCP tools", () => {
it("blocks workflow mutation tools while a discussion gate is pending", async () => {
const base = makeTmpBase();
try {
-mkdirSync(join(base, ".gsd", "milestones", "M001", "slices", "S01"), { recursive: true });
+mkdirSync(join(base, ".sf", "milestones", "M001", "slices", "S01"), { recursive: true });
writeFileSync(
-join(base, ".gsd", "milestones", "M001", "slices", "S01", "S01-PLAN.md"),
+join(base, ".sf", "milestones", "M001", "slices", "S01", "S01-PLAN.md"),
"# S01\n\n- [ ] **T01: Demo** `est:5m`\n",
);
writeWriteGateSnapshot(base, { pendingGateId: "depth_verification_M001_confirm" });

@@ -211,9 +211,9 @@ describe("workflow MCP tools", () => {
it("blocks workflow mutation tools during queue mode", async () => {
const base = makeTmpBase();
try {
-mkdirSync(join(base, ".gsd", "milestones", "M001", "slices", "S01"), { recursive: true });
+mkdirSync(join(base, ".sf", "milestones", "M001", "slices", "S01"), { recursive: true });
writeFileSync(
-join(base, ".gsd", "milestones", "M001", "slices", "S01", "S01-PLAN.md"),
+join(base, ".sf", "milestones", "M001", "slices", "S01", "S01-PLAN.md"),
"# S01\n\n- [ ] **T01: Demo** `est:5m`\n",
);
writeWriteGateSnapshot(base, { activeQueuePhase: true });

@@ -244,9 +244,9 @@ describe("workflow MCP tools", () => {
it("gsd_task_complete and gsd_milestone_status work end-to-end", async () => {
const base = makeTmpBase();
try {
-mkdirSync(join(base, ".gsd", "milestones", "M001", "slices", "S01"), { recursive: true });
+mkdirSync(join(base, ".sf", "milestones", "M001", "slices", "S01"), { recursive: true });
writeFileSync(
-join(base, ".gsd", "milestones", "M001", "slices", "S01", "S01-PLAN.md"),
+join(base, ".sf", "milestones", "M001", "slices", "S01", "S01-PLAN.md"),
"# S01\n\n- [ ] **T01: Demo** `est:5m`\n",
);


@@ -269,7 +269,7 @@ describe("workflow MCP tools", () => {

assert.match((taskResult as any).content[0].text as string, /Completed task T01/);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M001", "slices", "S01", "tasks", "T01-SUMMARY.md")),
+existsSync(join(base, ".sf", "milestones", "M001", "slices", "S01", "tasks", "T01-SUMMARY.md")),
"task summary should be written to disk",
);


@@ -289,9 +289,9 @@ describe("workflow MCP tools", () => {
it("gsd_complete_task alias delegates to gsd_task_complete behavior", async () => {
const base = makeTmpBase();
try {
-mkdirSync(join(base, ".gsd", "milestones", "M002", "slices", "S02"), { recursive: true });
+mkdirSync(join(base, ".sf", "milestones", "M002", "slices", "S02"), { recursive: true });
writeFileSync(
-join(base, ".gsd", "milestones", "M002", "slices", "S02", "S02-PLAN.md"),
+join(base, ".sf", "milestones", "M002", "slices", "S02", "S02-PLAN.md"),
"# S02\n\n- [ ] **T02: Demo** `est:5m`\n",
);


@@ -312,7 +312,7 @@ describe("workflow MCP tools", () => {

assert.match((result as any).content[0].text as string, /Completed task T02/);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M002", "slices", "S02", "tasks", "T02-SUMMARY.md")),
+existsSync(join(base, ".sf", "milestones", "M002", "slices", "S02", "tasks", "T02-SUMMARY.md")),
"alias should write task summary to disk",
);
} finally {

@@ -372,11 +372,11 @@ describe("workflow MCP tools", () => {
});
assert.match((sliceResult as any).content[0].text as string, /Planned slice S01/);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M001", "slices", "S01", "S01-PLAN.md")),
+existsSync(join(base, ".sf", "milestones", "M001", "slices", "S01", "S01-PLAN.md")),
"slice plan should exist on disk",
);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M001", "slices", "S01", "tasks", "T01-PLAN.md")),
+existsSync(join(base, ".sf", "milestones", "M001", "slices", "S01", "tasks", "T01-PLAN.md")),
"task plan should exist on disk",
);
} finally {

@@ -406,7 +406,7 @@ describe("workflow MCP tools", () => {
});

assert.match((result as any).content[0].text as string, /Saved requirement R\d+/);
-assert.ok(existsSync(join(base, ".gsd", "REQUIREMENTS.md")), "REQUIREMENTS.md should be written to disk");
+assert.ok(existsSync(join(base, ".sf", "REQUIREMENTS.md")), "REQUIREMENTS.md should be written to disk");
const row = _getAdapter()!
.prepare("SELECT id, class, description FROM requirements WHERE description = ?")
.get("Inline MCP requirement save regression") as Record<string, unknown> | undefined;

@@ -486,7 +486,7 @@ describe("workflow MCP tools", () => {

assert.match((result as any).content[0].text as string, /Planned task T11/);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M010", "slices", "S10", "tasks", "T11-PLAN.md")),
+existsSync(join(base, ".sf", "milestones", "M010", "slices", "S10", "tasks", "T11-PLAN.md")),
"T11 plan should be written after reopening the DB",
);
} finally {

@@ -624,11 +624,11 @@ describe("workflow MCP tools", () => {
});
assert.match((aliasResult as any).content[0].text as string, /Replanned slice S09/);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M099", "slices", "S09", "S09-REPLAN.md")),
+existsSync(join(base, ".sf", "milestones", "M099", "slices", "S09", "S09-REPLAN.md")),
"replan artifact should exist on disk",
);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M099", "slices", "S09", "S09-PLAN.md")),
+existsSync(join(base, ".sf", "milestones", "M099", "slices", "S09", "S09-PLAN.md")),
"updated plan should exist on disk",
);
const removedTask = _getAdapter()!.prepare(

@@ -776,11 +776,11 @@ describe("workflow MCP tools", () => {
});
assert.match((aliasResult as any).content[0].text as string, /Completed slice S04/);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M004", "slices", "S04", "S04-SUMMARY.md")),
+existsSync(join(base, ".sf", "milestones", "M004", "slices", "S04", "S04-SUMMARY.md")),
"alias should write slice summary to disk",
);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M004", "slices", "S04", "S04-UAT.md")),
+existsSync(join(base, ".sf", "milestones", "M004", "slices", "S04", "S04-UAT.md")),
"alias should write slice UAT to disk",
);
} finally {

@@ -887,11 +887,11 @@ describe("workflow MCP tools", () => {
});
assert.match((completionResult as any).content[0].text as string, /Completed milestone M005/);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M005", "M005-VALIDATION.md")),
+existsSync(join(base, ".sf", "milestones", "M005", "M005-VALIDATION.md")),
"validation artifact should exist on disk",
);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M005", "M005-SUMMARY.md")),
+existsSync(join(base, ".sf", "milestones", "M005", "M005-SUMMARY.md")),
"milestone summary should exist on disk",
);
} finally {

@@ -1051,11 +1051,11 @@ describe("workflow MCP tools", () => {
});
assert.match((reassessAliasResult as any).content[0].text as string, /Reassessed roadmap for milestone M006 after S06/);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M006", "slices", "S06", "S06-ASSESSMENT.md")),
+existsSync(join(base, ".sf", "milestones", "M006", "slices", "S06", "S06-ASSESSMENT.md")),
"assessment artifact should exist on disk",
);
assert.ok(
-existsSync(join(base, ".gsd", "milestones", "M006", "M006-ROADMAP.md")),
+existsSync(join(base, ".sf", "milestones", "M006", "M006-ROADMAP.md")),
"roadmap artifact should exist on disk",
);
} finally {
|||
|
|
@@ -657,7 +657,7 @@ async function handleSaveGateResult(

 async function ensureMilestoneDbRow(milestoneId: string): Promise<void> {
   try {
-    const { insertMilestone } = await importLocalModule<any>("../../../src/resources/extensions/sf/gsd-db.js");
+    const { insertMilestone } = await importLocalModule<any>("../../../src/resources/extensions/sf/sf-db.js");
     insertMilestone({ id: milestoneId, status: "queued" });
   } catch {
     // Ignore pre-existing rows or transient DB availability issues.

@@ -1249,7 +1249,7 @@ export function registerWorkflowTools(server: McpToolServer): void {
     const { projectDir, milestoneId, sliceId, reason } = parseWorkflowArgs(skipSliceSchema, args);
     await enforceWorkflowWriteGate("gsd_skip_slice", projectDir, milestoneId);
     await runSerializedWorkflowDbOperation(projectDir, async () => {
-      const { getSlice, updateSliceStatus } = await importLocalModule<any>("../../../src/resources/extensions/sf/gsd-db.js");
+      const { getSlice, updateSliceStatus } = await importLocalModule<any>("../../../src/resources/extensions/sf/sf-db.js");
       const { invalidateStateCache } = await importLocalModule<any>("../../../src/resources/extensions/sf/state.js");
       const { rebuildState } = await importLocalModule<any>("../../../src/resources/extensions/sf/doctor.js");
       const slice = getSlice(milestoneId, sliceId);
@@ -4,7 +4,7 @@ import { readFileSync } from "node:fs";
 import { join } from "node:path";

 /**
- * Regression #4251: `gsd -p --model <provider>/<id> "msg"` must never mutate
+ * Regression #4251: `sf -p --model <provider>/<id> "msg"` must never mutate
  * the persisted defaultProvider/defaultModel in settings.json. The one-shot
  * print invocation used to verify a provider (e.g. Bearer-auth smoke test)
  * was silently overwriting the global default.

@@ -55,7 +55,7 @@ test("AgentSession stores persistModelChanges and defaults it to false (#4251)",
   );
 });

-test("gsd src/cli.ts interactive branch opts into persistence (#4251)", () => {
+test("sf src/cli.ts interactive branch opts into persistence (#4251)", () => {
   const printGuardIdx = gsdCliSource.indexOf("if (isPrintMode)");
   // Interactive createAgentSession call lives after the print-mode branch.
   const interactiveCreateIdx = gsdCliSource.indexOf("createAgentSession({", printGuardIdx + 10);

@@ -107,7 +107,7 @@ test("CreateAgentSessionOptions forwards persistModelChanges to AgentSession (#4
 // assignment, now that the AgentSessionConfig default is false. The assertion
 // moved to the "main.ts sets persistModelChanges = isInteractive" test below.

-test("gsd src/cli.ts print-mode createAgentSession passes persistModelChanges: false (#4251)", () => {
+test("sf src/cli.ts print-mode createAgentSession passes persistModelChanges: false (#4251)", () => {
   const printGuardIdx = gsdCliSource.indexOf("if (isPrintMode)");
   assert.ok(printGuardIdx >= 0, "missing isPrintMode branch in src/cli.ts");
   const createIdx = gsdCliSource.indexOf("createAgentSession({", printGuardIdx);

@@ -119,7 +119,7 @@ test("gsd src/cli.ts print-mode createAgentSession passes persistModelChanges: f
   );
 });

-test("gsd src/cli.ts print-mode --model override calls setModel with persist: false (#4251)", () => {
+test("sf src/cli.ts print-mode --model override calls setModel with persist: false (#4251)", () => {
   const printGuardIdx = gsdCliSource.indexOf("if (isPrintMode)");
   const overrideIdx = gsdCliSource.indexOf("if (cliFlags.model)", printGuardIdx);
   assert.ok(overrideIdx >= 0, "missing --model override block in print-mode branch");

@@ -130,7 +130,7 @@ test("gsd src/cli.ts print-mode --model override calls setModel with persist: fa
   );
 });

-test("gsd src/cli.ts print-mode skips validateConfiguredModel when --model is set (#4251)", () => {
+test("sf src/cli.ts print-mode skips validateConfiguredModel when --model is set (#4251)", () => {
   const printGuardIdx = gsdCliSource.indexOf("if (isPrintMode)");
   const validateIdx = gsdCliSource.indexOf("validateConfiguredModel(", printGuardIdx);
   assert.ok(validateIdx >= 0, "missing validateConfiguredModel call in print-mode branch");
@@ -171,7 +171,7 @@ export interface AgentSessionConfig {
   isClaudeCodeReady?: () => boolean;
   /** When false, model changes (via setModel/cycleModel/extension setModel) do NOT
    * write defaultProvider/defaultModel back to settings.json. Used by print/one-shot
-   * mode so that `gsd -p --model X "msg"` never mutates the persisted default (#4251). */
+   * mode so that `sf -p --model X "msg"` never mutates the persisted default (#4251). */
   persistModelChanges?: boolean;
 }

@@ -307,7 +307,7 @@ export class AgentSession {
   // Defaults to false — callers must explicitly opt into persistence. This is the
   // safe default for SDK consumers: a third party building on @sf-run/pi-coding-agent
   // should not silently mutate the user's global settings just by switching models.
-  // Interactive CLI entry points (gsd wrapper's interactive branch and pi main's
+  // Interactive CLI entry points (sf wrapper's interactive branch and pi main's
   // isInteractive branch) explicitly set this to true so user model picks still
   // persist. One-shot/print/rpc/mcp leave it false. (#4251)
   private _persistModelChanges: boolean;
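The gate these comments describe is simple to picture in isolation. A minimal sketch of a persistence-aware setModel consulting the flag; `writeSettings` and the settings shape here are hypothetical stand-ins, not the actual @sf-run/pi-coding-agent internals:

// Hypothetical sketch of the persistModelChanges gate; illustrative only.
interface Settings { defaultProvider?: string; defaultModel?: string; }
declare function writeSettings(patch: Partial<Settings>): void; // assumed helper

class SessionSketch {
  constructor(private persistModelChanges: boolean = false) {}

  setModel(provider: string, model: string): void {
    // ...switch the in-memory model here...
    if (this.persistModelChanges) {
      // Only interactive entry points reach this branch (#4251).
      writeSettings({ defaultProvider: provider, defaultModel: model });
    }
  }
}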
@@ -156,9 +156,9 @@ describe("verifyRuntimeDependencies", () => {

   it("includes appName and source in error for retry hint", () => {
     assert.throws(
-      () => verifyRuntimeDependencies(["__missing__"], "github:user/repo", "gsd"),
+      () => verifyRuntimeDependencies(["__missing__"], "github:user/repo", "sf"),
       (err: Error) => {
-        assert.ok(err.message.includes("gsd"));
+        assert.ok(err.message.includes("sf"));
         assert.ok(err.message.includes("github:user/repo"));
         return true;
       },
@@ -45,7 +45,7 @@
     "fileTypes": [".ts", ".tsx", ".js", ".jsx", ".mjs", ".cjs"],
     "rootMarkers": ["package.json", "tsconfig.json", "jsconfig.json"],
     "initOptions": {
-      "hostInfo": "gsd-coding-agent",
+      "hostInfo": "sf-coding-agent",
       "preferences": {
         "includeInlayParameterNameHints": "all",
         "includeInlayVariableTypeHints": true,
@@ -823,7 +823,7 @@ export class DefaultResourceLoader implements ResourceLoader {
   /**
    * Extract the extension name from its path.
    * For root-level files: basename without extension (e.g. "search-the-web.ts" → "search-the-web")
-   * For subdirectory extensions: the directory name (e.g. "/path/to/gsd/index.ts" → "gsd")
+   * For subdirectory extensions: the directory name (e.g. "/path/to/sf/index.ts" → "sf")
    */
   private getExtensionNameFromPath(extPath: string): string {
     const base = basename(extPath);

@@ -840,8 +840,8 @@ export class DefaultResourceLoader implements ResourceLoader {

   /**
    * Extract the extension directory name (key) from a full extension path.
-   * Given extensionsDir `/home/user/.gsd/agent/extensions` and
-   * ownerPath `/home/user/.gsd/agent/extensions/mcp-client/index.js`,
+   * Given extensionsDir `/home/user/.sf/agent/extensions` and
+   * ownerPath `/home/user/.sf/agent/extensions/mcp-client/index.js`,
    * returns `"mcp-client"`. Returns `undefined` when the path is not
    * under extensionsDir.
    */
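The documented contract above is small enough to pin down on its own. A sketch that satisfies the doc comment, written under the assumption that plain path arithmetic is all that's needed; the real DefaultResourceLoader method may differ:

import { relative, isAbsolute, sep } from "node:path";

// Sketch of the documented behavior; not the actual resource-loader code.
function extensionDirKey(extensionsDir: string, ownerPath: string): string | undefined {
  const rel = relative(extensionsDir, ownerPath);
  // Paths outside extensionsDir come back climbing with ".." or as absolute.
  if (rel.startsWith("..") || isAbsolute(rel)) return undefined;
  return rel.split(sep)[0]; // "mcp-client" for ".../extensions/mcp-client/index.js"
}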
@@ -101,7 +101,7 @@ export interface CreateAgentSessionOptions {
   isClaudeCodeReady?: () => boolean;
   /** When false, model changes do NOT write defaultProvider/defaultModel back to
    * settings.json. main.ts sets this to false for print/one-shot mode so
-   * `gsd -p --model X "msg"` cannot mutate the persisted default (#4251). */
+   * `sf -p --model X "msg"` cannot mutate the persisted default (#4251). */
   persistModelChanges?: boolean;
 }
@@ -31,7 +31,7 @@ describe("SessionManager usage totals", () => {
   });

   it("tracks assistant usage incrementally without rescanning entries", () => {
-    dir = mkdtempSync(join(tmpdir(), "gsd-session-manager-test-"));
+    dir = mkdtempSync(join(tmpdir(), "sf-session-manager-test-"));
     const manager = SessionManager.create(dir, dir);

     manager.appendMessage({ role: "user", content: [{ type: "text", text: "hello" }] } as any);

@@ -48,7 +48,7 @@ describe("SessionManager usage totals", () => {
   });

   it("resets totals when starting a new session", () => {
-    dir = mkdtempSync(join(tmpdir(), "gsd-session-manager-test-"));
+    dir = mkdtempSync(join(tmpdir(), "sf-session-manager-test-"));
     const manager = SessionManager.create(dir, dir);
     manager.appendMessage(makeAssistantMessage(5, 5, 0, 0, 0.05));
     assert.equal(manager.getUsageTotals().input, 5);
@@ -20,7 +20,7 @@ export const ECOSYSTEM_SKILLS_DIR = join(homedir(), ".agents", "skills");
 export const ECOSYSTEM_PROJECT_SKILLS_DIR = ".agents";

 /**
- * Legacy skills directory (~/.gsd/agent/skills/ or ~/.pi/agent/skills/).
+ * Legacy skills directory (~/.sf/agent/skills/ or ~/.pi/agent/skills/).
  * Read as a fallback so existing installs don't lose skills before migration runs.
  */
 const LEGACY_SKILLS_DIR = join(homedir(), CONFIG_DIR_NAME, "agent", "skills");

@@ -424,7 +424,7 @@ export function loadSkills(options: LoadSkillsOptions = {}): LoadSkillsResult {
   // Primary project: .agents/skills/ — standard project-level location
   addSkills(loadSkillsFromDirInternal(resolve(cwd, ECOSYSTEM_PROJECT_SKILLS_DIR, "skills"), "project", true));

-  // Legacy fallback: read skills from ~/.gsd/agent/skills/ so existing
+  // Legacy fallback: read skills from ~/.sf/agent/skills/ so existing
   // installs keep working until the one-time migration in resource-loader
   // copies them to ~/.agents/skills/. Skip if migration has completed.
   const legacyMigrated = existsSync(join(LEGACY_SKILLS_DIR, ".migrated-to-agents"));
@@ -30,8 +30,8 @@ const coreDir = join(__dirname, "..");
  * it does not need the guard and should NOT appear here.
  */
 const SPAWN_FILES_NEEDING_SHELL_GUARD = [
-  // Extension's SF client — spawns the `gsd` binary which is a .cmd on Windows
-  join(coreDir, "..", "..", "..", "vscode-extension", "src", "gsd-client.ts"),
+  // Extension's SF client — spawns the `sf` binary which is a .cmd on Windows
+  join(coreDir, "..", "..", "..", "vscode-extension", "src", "sf-client.ts"),
   // exec.ts — used by extensions to run arbitrary commands
   join(coreDir, "exec.ts"),
   // LSP index — spawns project-type commands (tsc, cargo, etc.)

@@ -86,7 +86,7 @@ test("all spawn sites that invoke user-facing binaries include shell: process.pl
     [],
     `The following spawn sites are missing 'shell: process.platform === "win32"':\n` +
       failures.map(f => `  - ${f}`).join("\n") +
-      `\nOn Windows, .cmd wrapper scripts (npm, npx, tsc, gsd) require shell ` +
+      `\nOn Windows, .cmd wrapper scripts (npm, npx, tsc, sf) require shell ` +
       `resolution. Without this guard, spawn fails with ENOENT or EINVAL.`,
   );
 });
@@ -407,7 +407,7 @@ export async function main(args: string[]) {
     // Auto-detect: all models are local, enable offline mode
     process.env.PI_OFFLINE = "1";
     process.env.PI_SKIP_VERSION_CHECK = "1";
-    console.log("[gsd] All configured models are local \u2014 enabling offline mode automatically.");
+    console.log("[sf] All configured models are local \u2014 enabling offline mode automatically.");
   }

   const resourceLoader = new DefaultResourceLoader({
@@ -52,11 +52,11 @@ describe("ToolExecutionComponent", () => {
     const rendered = renderTool(
       "Bash",
       { command: "pwd" },
-      { content: [{ type: "text", text: "/tmp/gsd-pr-fix" }], isError: false },
+      { content: [{ type: "text", text: "/tmp/sf-pr-fix" }], isError: false },
     );

     assert.match(rendered, /\$ pwd/);
-    assert.match(rendered, /\/tmp\/gsd-pr-fix/);
+    assert.match(rendered, /\/tmp\/sf-pr-fix/);
     assert.doesNotMatch(rendered, /^\{\s*\}$/m);
   });
@@ -97,13 +97,13 @@ test("input-controller: built-in slash commands stay in TUI dispatch", async ()
 });

 test("input-controller: extension slash commands fall through to session.prompt", async () => {
-  const { host, prompted, errors, history } = createHost({ knownSlashCommands: ["gsd"] });
+  const { host, prompted, errors, history } = createHost({ knownSlashCommands: ["sf"] });

-  await host.defaultEditor.onSubmit("/gsd help");
+  await host.defaultEditor.onSubmit("/sf help");

-  assert.deepEqual(prompted, ["/gsd help"], "known extension slash commands should reach session.prompt");
+  assert.deepEqual(prompted, ["/sf help"], "known extension slash commands should reach session.prompt");
   assert.deepEqual(errors, [], "known extension slash commands should not show unknown-command errors");
-  assert.deepEqual(history, ["/gsd help"], "known extension slash commands should still be added to history");
+  assert.deepEqual(history, ["/sf help"], "known extension slash commands should still be added to history");
 });

 test("input-controller: prompt template slash commands fall through to session.prompt", async () => {
@@ -129,12 +129,12 @@ export function setupEditorSubmitHandler(host: InteractiveModeStateHost & {
  * Drag-and-drop inserts paths like "/Users/name/Desktop/file.png" which
  * should be treated as plain text input, not a /Users command.
  *
- * Heuristic: a slash command is a single token like "/help" or "/gsd auto".
+ * Heuristic: a slash command is a single token like "/help" or "/sf auto".
  * File paths have a second "/" within the first token (e.g., "/Users/...").
  */
 function looksLikeFilePath(text: string): boolean {
   const firstToken = text.split(/\s/)[0];
-  // Slash commands: /help, /gsd, /commit — single "/" at start only.
+  // Slash commands: /help, /sf, /commit — single "/" at start only.
   // File paths: /Users/name/file, /home/user/file, /tmp/x — contain "/" after position 0.
   return firstToken.indexOf("/", 1) !== -1;
 }
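The heuristic's behavior on the boundary cases is worth spelling out. A quick check of the function exactly as written above (these assertions mirror its logic; they are not a test that ships with the repo):

import assert from "node:assert";

// Mirrors looksLikeFilePath above: a "/" after position 0 in the first token.
const looksLikeFilePath = (text: string) => text.split(/\s/)[0].indexOf("/", 1) !== -1;

assert.equal(looksLikeFilePath("/sf auto"), false);                    // slash command
assert.equal(looksLikeFilePath("/help"), false);                       // slash command
assert.equal(looksLikeFilePath("/Users/name/Desktop/file.png"), true); // dropped file path
assert.equal(looksLikeFilePath("/tmp/x"), true);                       // file path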
@@ -7,7 +7,7 @@ import { tmpdir } from "node:os";
 import { MemoryStorage } from "./storage.js";

 function makeTmpDir(): string {
-  return mkdtempSync(join(tmpdir(), "gsd-memory-storage-test-"));
+  return mkdtempSync(join(tmpdir(), "sf-memory-storage-test-"));
 }

 function wait(ms: number): Promise<void> {
@@ -32,10 +32,10 @@ test("toPosixPath: handles Windows UNC paths", () => {
   assert.equal(toPosixPath("\\\\server\\share\\dir"), "//server/share/dir");
 });

-test("toPosixPath: handles .gsd/worktrees path on Windows", () => {
+test("toPosixPath: handles .sf/worktrees path on Windows", () => {
   assert.equal(
-    toPosixPath("C:\\Users\\name\\project\\.gsd\\worktrees\\M001"),
-    "C:/Users/name/project/.gsd/worktrees/M001",
+    toPosixPath("C:\\Users\\name\\project\\.sf\\worktrees\\M001"),
+    "C:/Users/name/project/.sf/worktrees/M001",
   );
 });

@@ -74,7 +74,7 @@ const WINDOWS_ABS_PATH_RE = /[A-Z]:\\[A-Za-z]/;
 test("buildSystemPrompt: no Windows absolute paths with backslashes in output", () => {
   // Simulate a Windows-like cwd
   const prompt = buildSystemPrompt({
-    cwd: "D:\\Projects\\my-app\\.gsd\\worktrees\\M002",
+    cwd: "D:\\Projects\\my-app\\.sf\\worktrees\\M002",
   });
   const lines = prompt.split("\n");
   const violations = lines.filter(line => WINDOWS_ABS_PATH_RE.test(line));
202
packages/pi-coding-agent/src/utils/proxy-server.ts
Normal file
@@ -0,0 +1,202 @@
+import express from "express";
+import type { Server } from "http";
+import {
+  getModels,
+  stream,
+  type Context,
+  type Message,
+  type Model,
+  type StreamOptions,
+} from "@sf-run/pi-ai";
+import { AuthStorage } from "../core/auth-storage.js";
+import { ModelRegistry } from "../core/model-registry.js";
+
+export type ProxyServerOptions = {
+  port: number;
+  authStorage: AuthStorage;
+  modelRegistry: ModelRegistry;
+  onLog?: (msg: string) => void;
+};
+
+export class ProxyServer {
+  private server: Server | null = null;
+
+  constructor(private options: ProxyServerOptions) {}
+
+  async start(): Promise<void> {
+    if (this.server) return;
+
+    const app = express();
+    app.use(express.json());
+
+    const { authStorage, modelRegistry, onLog } = this.options;
+
+    const log = (msg: string) => onLog?.(msg);
+
+    // 1. Model Listing
+    app.get(["/v1/models", "/v1beta/models"], async (req, res) => {
+      const providers = ["google", "google-gemini-cli", "google-vertex", "anthropic", "openai"];
+      const allModels = providers.flatMap((p) => getModels(p as any));
+
+      const formatted = allModels.map((m) => ({
+        id: m.id,
+        object: "model",
+        created: 1677610602,
+        owned_by: m.provider,
+        name: m.name,
+        capabilities: m.capabilities,
+      }));
+
+      if (req.path.startsWith("/v1beta")) {
+        res.json({ models: formatted });
+      } else {
+        res.json({ data: formatted, object: "list" });
+      }
+    });
+
+    // 2. Chat Completions (OpenAI & GenAI)
+    const handleChat = async (req: express.Request, res: express.Response) => {
+      const body = req.body;
+      const isOpenAi = req.path.includes("/v1/chat/completions");
+      const modelId = isOpenAi ? body.model : req.params.modelId?.replace(/:streamGenerateContent$/, "");
+
+      if (!modelId) {
+        return res.status(400).json({ error: "Model ID is required" });
+      }
+
+      try {
+        // Resolve model and provider
+        const resolvedModel = modelRegistry.getModel(modelId);
+        if (!resolvedModel) {
+          return res.status(404).json({ error: `Model ${modelId} not found` });
+        }
+
+        // Resolve API key
+        const apiKey = await authStorage.getApiKey(resolvedModel.provider);
+        if (!apiKey) {
+          return res.status(401).json({ error: `No API key for provider ${resolvedModel.provider}. Use /login first.` });
+        }
+
+        // Normalize messages
+        const context: Context = isOpenAi
+          ? this.normalizeOpenAi(body)
+          : this.normalizeGoogle(body);
+
+        const streamOptions: StreamOptions = {
+          apiKey,
+          temperature: body.temperature,
+          maxTokens: isOpenAi ? body.max_tokens : body.generationConfig?.maxOutputTokens,
+        };
+
+        const eventStream = stream(resolvedModel as any, context, streamOptions);
+
+        if (body.stream) {
+          this.handleStreamingResponse(eventStream, res, isOpenAi, modelId);
+        } else {
+          await this.handleStaticResponse(eventStream, res, isOpenAi, modelId);
+        }
+
+      } catch (err: any) {
+        log(`Proxy error: ${err.message}`);
+        res.status(500).json({ error: err.message });
+      }
+    };
+
+    app.post("/v1/chat/completions", handleChat);
+    app.post("/v1beta/models/:modelId\\:streamGenerateContent", handleChat);
+
+    return new Promise((resolve) => {
+      this.server = app.listen(this.options.port, () => {
+        log(`Proxy Server running on http://localhost:${this.options.port}`);
+        resolve();
+      });
+    });
+  }
+
+  stop(): void {
+    if (this.server) {
+      this.server.close();
+      this.server = null;
+    }
+  }
+
+  private normalizeOpenAi(body: any): Context {
+    const messages = body.messages || [];
+    const system = messages.find((m: any) => m.role === "system")?.content;
+    const history = messages.filter((m: any) => m.role !== "system").map((m: any) => ({
+      role: m.role === "user" ? "user" : "assistant",
+      content: typeof m.content === "string" ? [{ type: "text", text: m.content }] : m.content,
+    }));
+    return { messages: history, systemPrompt: system };
+  }
+
+  private normalizeGoogle(body: any): Context {
+    const contents = body.contents || [];
+    const history = contents.map((c: any) => ({
+      role: c.role === "user" ? "user" : "assistant",
+      content: (c.parts || []).map((p: any) => ({ type: "text", text: p.text })),
+    }));
+    const system = body.systemInstruction?.parts?.[0]?.text;
+    return { messages: history, systemPrompt: system };
+  }
+
+  private handleStreamingResponse(eventStream: any, res: express.Response, isOpenAi: boolean, modelId: string) {
+    res.setHeader("Content-Type", isOpenAi ? "text/event-stream" : "application/json");
+
+    eventStream.on("data", (ev: any) => {
+      if (ev.type === "text_delta") {
+        if (isOpenAi) {
+          const chunk = {
+            id: `chatcmpl-${Date.now()}`,
+            object: "chat.completion.chunk",
+            created: Math.floor(Date.now() / 1000),
+            model: modelId,
+            choices: [{ index: 0, delta: { content: ev.delta }, finish_reason: null }],
+          };
+          res.write(`data: ${JSON.stringify(chunk)}\n\n`);
+        } else {
+          const chunk = { candidates: [{ content: { parts: [{ text: ev.delta }] } }] };
+          res.write(JSON.stringify(chunk) + "\n");
+        }
+      }
+    });
+
+    eventStream.on("done", () => {
+      if (isOpenAi) res.write("data: [DONE]\n\n");
+      res.end();
+    });
+
+    eventStream.on("error", (ev: any) => {
+      if (!res.headersSent) res.status(500).json({ error: ev.error.errorMessage });
+      else res.end();
+    });
+  }
+
+  private async handleStaticResponse(eventStream: any, res: express.Response, isOpenAi: boolean, modelId: string) {
+    let fullContent = "";
+    eventStream.on("data", (ev: any) => {
+      if (ev.type === "text_delta") fullContent += ev.delta;
+    });
+
+    return new Promise<void>((resolve) => {
+      eventStream.on("done", () => {
+        if (isOpenAi) {
+          res.json({
+            id: `chatcmpl-${Date.now()}`,
+            object: "chat.completion",
+            created: Math.floor(Date.now() / 1000),
+            model: modelId,
+            choices: [{ index: 0, message: { role: "assistant", content: fullContent }, finish_reason: "stop" }],
+          });
+        } else {
+          res.json({ candidates: [{ content: { parts: [{ text: fullContent }] } }] });
+        }
+        resolve();
+      });
+      eventStream.on("error", (ev: any) => {
+        res.status(500).json({ error: ev.error.errorMessage });
+        resolve();
+      });
+    });
+  }
+}
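Wiring the proxy up only touches the public surface shown above. A hedged usage sketch; how the host app actually constructs AuthStorage and ModelRegistry is app-specific, so the declares below are placeholders:

// Usage sketch for the ProxyServer above; constructor inputs are placeholders.
declare const authStorage: AuthStorage;     // provided by the host app
declare const modelRegistry: ModelRegistry; // provided by the host app

const proxy = new ProxyServer({
  port: 8787,
  authStorage,
  modelRegistry,
  onLog: (msg) => console.log(msg),
});
await proxy.start();
// An OpenAI-style client can now target http://localhost:8787/v1/chat/completions,
// and a GenAI-style client /v1beta/models/<id>:streamGenerateContent.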
@@ -260,7 +260,7 @@ describe("RpcClient construction", () => {

   it("creates with custom options", () => {
     const client = new RpcClient({
-      cliPath: "/usr/local/bin/gsd",
+      cliPath: "/usr/local/bin/sf",
       cwd: "/tmp",
       env: { NODE_ENV: "test" },
       provider: "anthropic",
@@ -114,7 +114,7 @@ should_scan() {
   esac
   # Skip generated/vendor dirs
   case "$file" in
-    node_modules/*|dist/*|coverage/*|.gsd/*)
+    node_modules/*|dist/*|coverage/*|.sf/*)
       return 1 ;;
   esac
   return 0
@@ -3,7 +3,7 @@
 * Rebuild the Next.js web host only when web source files are newer than the
 * staged standalone build. Skips the build when nothing has changed.
 *
-* Also self-heals a missing/incomplete web dependency install so `npm run gsd:web`
+* Also self-heals a missing/incomplete web dependency install so `npm run sf:web`
 * doesn't fail with bare `next` command-not-found errors.
 *
 * Exit codes:
@@ -175,8 +175,8 @@ async function main() {
   const { existsSync } = await import('node:fs');
   const testDirsToClean = [
     [join(ROOT, 'dist-test', 'src', 'tests'), join(ROOT, 'src', 'tests')],
-    [join(ROOT, 'dist-test', 'src', 'resources', 'extensions', 'gsd', 'tests'),
-     join(ROOT, 'src', 'resources', 'extensions', 'gsd', 'tests')],
+    [join(ROOT, 'dist-test', 'src', 'resources', 'extensions', 'sf', 'tests'),
+     join(ROOT, 'src', 'resources', 'extensions', 'sf', 'tests')],
   ];
   let staleCleaned = 0;
   for (const [distDir, srcDir] of testDirsToClean) {
@@ -7,7 +7,7 @@ import { fileURLToPath } from 'node:url'
 const __dirname = dirname(fileURLToPath(import.meta.url))
 const root = resolve(__dirname, '..')
 const srcLoaderPath = resolve(root, 'src', 'loader.ts')
-const resolveTsPath = resolve(root, 'src', 'resources', 'extensions', 'gsd', 'tests', 'resolve-ts.mjs')
+const resolveTsPath = resolve(root, 'src', 'resources', 'extensions', 'sf', 'tests', 'resolve-ts.mjs')

 const child = spawn(
   process.execPath,
@@ -5,7 +5,7 @@
 * .js files still import '../foo.ts'. This hook redirects those to '.js' so
 * Node can find the compiled output.
 *
-* Also redirects @gsd bare imports to their compiled counterparts in dist-test.
+* Also redirects @sf bare imports to their compiled counterparts in dist-test.
 */

import { fileURLToPath, pathToFileURL } from 'node:url';
@@ -4,7 +4,7 @@ import { execFileSync } from 'node:child_process';
 import { chmodSync, existsSync, mkdirSync, readFileSync, writeFileSync } from 'node:fs';
 import { join } from 'node:path';

-const MARKER = '# gsd-secret-scan';
+const MARKER = '# sf-secret-scan';

 function git(args) {
   return execFileSync('git', args, {

@@ -36,7 +36,7 @@ if (existsSync(hookFile)) {

 const hookBody = [
   '#!/usr/bin/env sh',
-  '# gsd-secret-scan',
+  '# sf-secret-scan',
   '# Pre-commit hook: scan staged files for hardcoded secrets',
   hookCommand,
   '',
@@ -6,7 +6,7 @@ set -euo pipefail

 HOOK_DIR="$(git rev-parse --git-dir)/hooks"
 HOOK_FILE="$HOOK_DIR/pre-commit"
-MARKER="# gsd-secret-scan"
+MARKER="# sf-secret-scan"

 mkdir -p "$HOOK_DIR"

@@ -25,7 +25,7 @@ if [[ -f "$HOOK_FILE" ]]; then
 else
   cat > "$HOOK_FILE" << 'EOF'
 #!/usr/bin/env bash
-# gsd-secret-scan
+# sf-secret-scan
 # Pre-commit hook: scan staged files for hardcoded secrets
 bash "$(git rev-parse --show-toplevel)/scripts/secret-scan.sh"
 EOF
@@ -18,15 +18,15 @@
 *   --heal                Auto-respawn dead workers (opt-in, off by default)
 *   --heal-retries <n>    Max respawn attempts per worker (default: 3)
 *   --heal-cooldown <sec> Seconds between respawn attempts (default: 30)
-*   --dir <path>          Status file directory (default: .gsd/parallel)
+*   --dir <path>          Status file directory (default: .sf/parallel)
 *   --root <path>         Project root (default: cwd)
 *
 * Data sources:
-*   .gsd/parallel/M0xx.status.json — heartbeat, cost, state (written by orchestrator)
-*   .gsd/worktrees/M0xx/.gsd/auto.lock — current unit type + ID (written by worker)
-*   .gsd/worktrees/M0xx/.gsd/gsd.db — task/slice completion (SQLite, queried via cli)
-*   .gsd/parallel/M0xx.stdout.log — NDJSON events (cost extraction, notify messages)
-*   .gsd/parallel/M0xx.stderr.log — error surfacing
+*   .sf/parallel/M0xx.status.json — heartbeat, cost, state (written by orchestrator)
+*   .sf/worktrees/M0xx/.sf/auto.lock — current unit type + ID (written by worker)
+*   .sf/worktrees/M0xx/.sf/sf.db — task/slice completion (SQLite, queried via cli)
+*   .sf/parallel/M0xx.stdout.log — NDJSON events (cost extraction, notify messages)
+*   .sf/parallel/M0xx.stderr.log — error surfacing
 *
 * Health indicators:
 *   ● green — PID alive, fresh heartbeat (<30s)

@@ -48,7 +48,7 @@ import { execSync, spawn, spawnSync } from 'node:child_process';

 const args = process.argv.slice(2);
 const INTERVAL_SEC = parseInt(getArg('--interval', '5'), 10);
-const PARALLEL_DIR = getArg('--dir', '.gsd/parallel');
+const PARALLEL_DIR = getArg('--dir', '.sf/parallel');
 const PROJECT_ROOT = getArg('--root', process.cwd());
 const ONE_SHOT = args.includes('--once');
 const HEAL_MODE = args.includes('--heal');

@@ -122,7 +122,7 @@ function isPidAlive(pid) {

 function discoverWorkers() {
   const dir = path.resolve(PROJECT_ROOT, PARALLEL_DIR);
-  const worktreeDir = path.resolve(PROJECT_ROOT, '.gsd/worktrees');
+  const worktreeDir = path.resolve(PROJECT_ROOT, '.sf/worktrees');
   const mids = new Set();

   // From status files

@@ -143,7 +143,7 @@ function discoverWorkers() {
   // From worktree directories that have auto.lock (actively running)
   if (fs.existsSync(worktreeDir)) {
     for (const d of fs.readdirSync(worktreeDir)) {
-      if (d.startsWith('M') && fs.existsSync(path.join(worktreeDir, d, '.gsd', 'auto.lock'))) {
+      if (d.startsWith('M') && fs.existsSync(path.join(worktreeDir, d, '.sf', 'auto.lock'))) {
         mids.add(d);
       }
     }

@@ -158,12 +158,12 @@ function readWorkerStatus(mid) {
 }

 function readAutoLock(mid) {
-  const lockPath = path.resolve(PROJECT_ROOT, `.gsd/worktrees/${mid}/.gsd/auto.lock`);
+  const lockPath = path.resolve(PROJECT_ROOT, `.sf/worktrees/${mid}/.sf/auto.lock`);
   return readJsonSafe(lockPath);
 }

 function querySliceProgress(mid) {
-  const dbPath = path.resolve(PROJECT_ROOT, `.gsd/worktrees/${mid}/.gsd/gsd.db`);
+  const dbPath = path.resolve(PROJECT_ROOT, `.sf/worktrees/${mid}/.sf/sf.db`);
   if (!fs.existsSync(dbPath)) return [];

   try {

@@ -276,7 +276,7 @@ function extractCostFromNdjson(mid) {

 // Auto-detect the SF loader path — works across npm global, homebrew, and local installs
 function findGsdLoader() {
-  // 1. Check if we're running from inside the gsd-2 repo itself
+  // 1. Check if we're running from inside the sf-2 repo itself
   const repoLoader = path.resolve(import.meta.dirname, '..', 'dist', 'loader.js');
   if (fs.existsSync(repoLoader)) return repoLoader;

@@ -285,17 +285,17 @@ function findGsdLoader() {
     const globalRoot = execSync('npm root -g', { encoding: 'utf-8', timeout: 3000 }).trim();
     const candidates = [
       path.join(globalRoot, 'sf-run', 'dist', 'loader.js'),
-      path.join(globalRoot, '@gsd', 'pi', 'dist', 'loader.js'),
+      path.join(globalRoot, '@sf', 'pi', 'dist', 'loader.js'),
     ];
     for (const c of candidates) {
       if (fs.existsSync(c)) return c;
     }
   } catch { /* skip */ }

-  // 3. Try `which gsd` and resolve symlink
+  // 3. Try `which sf` and resolve symlink
   try {
     const pathLookup = process.platform === 'win32' ? 'where.exe' : 'which';
-    const lookupArgs = ['gsd'];
+    const lookupArgs = ['sf'];
     const result = spawnSync(pathLookup, lookupArgs, { encoding: 'utf-8', timeout: 3000 });
     const bin = result.status === 0 ? result.stdout.trim().split(/\r?\n/)[0]?.trim() : '';
     if (bin) {

@@ -315,7 +315,7 @@ const SF_LOADER = findGsdLoader();
 * Uses a detached Node child with log file descriptors so the child is fully detached.
 */
 function respawnWorker(mid) {
-  const worktreeDir = path.resolve(PROJECT_ROOT, `.gsd/worktrees/${mid}`);
+  const worktreeDir = path.resolve(PROJECT_ROOT, `.sf/worktrees/${mid}`);
   if (!fs.existsSync(worktreeDir)) return null;
   if (!fs.existsSync(SF_LOADER)) return null;

@@ -517,7 +517,7 @@ function truncate(str, maxLen) {
 * Get recently completed tasks/slices from the worktree DB for the event feed.
 */
 function queryRecentCompletions(mid) {
-  const dbPath = path.resolve(PROJECT_ROOT, `.gsd/worktrees/${mid}/.gsd/gsd.db`);
+  const dbPath = path.resolve(PROJECT_ROOT, `.sf/worktrees/${mid}/.sf/sf.db`);
   if (!fs.existsSync(dbPath)) return [];

   try {

@@ -653,7 +653,7 @@ function render(workers) {
   if (workers.length === 0) {
     buf.push('');
     buf.push(`  ${FG.yellow}No workers found in ${PARALLEL_DIR}/${RESET}`);
-    buf.push(`  ${DIM}Waiting for .gsd/parallel/*.status.json files...${RESET}`);
+    buf.push(`  ${DIM}Waiting for .sf/parallel/*.status.json files...${RESET}`);
   } else {
     for (const wk of workers) {
       buf.push('');
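The health indicators in that header boil down to two cheap checks: a signal-0 probe for the PID and a heartbeat-age cutoff. A minimal sketch of that logic; the 30s threshold comes from the header comment, but the status-file field names (`pid`, `heartbeatAt`) are assumptions, not the script's actual schema:

import { readFileSync } from "node:fs";

// Sketch of the liveness checks described above; field names are assumed.
function isPidAlive(pid: number): boolean {
  try {
    process.kill(pid, 0); // signal 0: existence check only, nothing is delivered
    return true;
  } catch {
    return false;
  }
}

function isHealthy(statusFile: string, maxHeartbeatAgeMs = 30_000): boolean {
  const status = JSON.parse(readFileSync(statusFile, "utf-8"));
  const fresh = Date.now() - Date.parse(status.heartbeatAt) < maxHeartbeatAgeMs;
  return isPidAlive(status.pid) && fresh;
}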
@@ -28,7 +28,7 @@ const RTK_SKIP =
 const RTK_VERSION = '0.33.1'
 const RTK_REPO = 'rtk-ai/rtk'
 const RTK_ENV = { ...process.env, RTK_TELEMETRY_DISABLED: '1' }
-const managedBinDir = join(process.env.SF_HOME || process.env.GSD_HOME || join(homedir(), '.gsd'), 'agent', 'bin')
+const managedBinDir = join(process.env.SF_HOME || process.env.GSD_HOME || join(homedir(), '.sf'), 'agent', 'bin')
 const managedBinaryPath = join(managedBinDir, platform() === 'win32' ? 'rtk.exe' : 'rtk')

 function run(cmd) {
@@ -124,7 +124,7 @@ function normalizePath(filePath) {
 * Check if a changed file matches a map entry pattern.
 * Supports:
 *   - Exact suffix match: src/cli.ts matches src/cli.ts
-*   - Glob prefix match: gsd/auto/* matches gsd/auto/anything.ts
+*   - Glob prefix match: sf/auto/* matches sf/auto/anything.ts
 *   - Wildcard extension: *.tsx matches any .tsx
 */
 function fileMatchesPattern(filePath, pattern) {
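The three pattern kinds in that doc comment are small enough to sketch end to end. An assumed implementation consistent with the comment; the real fileMatchesPattern may differ in details such as path normalization:

// Sketch of the three documented pattern kinds; illustrative only.
function matchesPattern(filePath: string, pattern: string): boolean {
  if (pattern.startsWith("*.")) {
    return filePath.endsWith(pattern.slice(1));     // "*.tsx" -> any ".tsx" file
  }
  if (pattern.endsWith("/*")) {
    return filePath.includes(pattern.slice(0, -1)); // "sf/auto/*" -> "sf/auto/" prefix
  }
  return filePath.endsWith(pattern);                // exact suffix: "src/cli.ts"
}

// matchesPattern("packages/app/src/cli.ts", "src/cli.ts") === true
// matchesPattern("sf/auto/anything.ts", "sf/auto/*") === true
// matchesPattern("ui/Button.tsx", "*.tsx") === true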
@ -1,18 +1,18 @@
|
|||
# recover-gsd-1364.ps1 - Recovery script for issue #1364 (Windows)
|
||||
# recover-sf-1364.ps1 - Recovery script for issue #1364 (Windows)
|
||||
#
|
||||
# CRITICAL DATA-LOSS BUG: SF versions 2.30.0-2.35.x unconditionally added
|
||||
# ".gsd" to .gitignore via ensureGitignore(), causing git to report all
|
||||
# tracked .gsd/ files as deleted. Fixed in v2.36.0 (PR #1367).
|
||||
# ".sf" to .gitignore via ensureGitignore(), causing git to report all
|
||||
# tracked .sf/ files as deleted. Fixed in v2.36.0 (PR #1367).
|
||||
#
|
||||
# This script:
|
||||
# 1. Detects whether the repo was affected
|
||||
# 2. Finds the last clean commit before the damage
|
||||
# 3. Restores all deleted .gsd/ files from that commit
|
||||
# 4. Removes the bad ".gsd" line from .gitignore (if .gsd/ is tracked)
|
||||
# 3. Restores all deleted .sf/ files from that commit
|
||||
# 4. Removes the bad ".sf" line from .gitignore (if .sf/ is tracked)
|
||||
# 5. Prints a ready-to-commit summary
|
||||
#
|
||||
# Usage:
|
||||
# powershell -ExecutionPolicy Bypass -File scripts\recover-gsd-1364.ps1 [-DryRun]
|
||||
# powershell -ExecutionPolicy Bypass -File scripts\recover-sf-1364.ps1 [-DryRun]
|
||||
#
|
||||
# Options:
|
||||
# -DryRun Show what would be done without making any changes
|
||||
|
|
@ -66,7 +66,7 @@ function Invoke-GitOrDryRun {
|
|||
}
|
||||
|
||||
# Check whether a path is a symlink OR a junction (Windows uses junctions for
|
||||
# the .gsd external-state migration via symlinkSync(..., "junction"))
|
||||
# the .sf external-state migration via symlinkSync(..., "junction"))
|
||||
function Test-ReparsePoint {
|
||||
param([string]$Path)
|
||||
if (-not (Test-Path $Path)) { return $false }
|
||||
|
|
@ -99,30 +99,30 @@ if ($DryRun) {
|
|||
Write-Warn "DRY-RUN mode — no changes will be made."
|
||||
}
|
||||
|
||||
# ── Step 1: Detect .gsd/ ─────────────────────────────────────────────────────
|
||||
# ── Step 1: Detect .sf/ ─────────────────────────────────────────────────────
|
||||
|
||||
Write-Section "── Step 1: Detect .gsd/ directory ─────────────────────────────────"
|
||||
Write-Section "── Step 1: Detect .sf/ directory ─────────────────────────────────"
|
||||
|
||||
$sfDir = Join-Path $repoRoot '.gsd'
|
||||
$sfDir = Join-Path $repoRoot '.sf'
|
||||
$GsdIsSymlink = $false
|
||||
|
||||
if (-not (Test-Path $sfDir)) {
|
||||
Write-Ok ".gsd/ does not exist in this repo — not affected."
|
||||
Write-Ok ".sf/ does not exist in this repo — not affected."
|
||||
exit 0
|
||||
}
|
||||
|
||||
if (Test-ReparsePoint $sfDir) {
|
||||
# Scenario C: migration succeeded (symlink/junction in place) but git index was never
|
||||
# cleaned — tracked .gsd/* files still appear as deleted through the reparse point.
|
||||
# cleaned — tracked .sf/* files still appear as deleted through the reparse point.
|
||||
$GsdIsSymlink = $true
|
||||
Write-Warn ".gsd/ is a symlink/junction — checking for stale git index entries (Scenario C)..."
|
||||
Write-Warn ".sf/ is a symlink/junction — checking for stale git index entries (Scenario C)..."
|
||||
} else {
|
||||
Write-Info ".gsd/ is a real directory (Scenario A/B)."
|
||||
Write-Info ".sf/ is a real directory (Scenario A/B)."
|
||||
}
|
||||
|
||||
# ── Step 2: Check .gitignore for .gsd entry ──────────────────────────────────
|
||||
# ── Step 2: Check .gitignore for .sf entry ──────────────────────────────────
|
||||
|
||||
Write-Section "── Step 2: Check .gitignore for .gsd entry ─────────────────────────"
|
||||
Write-Section "── Step 2: Check .gitignore for .sf entry ─────────────────────────"
|
||||
|
||||
$gitignorePath = Join-Path $repoRoot '.gitignore'
|
||||
|
||||
|
|
@ -137,36 +137,36 @@ if (Test-Path $gitignorePath) {
|
|||
$gitignoreLines = Get-Content $gitignorePath -Encoding UTF8
|
||||
$gsdIgnoreLine = $gitignoreLines | Where-Object {
|
||||
$trimmed = $_.Trim()
|
||||
$trimmed -eq '.gsd' -and -not $trimmed.StartsWith('#')
|
||||
$trimmed -eq '.sf' -and -not $trimmed.StartsWith('#')
|
||||
} | Select-Object -First 1
|
||||
}
|
||||
|
||||
if ($GsdIsSymlink) {
|
||||
# Symlink layout: .gsd SHOULD be ignored (it's external state).
|
||||
# Symlink layout: .sf SHOULD be ignored (it's external state).
|
||||
if (-not $gsdIgnoreLine) {
|
||||
Write-Warn '".gsd" missing from .gitignore — will add (migration complete, .gsd/ is external).'
|
||||
Write-Warn '".sf" missing from .gitignore — will add (migration complete, .sf/ is external).'
|
||||
} else {
|
||||
Write-Ok '".gsd" already in .gitignore — correct for external-state layout.'
|
||||
Write-Ok '".sf" already in .gitignore — correct for external-state layout.'
|
||||
}
|
||||
} else {
|
||||
# Real-directory layout: .gsd should NOT be ignored.
|
||||
# Real-directory layout: .sf should NOT be ignored.
|
||||
if (-not $gsdIgnoreLine) {
|
||||
Write-Ok '".gsd" not found in .gitignore — .gitignore not affected.'
|
||||
Write-Ok '".sf" not found in .gitignore — .gitignore not affected.'
|
||||
} else {
|
||||
Write-Warn '".gsd" found in .gitignore — this is the bad pattern from #1364.'
|
||||
Write-Warn '".sf" found in .gitignore — this is the bad pattern from #1364.'
|
||||
}
|
||||
}
|
||||
|
||||
# ── Step 3: Find deleted .gsd/ files ─────────────────────────────────────────
|
||||
# ── Step 3: Find deleted .sf/ files ─────────────────────────────────────────
|
||||
|
||||
Write-Section "── Step 3: Find deleted .gsd/ files ───────────────────────────────"
|
||||
Write-Section "── Step 3: Find deleted .sf/ files ───────────────────────────────"
|
||||
|
||||
# Files deleted in working tree (tracked but missing)
|
||||
$deletedRaw = Invoke-Git @('ls-files', '--deleted', '--', '.gsd/*') -AllowFailure
|
||||
$deletedRaw = Invoke-Git @('ls-files', '--deleted', '--', '.sf/*') -AllowFailure
|
||||
$deletedFiles = if ($deletedRaw) { $deletedRaw -split "`n" | Where-Object { $_ } } else { @() }
|
||||
|
||||
# Files tracked in HEAD right now
|
||||
$trackedInHeadRaw = Invoke-Git @('ls-tree', '-r', '--name-only', 'HEAD', '--', '.gsd/') -AllowFailure
|
||||
$trackedInHeadRaw = Invoke-Git @('ls-tree', '-r', '--name-only', 'HEAD', '--', '.sf/') -AllowFailure
|
||||
$trackedInHead = if ($trackedInHeadRaw) { $trackedInHeadRaw -split "`n" | Where-Object { $_ } } else { @() }
|
||||
|
||||
$deletedFromHistory = @()
|
||||
|
|
@ -176,34 +176,34 @@ if ($GsdIsSymlink) {
|
|||
if ($trackedInHead.Count -eq 0 -and $deletedFiles.Count -eq 0) {
|
||||
Write-Ok "No stale index entries found — symlink/junction layout is healthy."
|
||||
if (-not $gsdIgnoreLine) {
|
||||
Write-Info "Add .gsd to .gitignore manually to complete the migration."
|
||||
Write-Info "Add .sf to .gitignore manually to complete the migration."
|
||||
}
|
||||
exit 0
|
||||
}
|
||||
$indexCount = if ($trackedInHead.Count -gt 0) { $trackedInHead.Count } else { $deletedFiles.Count }
|
||||
Write-Warn "Scenario C: $indexCount .gsd/ file(s) tracked in git index but inaccessible through reparse point."
|
||||
Write-Warn "Scenario C: $indexCount .sf/ file(s) tracked in git index but inaccessible through reparse point."
|
||||
Write-Info "Files are safe in external storage — only the git index needs cleaning."
|
||||
} else {
|
||||
# Files deleted in committed history (post-commit damage scenario — Scenario B)
|
||||
$deletedHistoryRaw = Invoke-Git @('log', '--all', '--diff-filter=D', '--name-only', '--format=', '--', '.gsd/*') -AllowFailure
|
||||
$deletedHistoryRaw = Invoke-Git @('log', '--all', '--diff-filter=D', '--name-only', '--format=', '--', '.sf/*') -AllowFailure
|
||||
$deletedFromHistory = if ($deletedHistoryRaw) {
|
||||
$deletedHistoryRaw -split "`n" | Where-Object { $_ -match '^\.gsd' } | Sort-Object -Unique
|
||||
$deletedHistoryRaw -split "`n" | Where-Object { $_ -match '^\.sf' } | Sort-Object -Unique
|
||||
} else { @() }
|
||||
|
||||
# Nothing was ever tracked in any scenario
|
||||
if ($trackedInHead.Count -eq 0 -and $deletedFiles.Count -eq 0 -and $deletedFromHistory.Count -eq 0) {
|
||||
Write-Ok "No .gsd/ files tracked in this repo — not affected by #1364."
|
||||
Write-Ok "No .sf/ files tracked in this repo — not affected by #1364."
|
||||
if ($gsdIgnoreLine) {
|
||||
Write-Warn '".gsd" is still in .gitignore but there is nothing to restore.'
|
||||
Write-Warn '".sf" is still in .gitignore but there is nothing to restore.'
|
||||
}
|
||||
exit 0
|
||||
}
|
||||
|
||||
# Determine scenario
|
||||
if ($trackedInHead.Count -gt 0) {
|
||||
Write-Info "Scenario A: $($trackedInHead.Count) .gsd/ files still tracked in HEAD."
|
||||
Write-Info "Scenario A: $($trackedInHead.Count) .sf/ files still tracked in HEAD."
|
||||
} elseif ($deletedFromHistory.Count -gt 0) {
|
||||
Write-Warn "Scenario B: $($deletedFromHistory.Count) .gsd/ file(s) were tracked but deleted in a committed change:"
|
||||
Write-Warn "Scenario B: $($deletedFromHistory.Count) .sf/ file(s) were tracked but deleted in a committed change:"
|
||||
$deletedFromHistory | Select-Object -First 20 | ForEach-Object { Write-Host " - $_" }
|
||||
if ($deletedFromHistory.Count -gt 20) {
|
||||
Write-Host " ... and $($deletedFromHistory.Count - 20) more"
|
||||
|
|
@ -211,7 +211,7 @@ if ($GsdIsSymlink) {
|
|||
}
|
||||
|
||||
if ($deletedFiles.Count -gt 0) {
|
||||
Write-Warn "$($deletedFiles.Count) .gsd/ file(s) are missing from working tree (tracked but deleted/gitignored):"
|
||||
Write-Warn "$($deletedFiles.Count) .sf/ file(s) are missing from working tree (tracked but deleted/gitignored):"
|
||||
$deletedFiles | Select-Object -First 20 | ForEach-Object { Write-Host " - $_" }
|
||||
if ($deletedFiles.Count -gt 20) {
|
||||
Write-Host " ... and $($deletedFiles.Count - 20) more"
|
||||
|
|
@ -221,10 +221,10 @@ if ($GsdIsSymlink) {
|
|||
# HEAD has files and working tree is clean — only .gitignore needs fixing
|
||||
if ($trackedInHead.Count -gt 0 -and $deletedFiles.Count -eq 0) {
|
||||
if (-not $gsdIgnoreLine) {
|
||||
Write-Ok "No action needed — .gsd/ is tracked in HEAD and .gitignore is clean."
|
||||
Write-Ok "No action needed — .sf/ is tracked in HEAD and .gitignore is clean."
|
||||
exit 0
|
||||
}
|
||||
Write-Info ".gsd/ is tracked in HEAD and working tree is clean — only .gitignore needs fixing."
|
||||
Write-Info ".sf/ is tracked in HEAD and working tree is clean — only .gitignore needs fixing."
|
||||
}
|
||||
}
|
||||
|
||||
|
|
@ -239,24 +239,24 @@ $restorableFiles = @()
|
|||
if ($GsdIsSymlink) {
|
||||
Write-Info "Scenario C: symlink/junction layout — skipping commit history scan (no file restore needed)."
|
||||
} else {
|
||||
Write-Info "Scanning git log to find when .gsd was added to .gitignore..."
|
||||
Write-Info "Scanning git log to find when .sf was added to .gitignore..."
|
||||
|
||||
# Strategy 1: find first commit that added ".gsd" to .gitignore
|
||||
# Strategy 1: find first commit that added ".sf" to .gitignore
|
||||
$gitignoreCommits = Invoke-Git @('log', '--format=%H', '--', '.gitignore') -AllowFailure
|
||||
if ($gitignoreCommits) {
|
||||
foreach ($sha in ($gitignoreCommits -split "`n" | Where-Object { $_ })) {
|
||||
$content = Invoke-Git @('show', "${sha}:.gitignore") -AllowFailure
|
||||
if ($content -and ($content -split "`n" | Where-Object { $_.Trim() -eq '.gsd' })) {
|
||||
if ($content -and ($content -split "`n" | Where-Object { $_.Trim() -eq '.sf' })) {
|
||||
$damageCommit = $sha
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
# Strategy 2: find commit that deleted .gsd/ files
|
||||
# Strategy 2: find commit that deleted .sf/ files
|
||||
if (-not $damageCommit -and $deletedFromHistory.Count -gt 0) {
|
||||
Write-Info "Searching for the commit that deleted .gsd/ files from the index..."
|
||||
$deleteCommits = Invoke-Git @('log', '--all', '--diff-filter=D', '--format=%H', '--', '.gsd/*') -AllowFailure
|
||||
Write-Info "Searching for the commit that deleted .sf/ files from the index..."
|
||||
$deleteCommits = Invoke-Git @('log', '--all', '--diff-filter=D', '--format=%H', '--', '.sf/*') -AllowFailure
|
||||
if ($deleteCommits) {
|
||||
$damageCommit = ($deleteCommits -split "`n" | Where-Object { $_ } | Select-Object -First 1)
|
||||
}
|
||||
|
|
@ -274,15 +274,15 @@ if ($GsdIsSymlink) {
|
|||
Write-Info "Restoring from: $cleanCommit — $cleanMsg"
|
||||
}
|
||||
|
||||
# Verify restore point has .gsd/ files
|
||||
$restorable = Invoke-Git @('ls-tree', '-r', '--name-only', $cleanCommit, '--', '.gsd/') -AllowFailure
|
||||
# Verify restore point has .sf/ files
|
||||
$restorable = Invoke-Git @('ls-tree', '-r', '--name-only', $cleanCommit, '--', '.sf/') -AllowFailure
|
||||
$restorableFiles = if ($restorable) { $restorable -split "`n" | Where-Object { $_ } } else { @() }
|
||||
|
||||
if ($restorableFiles.Count -eq 0) {
|
||||
Exit-Fatal "No .gsd/ files found in restore point $cleanCommit — cannot recover. Check git log manually."
|
||||
Exit-Fatal "No .sf/ files found in restore point $cleanCommit — cannot recover. Check git log manually."
|
||||
}
|
||||
|
||||
Write-Ok "Restore point has $($restorableFiles.Count) .gsd/ files available."
|
||||
Write-Ok "Restore point has $($restorableFiles.Count) .sf/ files available."
|
||||
}
|
||||
|
||||
# ── Step 5: Clean index (Scenario C) or restore deleted files (Scenario A/B) ─
|
||||
|
|
@ -290,34 +290,34 @@ if ($GsdIsSymlink) {
|
|||
if ($GsdIsSymlink) {
|
||||
Write-Section "── Step 5: Clean stale git index entries ───────────────────────────"
|
||||
|
||||
Write-Info "Running: git rm -r --cached --ignore-unmatch .gsd/ ..."
|
||||
Invoke-GitOrDryRun -GitArgs @('rm', '-r', '--cached', '--ignore-unmatch', '.gsd') -Display "rm -r --cached --ignore-unmatch .gsd"
|
||||
Write-Info "Running: git rm -r --cached --ignore-unmatch .sf/ ..."
|
||||
Invoke-GitOrDryRun -GitArgs @('rm', '-r', '--cached', '--ignore-unmatch', '.sf') -Display "rm -r --cached --ignore-unmatch .sf"
|
||||
|
||||
if (-not $DryRun) {
|
||||
$stillStaleRaw = Invoke-Git @('ls-files', '--deleted', '--', '.gsd/*') -AllowFailure
|
||||
$stillStaleRaw = Invoke-Git @('ls-files', '--deleted', '--', '.sf/*') -AllowFailure
|
||||
$stillStale = if ($stillStaleRaw) { $stillStaleRaw -split "`n" | Where-Object { $_ } } else { @() }
|
||||
if ($stillStale.Count -eq 0) {
|
||||
Write-Ok "Git index cleaned — no stale .gsd/ entries remain."
|
||||
Write-Ok "Git index cleaned — no stale .sf/ entries remain."
|
||||
} else {
|
||||
Write-Warn "$($stillStale.Count) stale entr(ies) still present — may need manual cleanup."
|
||||
}
|
||||
}
|
||||
} else {
|
||||
Write-Section "── Step 5: Restore deleted .gsd/ files ────────────────────────────"
|
||||
Write-Section "── Step 5: Restore deleted .sf/ files ────────────────────────────"
|
||||
|
||||
$needsRestore = ($deletedFiles.Count -gt 0) -or ($deletedFromHistory.Count -gt 0 -and $trackedInHead.Count -eq 0)
|
||||
|
||||
if (-not $needsRestore) {
|
||||
Write-Ok "No deleted files to restore — skipping."
|
||||
} else {
|
||||
Write-Info "Restoring .gsd/ files from $cleanCommit..."
|
||||
Invoke-GitOrDryRun -GitArgs @('checkout', $cleanCommit, '--', '.gsd/') -Display "checkout $cleanCommit -- .gsd/"
|
||||
Write-Info "Restoring .sf/ files from $cleanCommit..."
|
||||
Invoke-GitOrDryRun -GitArgs @('checkout', $cleanCommit, '--', '.sf/') -Display "checkout $cleanCommit -- .sf/"
|
||||
|
||||
if (-not $DryRun) {
|
||||
$stillMissingRaw = Invoke-Git @('ls-files', '--deleted', '--', '.gsd/*') -AllowFailure
|
||||
$stillMissingRaw = Invoke-Git @('ls-files', '--deleted', '--', '.sf/*') -AllowFailure
|
||||
$stillMissing = if ($stillMissingRaw) { $stillMissingRaw -split "`n" | Where-Object { $_ } } else { @() }
|
||||
if ($stillMissing.Count -eq 0) {
|
||||
Write-Ok "All .gsd/ files restored successfully."
|
||||
Write-Ok "All .sf/ files restored successfully."
|
||||
} else {
|
||||
Write-Warn "$($stillMissing.Count) file(s) still missing after restore — may need manual recovery:"
|
||||
$stillMissing | Select-Object -First 10 | ForEach-Object { Write-Host " - $_" }
|
||||
|
|
@ -331,34 +331,34 @@ if ($GsdIsSymlink) {
|
|||
Write-Section "── Step 6: Fix .gitignore ──────────────────────────────────────────"
|
||||
|
||||
if ($GsdIsSymlink) {
|
||||
# Scenario C: .gsd IS external — it should be in .gitignore. Add if missing.
|
||||
# Scenario C: .sf IS external — it should be in .gitignore. Add if missing.
|
||||
if (-not $gsdIgnoreLine) {
|
||||
Write-Info 'Adding ".gsd" to .gitignore (migration complete — .gsd/ is external state)...'
|
||||
Write-Info 'Adding ".sf" to .gitignore (migration complete — .sf/ is external state)...'
|
||||
if ($DryRun) {
|
||||
Write-Host " (dry-run) Would append: .gsd" -ForegroundColor Yellow
|
||||
Write-Host " (dry-run) Would append: .sf" -ForegroundColor Yellow
|
||||
} else {
|
||||
$appendLines = @('', '# SF external state (symlink/junction — added by recover-gsd-1364)', '.gsd')
|
||||
$appendLines = @('', '# SF external state (symlink/junction — added by recover-sf-1364)', '.sf')
|
||||
Add-Content -LiteralPath $gitignorePath -Value $appendLines -Encoding UTF8
|
||||
Write-Ok '".gsd" added to .gitignore.'
|
||||
Write-Ok '".sf" added to .gitignore.'
|
||||
}
|
||||
} else {
|
||||
Write-Ok '".gsd" already in .gitignore — correct for external-state layout.'
|
||||
Write-Ok '".sf" already in .gitignore — correct for external-state layout.'
|
||||
}
|
||||
} else {
|
||||
# Scenario A/B: .gsd is a real tracked directory — remove the bad ignore line.
|
||||
# Scenario A/B: .sf is a real tracked directory — remove the bad ignore line.
|
||||
if (-not $gsdIgnoreLine) {
|
||||
Write-Ok '".gsd" not in .gitignore — nothing to fix.'
|
||||
Write-Ok '".sf" not in .gitignore — nothing to fix.'
|
||||
} else {
|
||||
Write-Info 'Removing bare ".gsd" line from .gitignore...'
|
||||
Write-Info 'Removing bare ".sf" line from .gitignore...'
|
||||
if ($DryRun) {
|
||||
Write-Host " (dry-run) Would remove line: .gsd" -ForegroundColor Yellow
|
||||
Write-Host " (dry-run) Would remove line: .sf" -ForegroundColor Yellow
|
||||
} else {
|
||||
# Filter out the exact bare ".gsd" line — preserve all other content including
|
||||
# sub-path patterns like ".gsd/", ".gsd/activity/" and comments
|
||||
$cleaned = $gitignoreLines | Where-Object { $_.Trim() -ne '.gsd' }
|
||||
# Filter out the exact bare ".sf" line — preserve all other content including
|
||||
# sub-path patterns like ".sf/", ".sf/activity/" and comments
|
||||
$cleaned = $gitignoreLines | Where-Object { $_.Trim() -ne '.sf' }
|
||||
# Write with UTF-8 no BOM to match git's expectations
|
||||
[System.IO.File]::WriteAllLines($gitignorePath, $cleaned, [System.Text.UTF8Encoding]::new($false))
|
||||
Write-Ok '".gsd" line removed from .gitignore.'
|
||||
Write-Ok '".sf" line removed from .gitignore.'
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@@ -368,18 +368,18 @@ if ($GsdIsSymlink) {
 Write-Section "── Step 7: Stage recovery changes ──────────────────────────────────"

 if (-not $DryRun) {
-$changed = Invoke-Git @('status', '--short', '--', '.gsd/', '.gitignore') -AllowFailure
+$changed = Invoke-Git @('status', '--short', '--', '.sf/', '.gitignore') -AllowFailure
 if (-not $changed) {
 Write-Ok "No staged changes — working tree was already clean."
 } else {
 if ($GsdIsSymlink) {
 # Scenario C: git rm --cached already staged the index cleanup.
-# Only stage .gitignore — adding .gsd/ would fail (now gitignored).
+# Only stage .gitignore — adding .sf/ would fail (now gitignored).
 Invoke-Git @('add', '.gitignore') -AllowFailure | Out-Null
 } else {
-Invoke-Git @('add', '.gsd/', '.gitignore') -AllowFailure | Out-Null
+Invoke-Git @('add', '.sf/', '.gitignore') -AllowFailure | Out-Null
 }
-$stagedRaw = Invoke-Git @('diff', '--cached', '--name-only', '--', '.gsd/', '.gitignore') -AllowFailure
+$stagedRaw = Invoke-Git @('diff', '--cached', '--name-only', '--', '.sf/', '.gitignore') -AllowFailure
 $stagedFiles = if ($stagedRaw) { $stagedRaw -split "`n" | Where-Object { $_ } } else { @() }
 Write-Ok "$($stagedFiles.Count) file(s) staged and ready to commit."
 }

@@ -392,16 +392,16 @@ Write-Section "── Summary ────────────────
 if ($DryRun) {
 Write-Host "Dry-run complete. Re-run without -DryRun to apply changes." -ForegroundColor Yellow
 } else {
-$finalStagedRaw = Invoke-Git @('diff', '--cached', '--name-only', '--', '.gsd/', '.gitignore') -AllowFailure
+$finalStagedRaw = Invoke-Git @('diff', '--cached', '--name-only', '--', '.sf/', '.gitignore') -AllowFailure
 $finalStaged = if ($finalStagedRaw) { $finalStagedRaw -split "`n" | Where-Object { $_ } } else { @() }

 if ($finalStaged.Count -gt 0) {
 Write-Host "Recovery complete. Commit with:" -ForegroundColor Green
 Write-Host ""
 if ($GsdIsSymlink) {
-Write-Host ' git commit -m "fix: clean stale .gsd/ index entries after external-state migration"'
+Write-Host ' git commit -m "fix: clean stale .sf/ index entries after external-state migration"'
 } else {
-Write-Host ' git commit -m "fix: restore .gsd/ files deleted by #1364 regression"'
+Write-Host ' git commit -m "fix: restore .sf/ files deleted by #1364 regression"'
 }
 Write-Host ""
 Write-Host "Staged files:"
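Two small git details the Step 7 hunk relies on: `git status --short` prints nothing at all when the given paths are clean, which is what the emptiness check keys on, and `git diff --cached --name-only` lists exactly what is staged, so its line count is the number of files the recovery commit will touch. For example (hypothetical output shown in the comments):

    git status --short -- '.sf/' .gitignore                      # e.g. "D  .sf/plan.md"
    git diff --cached --name-only -- '.sf/' .gitignore | wc -l   # staged file count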
@@ -1,23 +1,23 @@
 #!/usr/bin/env bash
-# recover-gsd-1364.sh — Recovery script for issue #1364 (Linux / macOS)
+# recover-sf-1364.sh — Recovery script for issue #1364 (Linux / macOS)
 #
 # For Windows use the PowerShell equivalent:
-# powershell -ExecutionPolicy Bypass -File scripts\recover-gsd-1364.ps1 [-DryRun]
+# powershell -ExecutionPolicy Bypass -File scripts\recover-sf-1364.ps1 [-DryRun]
 #
 # CRITICAL DATA-LOSS BUG: SF versions 2.30.0–2.35.x unconditionally added
-# ".gsd" to .gitignore via ensureGitignore(), causing git to report all
-# tracked .gsd/ files as deleted. Fixed in v2.36.0 (PR #1367).
+# ".sf" to .gitignore via ensureGitignore(), causing git to report all
+# tracked .sf/ files as deleted. Fixed in v2.36.0 (PR #1367).
 # Three residual vectors remain on v2.36.0–v2.38.0 — see PR #1635 for details.
 #
 # This script:
 # 1. Detects whether the repo was affected
 # 2. Finds the last clean commit before the damage
-# 3. Restores all deleted .gsd/ files from that commit
-# 4. Removes the bad ".gsd" line from .gitignore (if .gsd/ is tracked)
+# 3. Restores all deleted .sf/ files from that commit
+# 4. Removes the bad ".sf" line from .gitignore (if .sf/ is tracked)
 # 5. Prints a ready-to-commit summary
 #
 # Usage:
-# bash scripts/recover-gsd-1364.sh [--dry-run]
+# bash scripts/recover-sf-1364.sh [--dry-run]
 #
 # Options:
 # --dry-run Show what would be done without making any changes

@@ -84,30 +84,30 @@ if $DRY_RUN; then
 warn "DRY-RUN mode — no changes will be made."
 fi

-# ─── Step 1: Check if .gsd/ exists ────────────────────────────────────────────
+# ─── Step 1: Check if .sf/ exists ────────────────────────────────────────────

-section "── Step 1: Detect .gsd/ directory ────────────────────────────────────"
+section "── Step 1: Detect .sf/ directory ────────────────────────────────────"

-SF_DIR="$REPO_ROOT/.gsd"
+SF_DIR="$REPO_ROOT/.sf"
 SF_IS_SYMLINK=false

 if [[ ! -e "$SF_DIR" ]]; then
-ok ".gsd/ does not exist in this repo — not affected."
+ok ".sf/ does not exist in this repo — not affected."
 exit 0
 fi

 if [[ -L "$SF_DIR" ]]; then
 # Scenario C: migration succeeded (symlink in place) but git index was never
-# cleaned — tracked .gsd/* files still appear as deleted through the symlink.
+# cleaned — tracked .sf/* files still appear as deleted through the symlink.
 SF_IS_SYMLINK=true
-warn ".gsd/ is a symlink — checking for stale git index entries (Scenario C)..."
+warn ".sf/ is a symlink — checking for stale git index entries (Scenario C)..."
 else
-info ".gsd/ is a real directory (Scenario A/B)."
+info ".sf/ is a real directory (Scenario A/B)."
 fi

-# ─── Step 2: Check if .gsd is in .gitignore ───────────────────────────────────
+# ─── Step 2: Check if .sf is in .gitignore ───────────────────────────────────

-section "── Step 2: Check .gitignore for .gsd entry ────────────────────────────"
+section "── Step 2: Check .gitignore for .sf entry ────────────────────────────"

 GITIGNORE="$REPO_ROOT/.gitignore"

@@ -116,13 +116,13 @@ if [[ ! -f "$GITIGNORE" ]] && ! $SF_IS_SYMLINK; then
 exit 0
 fi

-# Look for a bare ".gsd" line (not a comment, not a sub-path like .gsd/)
+# Look for a bare ".sf" line (not a comment, not a sub-path like .sf/)
 SF_IGNORE_LINE=""
 if [[ -f "$GITIGNORE" ]]; then
 while IFS= read -r line; do
 trimmed="${line#"${line%%[![:space:]]*}"}"
 trimmed="${trimmed%"${trimmed##*[![:space:]]}"}"
-if [[ "$trimmed" == ".gsd" ]] && [[ "${trimmed:0:1}" != "#" ]]; then
+if [[ "$trimmed" == ".sf" ]] && [[ "${trimmed:0:1}" != "#" ]]; then
 SF_IGNORE_LINE="$trimmed"
 break
 fi

@@ -130,31 +130,31 @@ if [[ -f "$GITIGNORE" ]]; then
 fi

 if $SF_IS_SYMLINK; then
-# Symlink layout: .gsd SHOULD be ignored (it's external state).
+# Symlink layout: .sf SHOULD be ignored (it's external state).
 # Missing = needs adding. Present = correct.
 if [[ -z "$SF_IGNORE_LINE" ]]; then
-warn '".gsd" missing from .gitignore — will add (migration complete, .gsd/ is external).'
+warn '".sf" missing from .gitignore — will add (migration complete, .sf/ is external).'
 else
-ok '".gsd" already in .gitignore — correct for external-state layout.'
+ok '".sf" already in .gitignore — correct for external-state layout.'
 fi
 else
-# Real-directory layout: .gsd should NOT be ignored.
+# Real-directory layout: .sf should NOT be ignored.
 if [[ -z "$SF_IGNORE_LINE" ]]; then
-ok '".gsd" not found in .gitignore — .gitignore not affected.'
+ok '".sf" not found in .gitignore — .gitignore not affected.'
 else
-warn '".gsd" found in .gitignore — this is the bad pattern from #1364.'
+warn '".sf" found in .gitignore — this is the bad pattern from #1364.'
 fi
 fi

-# ─── Step 3: Find deleted .gsd/ tracked files ─────────────────────────────────
+# ─── Step 3: Find deleted .sf/ tracked files ─────────────────────────────────

-section "── Step 3: Find deleted .gsd/ files ───────────────────────────────────"
+section "── Step 3: Find deleted .sf/ files ───────────────────────────────────"

 # Files showing as deleted in the working tree (tracked in index but missing)
-DELETED_FILES="$(git ls-files --deleted -- '.gsd/*' 2>/dev/null || true)"
+DELETED_FILES="$(git ls-files --deleted -- '.sf/*' 2>/dev/null || true)"

 # Files tracked in HEAD right now
-TRACKED_IN_HEAD="$(git ls-tree -r --name-only HEAD -- '.gsd/' 2>/dev/null || true)"
+TRACKED_IN_HEAD="$(git ls-tree -r --name-only HEAD -- '.sf/' 2>/dev/null || true)"

 if $SF_IS_SYMLINK; then
 # Scenario C: migration succeeded. Files are safe via symlink.
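For readers less familiar with the two plumbing calls in Step 3 above: `git ls-files --deleted` lists paths that are still in the index but missing from the working tree, while `git ls-tree -r --name-only HEAD` lists what the last commit actually contains. A minimal illustration in a hypothetical repo that tracks .sf/config.json (the file name is invented for the example):

    rm .sf/config.json
    git ls-files --deleted -- '.sf/*'          # -> .sf/config.json (index expects it)
    git ls-tree -r --name-only HEAD -- '.sf/'  # -> .sf/config.json (still in HEAD)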
@@ -162,49 +162,49 @@ if $SF_IS_SYMLINK; then
 if [[ -z "$TRACKED_IN_HEAD" ]] && [[ -z "$DELETED_FILES" ]]; then
 ok "No stale index entries found — symlink layout is healthy."
 if [[ -z "$SF_IGNORE_LINE" ]]; then
-info "Add .gsd to .gitignore manually to complete the migration."
+info "Add .sf to .gitignore manually to complete the migration."
 fi
 exit 0
 fi
 INDEX_COUNT="$(echo "${TRACKED_IN_HEAD:-$DELETED_FILES}" | wc -l | tr -d ' ')"
-warn "Scenario C: ${INDEX_COUNT} .gsd/ file(s) tracked in git index but inaccessible through symlink."
+warn "Scenario C: ${INDEX_COUNT} .sf/ file(s) tracked in git index but inaccessible through symlink."
 info "Files are safe in external storage — only the git index needs cleaning."
 else
 # Files deleted via a committed git rm --cached (Scenario B)
-DELETED_FROM_HISTORY="$(git log --all --diff-filter=D --name-only --format="" -- '.gsd/*' 2>/dev/null \
-| grep '^\.gsd' | sort -u || true)"
+DELETED_FROM_HISTORY="$(git log --all --diff-filter=D --name-only --format="" -- '.sf/*' 2>/dev/null \
+| grep '^\.sf' | sort -u || true)"

 if [[ -z "$TRACKED_IN_HEAD" ]] && [[ -z "$DELETED_FILES" ]] && [[ -z "$DELETED_FROM_HISTORY" ]]; then
-ok "No .gsd/ files tracked in this repo — not affected by #1364."
+ok "No .sf/ files tracked in this repo — not affected by #1364."
 if [[ -n "$SF_IGNORE_LINE" ]]; then
-warn '".gsd" is still in .gitignore but there is nothing to restore.'
+warn '".sf" is still in .gitignore but there is nothing to restore.'
 fi
 exit 0
 fi

 if [[ -n "$TRACKED_IN_HEAD" ]]; then
 TRACKED_COUNT="$(echo "$TRACKED_IN_HEAD" | wc -l | tr -d ' ')"
-info "Scenario A: ${TRACKED_COUNT} .gsd/ files still tracked in HEAD."
+info "Scenario A: ${TRACKED_COUNT} .sf/ files still tracked in HEAD."
 elif [[ -n "$DELETED_FROM_HISTORY" ]]; then
 DELETED_HIST_COUNT="$(echo "$DELETED_FROM_HISTORY" | wc -l | tr -d ' ')"
-warn "Scenario B: ${DELETED_HIST_COUNT} .gsd/ file(s) deleted in a committed change:"
+warn "Scenario B: ${DELETED_HIST_COUNT} .sf/ file(s) deleted in a committed change:"
 echo "$DELETED_FROM_HISTORY" | head -20 | while IFS= read -r f; do echo " - $f"; done
 if (( DELETED_HIST_COUNT > 20 )); then echo " ... and $((DELETED_HIST_COUNT - 20)) more"; fi
 fi

 if [[ -n "$DELETED_FILES" ]]; then
 DELETED_COUNT="$(echo "$DELETED_FILES" | wc -l | tr -d ' ')"
-warn "${DELETED_COUNT} .gsd/ file(s) missing from working tree:"
+warn "${DELETED_COUNT} .sf/ file(s) missing from working tree:"
 echo "$DELETED_FILES" | head -20 | while IFS= read -r f; do echo " - $f"; done
 if (( DELETED_COUNT > 20 )); then echo " ... and $((DELETED_COUNT - 20)) more"; fi
 fi

 if [[ -n "$TRACKED_IN_HEAD" ]] && [[ -z "$DELETED_FILES" ]]; then
 if [[ -z "$SF_IGNORE_LINE" ]]; then
-ok "No action needed — .gsd/ is tracked in HEAD and .gitignore is clean."
+ok "No action needed — .sf/ is tracked in HEAD and .gitignore is clean."
 exit 0
 fi
-info ".gsd/ is tracked in HEAD and working tree is clean — only .gitignore needs fixing."
+info ".sf/ is tracked in HEAD and working tree is clean — only .gitignore needs fixing."
 fi
 fi

@@ -219,23 +219,23 @@ RESTORABLE=""
 if $SF_IS_SYMLINK; then
 info "Scenario C: symlink layout — skipping commit history scan (no file restore needed)."
 else
-# Find the commit where ".gsd" was first added to .gitignore
-# by walking the log and finding the first commit where .gitignore contained ".gsd"
-info "Scanning git log to find when .gsd was added to .gitignore..."
+# Find the commit where ".sf" was first added to .gitignore
+# by walking the log and finding the first commit where .gitignore contained ".sf"
+info "Scanning git log to find when .sf was added to .gitignore..."

-# Strategy 1: find the first commit that added ".gsd" to .gitignore
+# Strategy 1: find the first commit that added ".sf" to .gitignore
 while IFS= read -r sha; do
 content="$(git show "${sha}:.gitignore" 2>/dev/null || true)"
-if echo "$content" | grep -qx '\.gsd' 2>/dev/null; then
+if echo "$content" | grep -qx '\.sf' 2>/dev/null; then
 DAMAGE_COMMIT="$sha"
 break
 fi
 done < <(git log --format="%H" -- .gitignore)

-# Strategy 2: if .gsd files were committed as deleted, find that commit
+# Strategy 2: if .sf files were committed as deleted, find that commit
 if [[ -z "$DAMAGE_COMMIT" ]] && [[ -n "${DELETED_FROM_HISTORY:-}" ]]; then
-info "Searching for the commit that deleted .gsd/ files from the index..."
-DAMAGE_COMMIT="$(git log --all --diff-filter=D --format="%H" -- '.gsd/*' 2>/dev/null | head -1 || true)"
+info "Searching for the commit that deleted .sf/ files from the index..."
+DAMAGE_COMMIT="$(git log --all --diff-filter=D --format="%H" -- '.sf/*' 2>/dev/null | head -1 || true)"
 fi

 if [[ -z "$DAMAGE_COMMIT" ]]; then
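The `grep -qx` in Strategy 1 above is doing the precision work: `-x` anchors the match to the whole line, so sub-path patterns such as `.sf/` or `.sf/activity/` never count as the bare entry, and `-q` suppresses output so only the exit status is used. A two-line demonstration (illustrative input, not from the script):

    printf '.sf/\nnode_modules\n' | grep -qx '\.sf'; echo $?   # 1 - no exact-line match
    printf '.sf\n' | grep -qx '\.sf'; echo $?                  # 0 - bare line found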
@@ -248,14 +248,14 @@ else
 info "Restoring from: $CLEAN_COMMIT — $CLEAN_MSG"
 fi

-# Verify the clean commit actually has .gsd/ files
-RESTORABLE="$(git ls-tree -r --name-only "$CLEAN_COMMIT" -- '.gsd/' 2>/dev/null || true)"
+# Verify the clean commit actually has .sf/ files
+RESTORABLE="$(git ls-tree -r --name-only "$CLEAN_COMMIT" -- '.sf/' 2>/dev/null || true)"
 if [[ -z "$RESTORABLE" ]]; then
-die "No .gsd/ files found in restore point $CLEAN_COMMIT — cannot recover. Check git log manually."
+die "No .sf/ files found in restore point $CLEAN_COMMIT — cannot recover. Check git log manually."
 fi

 RESTORABLE_COUNT="$(echo "$RESTORABLE" | wc -l | tr -d ' ')"
-ok "Restore point has ${RESTORABLE_COUNT} .gsd/ files available."
+ok "Restore point has ${RESTORABLE_COUNT} .sf/ files available."
 fi

 # ─── Step 5: Clean index (Scenario C) or restore deleted files (Scenario A/B) ─

@@ -263,18 +263,18 @@ fi
 if $SF_IS_SYMLINK; then
 section "── Step 5: Clean stale git index entries ───────────────────────────────"

-info "Running: git rm -r --cached --ignore-unmatch .gsd/ ..."
-run "git rm -r --cached --ignore-unmatch .gsd"
+info "Running: git rm -r --cached --ignore-unmatch .sf/ ..."
+run "git rm -r --cached --ignore-unmatch .sf"
 if ! $DRY_RUN; then
-STILL_STALE="$(git ls-files --deleted -- '.gsd/*' 2>/dev/null || true)"
+STILL_STALE="$(git ls-files --deleted -- '.sf/*' 2>/dev/null || true)"
 if [[ -z "$STILL_STALE" ]]; then
-ok "Git index cleaned — no stale .gsd/ entries remain."
+ok "Git index cleaned — no stale .sf/ entries remain."
 else
 warn "$(echo "$STILL_STALE" | wc -l | tr -d ' ') stale entr(ies) still present — may need manual cleanup."
 fi
 fi
 else
-section "── Step 5: Restore deleted .gsd/ files ────────────────────────────────"
+section "── Step 5: Restore deleted .sf/ files ────────────────────────────────"

 NEEDS_RESTORE=false
 [[ -n "$DELETED_FILES" ]] && NEEDS_RESTORE=true

@@ -283,12 +283,12 @@ else
 if ! $NEEDS_RESTORE; then
 ok "No deleted files to restore — skipping."
 else
-info "Restoring .gsd/ files from $CLEAN_COMMIT..."
-run "git checkout \"$CLEAN_COMMIT\" -- .gsd/"
+info "Restoring .sf/ files from $CLEAN_COMMIT..."
+run "git checkout \"$CLEAN_COMMIT\" -- .sf/"
 if ! $DRY_RUN; then
-STILL_MISSING="$(git ls-files --deleted -- '.gsd/*' 2>/dev/null || true)"
+STILL_MISSING="$(git ls-files --deleted -- '.sf/*' 2>/dev/null || true)"
 if [[ -z "$STILL_MISSING" ]]; then
-ok "All .gsd/ files restored successfully."
+ok "All .sf/ files restored successfully."
 else
 MISS_COUNT="$(echo "$STILL_MISSING" | wc -l | tr -d ' ')"
 warn "${MISS_COUNT} file(s) still missing after restore — may need manual recovery:"

@@ -303,33 +303,33 @@ fi
 section "── Step 6: Fix .gitignore ───────────────────────────────────────────────"

 if $SF_IS_SYMLINK; then
-# Scenario C: .gsd IS external — it should be in .gitignore. Add if missing.
+# Scenario C: .sf IS external — it should be in .gitignore. Add if missing.
 if [[ -z "$SF_IGNORE_LINE" ]]; then
-info 'Adding ".gsd" to .gitignore (migration complete — .gsd/ is external state)...'
+info 'Adding ".sf" to .gitignore (migration complete — .sf/ is external state)...'
 if $DRY_RUN; then
-echo -e " ${YELLOW}(dry-run)${RESET} Would append: .gsd"
+echo -e " ${YELLOW}(dry-run)${RESET} Would append: .sf"
 else
-printf '\n# SF external state (symlink — added by recover-gsd-1364)\n.gsd\n' >> "$GITIGNORE"
-ok '".gsd" added to .gitignore.'
+printf '\n# SF external state (symlink — added by recover-sf-1364)\n.sf\n' >> "$GITIGNORE"
+ok '".sf" added to .gitignore.'
 fi
 else
-ok '".gsd" already in .gitignore — correct for external-state layout.'
+ok '".sf" already in .gitignore — correct for external-state layout.'
 fi
 else
-# Scenario A/B: .gsd is a real tracked directory — remove the bad ignore line.
+# Scenario A/B: .sf is a real tracked directory — remove the bad ignore line.
 if [[ -z "$SF_IGNORE_LINE" ]]; then
-ok '".gsd" not in .gitignore — nothing to fix.'
+ok '".sf" not in .gitignore — nothing to fix.'
 else
-info 'Removing bare ".gsd" line from .gitignore...'
+info 'Removing bare ".sf" line from .gitignore...'
 if $DRY_RUN; then
-echo -e " ${YELLOW}(dry-run)${RESET} Would remove line: .gsd"
+echo -e " ${YELLOW}(dry-run)${RESET} Would remove line: .sf"
 else
-# Remove the exact line ".gsd" (not comments, not .gsd/ subdirs)
+# Remove the exact line ".sf" (not comments, not .sf/ subdirs)
 # Use a temp file for portability (no sed -i on all platforms)
 TMP="$(mktemp)"
-grep -v '^\.gsd$' "$GITIGNORE" > "$TMP" || true
+grep -v '^\.sf$' "$GITIGNORE" > "$TMP" || true
 mv "$TMP" "$GITIGNORE"
-ok '".gsd" line removed from .gitignore.'
+ok '".sf" line removed from .gitignore.'
 fi
 fi
 fi

@@ -339,18 +339,18 @@ fi
 section "── Step 7: Stage recovery changes ──────────────────────────────────────"

 if ! $DRY_RUN; then
-CHANGED="$(git status --short -- '.gsd/' .gitignore 2>/dev/null || true)"
+CHANGED="$(git status --short -- '.sf/' .gitignore 2>/dev/null || true)"
 if [[ -z "$CHANGED" ]]; then
 ok "No staged changes — working tree was already clean."
 else
 if $SF_IS_SYMLINK; then
 # Scenario C: the git rm --cached already staged the index cleanup.
-# Only stage .gitignore — adding .gsd/ would fail (now gitignored).
+# Only stage .gitignore — adding .sf/ would fail (now gitignored).
 git add .gitignore 2>/dev/null || true
 else
-git add .gsd/ .gitignore 2>/dev/null || true
+git add .sf/ .gitignore 2>/dev/null || true
 fi
-STAGED_COUNT="$(git diff --cached --name-only -- '.gsd/' .gitignore | wc -l | tr -d ' ')"
+STAGED_COUNT="$(git diff --cached --name-only -- '.sf/' .gitignore | wc -l | tr -d ' ')"
 ok "${STAGED_COUNT} file(s) staged and ready to commit."
 fi
 fi

@@ -362,21 +362,21 @@ section "── Summary ──────────────────
 if $DRY_RUN; then
 echo -e "${YELLOW}Dry-run complete. Re-run without --dry-run to apply changes.${RESET}"
 else
-FINAL_STAGED="$(git diff --cached --name-only -- '.gsd/' .gitignore 2>/dev/null | wc -l | tr -d ' ')"
+FINAL_STAGED="$(git diff --cached --name-only -- '.sf/' .gitignore 2>/dev/null | wc -l | tr -d ' ')"
 if (( FINAL_STAGED > 0 )); then
 echo -e "${GREEN}Recovery complete. Commit with:${RESET}"
 echo ""
 if $SF_IS_SYMLINK; then
-echo " git commit -m \"fix: clean stale .gsd/ index entries after external-state migration\""
+echo " git commit -m \"fix: clean stale .sf/ index entries after external-state migration\""
 else
-echo " git commit -m \"fix: restore .gsd/ files deleted by #1364 regression\""
+echo " git commit -m \"fix: restore .sf/ files deleted by #1364 regression\""
 fi
 echo ""
 echo "Staged files:"
-git diff --cached --name-only -- '.gsd/' .gitignore | head -20 | while IFS= read -r f; do
+git diff --cached --name-only -- '.sf/' .gitignore | head -20 | while IFS= read -r f; do
 echo " + $f"
 done
-TOTAL_STAGED="$(git diff --cached --name-only -- '.gsd/' .gitignore | wc -l | tr -d ' ')"
+TOTAL_STAGED="$(git diff --cached --name-only -- '.sf/' .gitignore | wc -l | tr -d ' ')"
 if (( TOTAL_STAGED > 20 )); then
 echo " ... and $((TOTAL_STAGED - 20)) more"
 fi
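The temp-file dance around `grep -v` in Step 6 exists because in-place sed is not portable: GNU sed accepts `sed -i` with no argument, while BSD/macOS sed requires an explicit (possibly empty) backup suffix. The equivalent one-liners, shown only for comparison with the script's approach:

    sed -i    '/^\.sf$/d' .gitignore   # GNU sed (Linux)
    sed -i '' '/^\.sf$/d' .gitignore   # BSD sed (macOS) needs the empty suffix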
@@ -1,4 +1,4 @@
-# recover-gsd-1668.ps1 — Recovery script for issue #1668 (Windows)
+# recover-sf-1668.ps1 — Recovery script for issue #1668 (Windows)
 #
 # SF v2.39.x deleted the milestone branch and worktree directory when a
 # merge failed due to the repo using `master` as its default branch (not

@@ -13,7 +13,7 @@
 # 5. Reports what was found and how to complete the merge manually
 #
 # Usage:
-# powershell -ExecutionPolicy Bypass -File scripts\recover-gsd-1668.ps1 [-MilestoneId <ID>] [-DryRun] [-Auto]
+# powershell -ExecutionPolicy Bypass -File scripts\recover-sf-1668.ps1 [-MilestoneId <ID>] [-DryRun] [-Auto]
 #
 # Options:
 # -MilestoneId <ID> SF milestone ID (e.g. M001-g2nalq).

@@ -295,9 +295,9 @@ if (-not $DryRun) {

 if (-not $DryRun) {
 Section "── Step 6: Verify recovery branch ──────────────────────────────────────"
-$fileList = & git ls-tree -r --name-only $recoveryBranch 2>/dev/null | Where-Object { $_ -notmatch '^\.gsd/' }
+$fileList = & git ls-tree -r --name-only $recoveryBranch 2>/dev/null | Where-Object { $_ -notmatch '^\.sf/' }
 $fileCount = @($fileList).Count
-Info "Files recoverable (excluding .gsd/ state files): $fileCount"
+Info "Files recoverable (excluding .sf/ state files): $fileCount"
 $fileList | Select-Object -First 30 | ForEach-Object { Write-Host " $_" }
 if ($fileCount -gt 30) { Dim " ... and $($fileCount - 30) more" }
 }
@@ -1,5 +1,5 @@
 #!/usr/bin/env bash
-# recover-gsd-1668.sh — Recovery script for issue #1668 (Linux / macOS)
+# recover-sf-1668.sh — Recovery script for issue #1668 (Linux / macOS)
 #
 # SF v2.39.x deleted the milestone branch and worktree directory when a
 # merge failed due to the repo using `master` as its default branch (not

@@ -14,7 +14,7 @@
 # 5. Reports what was found and how to complete the merge manually
 #
 # Usage:
-# bash scripts/recover-gsd-1668.sh [--milestone <ID>] [--dry-run] [--auto]
+# bash scripts/recover-sf-1668.sh [--milestone <ID>] [--dry-run] [--auto]
 #
 # Options:
 # --milestone <ID> SF milestone ID (e.g. M001-g2nalq).

@@ -398,10 +398,10 @@ fi
 if ! $DRY_RUN; then
 section "── Step 6: Verify recovery branch ──────────────────────────────────────"

-FILE_LIST="$(git ls-tree -r --name-only "${RECOVERY_BRANCH}" 2>/dev/null | grep -v '^\.gsd/' || true)"
+FILE_LIST="$(git ls-tree -r --name-only "${RECOVERY_BRANCH}" 2>/dev/null | grep -v '^\.sf/' || true)"
 FILE_COUNT="$(echo "$FILE_LIST" | grep -c . || true)"

-info "Files recoverable (excluding .gsd/ state files): ${FILE_COUNT}"
+info "Files recoverable (excluding .sf/ state files): ${FILE_COUNT}"
 echo "$FILE_LIST" | head -30 | while IFS= read -r f; do echo " $f"; done
 if [[ "$FILE_COUNT" -gt 30 ]]; then
 dim " ... and $((FILE_COUNT - 30)) more"
386
scripts/recover-sf-1364.sh
Executable file

@@ -0,0 +1,386 @@
+#!/usr/bin/env bash
+# recover-sf-1364.sh — Recovery script for issue #1364 (Linux / macOS)
+#
+# For Windows use the PowerShell equivalent:
+# powershell -ExecutionPolicy Bypass -File scripts\recover-sf-1364.ps1 [-DryRun]
+#
+# CRITICAL DATA-LOSS BUG: SF versions 2.30.0–2.35.x unconditionally added
+# ".sf" to .gitignore via ensureGitignore(), causing git to report all
+# tracked .sf/ files as deleted. Fixed in v2.36.0 (PR #1367).
+# Three residual vectors remain on v2.36.0–v2.38.0 — see PR #1635 for details.
+#
+# This script:
+# 1. Detects whether the repo was affected
+# 2. Finds the last clean commit before the damage
+# 3. Restores all deleted .sf/ files from that commit
+# 4. Removes the bad ".sf" line from .gitignore (if .sf/ is tracked)
+# 5. Prints a ready-to-commit summary
+#
+# Usage:
+# bash scripts/recover-sf-1364.sh [--dry-run]
+#
+# Options:
+# --dry-run Show what would be done without making any changes
+#
+# Requirements: git >= 2.x, bash >= 4.x
+
+set -euo pipefail
+
+# ─── Colours ──────────────────────────────────────────────────────────────────
+
+RED='\033[0;31m'
+YELLOW='\033[1;33m'
+GREEN='\033[0;32m'
+CYAN='\033[0;36m'
+BOLD='\033[1m'
+RESET='\033[0m'
+
+# ─── Args ─────────────────────────────────────────────────────────────────────
+
+DRY_RUN=false
+for arg in "$@"; do
+case "$arg" in
+--dry-run) DRY_RUN=true ;;
+*) echo "Unknown argument: $arg" >&2; exit 1 ;;
+esac
+done
+
+# ─── Helpers ──────────────────────────────────────────────────────────────────
+
+info() { echo -e "${CYAN}[info]${RESET} $*"; }
+ok() { echo -e "${GREEN}[ok]${RESET} $*"; }
+warn() { echo -e "${YELLOW}[warn]${RESET} $*"; }
+error() { echo -e "${RED}[error]${RESET} $*" >&2; }
+section() { echo -e "\n${BOLD}$*${RESET}"; }
+
+die() {
+error "$*"
+exit 1
+}
+
+# Run or print-only depending on --dry-run
+run() {
+if $DRY_RUN; then
+echo -e " ${YELLOW}(dry-run)${RESET} $*"
+else
+eval "$*"
+fi
+}
+
+# ─── Preflight ────────────────────────────────────────────────────────────────
+
+section "── Preflight ───────────────────────────────────────────────────────"
+
+# Must be run from a git repo root
+if ! git rev-parse --git-dir > /dev/null 2>&1; then
+die "Not inside a git repository. Run this from your project root."
+fi
+
+REPO_ROOT="$(git rev-parse --show-toplevel)"
+cd "$REPO_ROOT"
+info "Repo root: $REPO_ROOT"
+
+if $DRY_RUN; then
+warn "DRY-RUN mode — no changes will be made."
+fi
+
+# ─── Step 1: Check if .sf/ exists ────────────────────────────────────────────
+
+section "── Step 1: Detect .sf/ directory ────────────────────────────────────"
+
+SF_DIR="$REPO_ROOT/.sf"
+SF_IS_SYMLINK=false
+
+if [[ ! -e "$SF_DIR" ]]; then
+ok ".sf/ does not exist in this repo — not affected."
+exit 0
+fi
+
+if [[ -L "$SF_DIR" ]]; then
+# Scenario C: migration succeeded (symlink in place) but git index was never
+# cleaned — tracked .sf/* files still appear as deleted through the symlink.
+SF_IS_SYMLINK=true
+warn ".sf/ is a symlink — checking for stale git index entries (Scenario C)..."
+else
+info ".sf/ is a real directory (Scenario A/B)."
+fi
+
+# ─── Step 2: Check if .sf is in .gitignore ───────────────────────────────────
+
+section "── Step 2: Check .gitignore for .sf entry ────────────────────────────"
+
+GITIGNORE="$REPO_ROOT/.gitignore"
+
+if [[ ! -f "$GITIGNORE" ]] && ! $SF_IS_SYMLINK; then
+ok ".gitignore does not exist — not affected."
+exit 0
+fi
+
+# Look for a bare ".sf" line (not a comment, not a sub-path like .sf/)
+SF_IGNORE_LINE=""
+if [[ -f "$GITIGNORE" ]]; then
+while IFS= read -r line; do
+trimmed="${line#"${line%%[![:space:]]*}"}"
+trimmed="${trimmed%"${trimmed##*[![:space:]]}"}"
+if [[ "$trimmed" == ".sf" ]] && [[ "${trimmed:0:1}" != "#" ]]; then
+SF_IGNORE_LINE="$trimmed"
+break
+fi
+done < "$GITIGNORE"
+fi
+
+if $SF_IS_SYMLINK; then
+# Symlink layout: .sf SHOULD be ignored (it's external state).
+# Missing = needs adding. Present = correct.
+if [[ -z "$SF_IGNORE_LINE" ]]; then
+warn '".sf" missing from .gitignore — will add (migration complete, .sf/ is external).'
+else
+ok '".sf" already in .gitignore — correct for external-state layout.'
+fi
+else
+# Real-directory layout: .sf should NOT be ignored.
+if [[ -z "$SF_IGNORE_LINE" ]]; then
+ok '".sf" not found in .gitignore — .gitignore not affected.'
+else
+warn '".sf" found in .gitignore — this is the bad pattern from #1364.'
+fi
+fi
+
+# ─── Step 3: Find deleted .sf/ tracked files ─────────────────────────────────
+
+section "── Step 3: Find deleted .sf/ files ───────────────────────────────────"
+
+# Files showing as deleted in the working tree (tracked in index but missing)
+DELETED_FILES="$(git ls-files --deleted -- '.sf/*' 2>/dev/null || true)"
+
+# Files tracked in HEAD right now
+TRACKED_IN_HEAD="$(git ls-tree -r --name-only HEAD -- '.sf/' 2>/dev/null || true)"
+
+if $SF_IS_SYMLINK; then
+# Scenario C: migration succeeded. Files are safe via symlink.
+# Only index entries can be stale — no need to scan commit history.
+if [[ -z "$TRACKED_IN_HEAD" ]] && [[ -z "$DELETED_FILES" ]]; then
+ok "No stale index entries found — symlink layout is healthy."
+if [[ -z "$SF_IGNORE_LINE" ]]; then
+info "Add .sf to .gitignore manually to complete the migration."
+fi
+exit 0
+fi
+INDEX_COUNT="$(echo "${TRACKED_IN_HEAD:-$DELETED_FILES}" | wc -l | tr -d ' ')"
+warn "Scenario C: ${INDEX_COUNT} .sf/ file(s) tracked in git index but inaccessible through symlink."
+info "Files are safe in external storage — only the git index needs cleaning."
+else
+# Files deleted via a committed git rm --cached (Scenario B)
+DELETED_FROM_HISTORY="$(git log --all --diff-filter=D --name-only --format="" -- '.sf/*' 2>/dev/null \
+| grep '^\.sf' | sort -u || true)"
+
+if [[ -z "$TRACKED_IN_HEAD" ]] && [[ -z "$DELETED_FILES" ]] && [[ -z "$DELETED_FROM_HISTORY" ]]; then
+ok "No .sf/ files tracked in this repo — not affected by #1364."
+if [[ -n "$SF_IGNORE_LINE" ]]; then
+warn '".sf" is still in .gitignore but there is nothing to restore.'
+fi
+exit 0
+fi
+
+if [[ -n "$TRACKED_IN_HEAD" ]]; then
+TRACKED_COUNT="$(echo "$TRACKED_IN_HEAD" | wc -l | tr -d ' ')"
+info "Scenario A: ${TRACKED_COUNT} .sf/ files still tracked in HEAD."
+elif [[ -n "$DELETED_FROM_HISTORY" ]]; then
+DELETED_HIST_COUNT="$(echo "$DELETED_FROM_HISTORY" | wc -l | tr -d ' ')"
+warn "Scenario B: ${DELETED_HIST_COUNT} .sf/ file(s) deleted in a committed change:"
+echo "$DELETED_FROM_HISTORY" | head -20 | while IFS= read -r f; do echo " - $f"; done
+if (( DELETED_HIST_COUNT > 20 )); then echo " ... and $((DELETED_HIST_COUNT - 20)) more"; fi
+fi
+
+if [[ -n "$DELETED_FILES" ]]; then
+DELETED_COUNT="$(echo "$DELETED_FILES" | wc -l | tr -d ' ')"
+warn "${DELETED_COUNT} .sf/ file(s) missing from working tree:"
+echo "$DELETED_FILES" | head -20 | while IFS= read -r f; do echo " - $f"; done
+if (( DELETED_COUNT > 20 )); then echo " ... and $((DELETED_COUNT - 20)) more"; fi
+fi
+
+if [[ -n "$TRACKED_IN_HEAD" ]] && [[ -z "$DELETED_FILES" ]]; then
+if [[ -z "$SF_IGNORE_LINE" ]]; then
+ok "No action needed — .sf/ is tracked in HEAD and .gitignore is clean."
+exit 0
+fi
+info ".sf/ is tracked in HEAD and working tree is clean — only .gitignore needs fixing."
+fi
+fi
+
+# ─── Step 4: Find the last clean commit (Scenario A/B only) ───────────────────
+
+section "── Step 4: Find last clean commit ──────────────────────────────────────"
+
+DAMAGE_COMMIT=""
+CLEAN_COMMIT=""
+RESTORABLE=""
+
+if $SF_IS_SYMLINK; then
+info "Scenario C: symlink layout — skipping commit history scan (no file restore needed)."
+else
+# Find the commit where ".sf" was first added to .gitignore
+# by walking the log and finding the first commit where .gitignore contained ".sf"
+info "Scanning git log to find when .sf was added to .gitignore..."
+
+# Strategy 1: find the first commit that added ".sf" to .gitignore
+while IFS= read -r sha; do
+content="$(git show "${sha}:.gitignore" 2>/dev/null || true)"
+if echo "$content" | grep -qx '\.sf' 2>/dev/null; then
+DAMAGE_COMMIT="$sha"
+break
+fi
+done < <(git log --format="%H" -- .gitignore)
+
+# Strategy 2: if .sf files were committed as deleted, find that commit
+if [[ -z "$DAMAGE_COMMIT" ]] && [[ -n "${DELETED_FROM_HISTORY:-}" ]]; then
+info "Searching for the commit that deleted .sf/ files from the index..."
+DAMAGE_COMMIT="$(git log --all --diff-filter=D --format="%H" -- '.sf/*' 2>/dev/null | head -1 || true)"
+fi
+
+if [[ -z "$DAMAGE_COMMIT" ]]; then
+warn "Could not pinpoint the damage commit — falling back to HEAD."
+CLEAN_COMMIT="HEAD"
+else
+info "Damage commit: $DAMAGE_COMMIT ($(git log --format='%s' -1 "$DAMAGE_COMMIT"))"
+CLEAN_COMMIT="${DAMAGE_COMMIT}^"
+CLEAN_MSG="$(git log --format='%s' -1 "$CLEAN_COMMIT" 2>/dev/null || echo "unknown")"
+info "Restoring from: $CLEAN_COMMIT — $CLEAN_MSG"
+fi
+
+# Verify the clean commit actually has .sf/ files
+RESTORABLE="$(git ls-tree -r --name-only "$CLEAN_COMMIT" -- '.sf/' 2>/dev/null || true)"
+if [[ -z "$RESTORABLE" ]]; then
+die "No .sf/ files found in restore point $CLEAN_COMMIT — cannot recover. Check git log manually."
+fi
+
+RESTORABLE_COUNT="$(echo "$RESTORABLE" | wc -l | tr -d ' ')"
+ok "Restore point has ${RESTORABLE_COUNT} .sf/ files available."
+fi
+
+# ─── Step 5: Clean index (Scenario C) or restore deleted files (Scenario A/B) ─
+
+if $SF_IS_SYMLINK; then
+section "── Step 5: Clean stale git index entries ───────────────────────────────"
+
+info "Running: git rm -r --cached --ignore-unmatch .sf/ ..."
+run "git rm -r --cached --ignore-unmatch .sf"
+if ! $DRY_RUN; then
+STILL_STALE="$(git ls-files --deleted -- '.sf/*' 2>/dev/null || true)"
+if [[ -z "$STILL_STALE" ]]; then
+ok "Git index cleaned — no stale .sf/ entries remain."
+else
+warn "$(echo "$STILL_STALE" | wc -l | tr -d ' ') stale entr(ies) still present — may need manual cleanup."
+fi
+fi
+else
+section "── Step 5: Restore deleted .sf/ files ────────────────────────────────"
+
+NEEDS_RESTORE=false
+[[ -n "$DELETED_FILES" ]] && NEEDS_RESTORE=true
+[[ -n "${DELETED_FROM_HISTORY:-}" ]] && [[ -z "$TRACKED_IN_HEAD" ]] && NEEDS_RESTORE=true
+
+if ! $NEEDS_RESTORE; then
+ok "No deleted files to restore — skipping."
+else
+info "Restoring .sf/ files from $CLEAN_COMMIT..."
+run "git checkout \"$CLEAN_COMMIT\" -- .sf/"
+if ! $DRY_RUN; then
+STILL_MISSING="$(git ls-files --deleted -- '.sf/*' 2>/dev/null || true)"
+if [[ -z "$STILL_MISSING" ]]; then
+ok "All .sf/ files restored successfully."
+else
+MISS_COUNT="$(echo "$STILL_MISSING" | wc -l | tr -d ' ')"
+warn "${MISS_COUNT} file(s) still missing after restore — may need manual recovery:"
+echo "$STILL_MISSING" | head -10 | while IFS= read -r f; do echo " - $f"; done
+fi
+fi
+fi
+fi
+
+# ─── Step 6: Fix .gitignore ───────────────────────────────────────────────────
+
+section "── Step 6: Fix .gitignore ───────────────────────────────────────────────"
+
+if $SF_IS_SYMLINK; then
+# Scenario C: .sf IS external — it should be in .gitignore. Add if missing.
+if [[ -z "$SF_IGNORE_LINE" ]]; then
+info 'Adding ".sf" to .gitignore (migration complete — .sf/ is external state)...'
+if $DRY_RUN; then
+echo -e " ${YELLOW}(dry-run)${RESET} Would append: .sf"
+else
+printf '\n# SF external state (symlink — added by recover-sf-1364)\n.sf\n' >> "$GITIGNORE"
+ok '".sf" added to .gitignore.'
+fi
+else
+ok '".sf" already in .gitignore — correct for external-state layout.'
+fi
+else
+# Scenario A/B: .sf is a real tracked directory — remove the bad ignore line.
+if [[ -z "$SF_IGNORE_LINE" ]]; then
+ok '".sf" not in .gitignore — nothing to fix.'
+else
+info 'Removing bare ".sf" line from .gitignore...'
+if $DRY_RUN; then
+echo -e " ${YELLOW}(dry-run)${RESET} Would remove line: .sf"
+else
+# Remove the exact line ".sf" (not comments, not .sf/ subdirs)
+# Use a temp file for portability (no sed -i on all platforms)
+TMP="$(mktemp)"
+grep -v '^\.sf$' "$GITIGNORE" > "$TMP" || true
+mv "$TMP" "$GITIGNORE"
+ok '".sf" line removed from .gitignore.'
+fi
+fi
+fi
+
+# ─── Step 7: Stage changes ────────────────────────────────────────────────────
+
+section "── Step 7: Stage recovery changes ──────────────────────────────────────"
+
+if ! $DRY_RUN; then
+CHANGED="$(git status --short -- '.sf/' .gitignore 2>/dev/null || true)"
+if [[ -z "$CHANGED" ]]; then
+ok "No staged changes — working tree was already clean."
+else
+if $SF_IS_SYMLINK; then
+# Scenario C: the git rm --cached already staged the index cleanup.
+# Only stage .gitignore — adding .sf/ would fail (now gitignored).
+git add .gitignore 2>/dev/null || true
+else
+git add .sf/ .gitignore 2>/dev/null || true
+fi
+STAGED_COUNT="$(git diff --cached --name-only -- '.sf/' .gitignore | wc -l | tr -d ' ')"
+ok "${STAGED_COUNT} file(s) staged and ready to commit."
+fi
+fi
+
+# ─── Summary ──────────────────────────────────────────────────────────────────
+
+section "── Summary ──────────────────────────────────────────────────────────────"
+
+if $DRY_RUN; then
+echo -e "${YELLOW}Dry-run complete. Re-run without --dry-run to apply changes.${RESET}"
+else
+FINAL_STAGED="$(git diff --cached --name-only -- '.sf/' .gitignore 2>/dev/null | wc -l | tr -d ' ')"
+if (( FINAL_STAGED > 0 )); then
+echo -e "${GREEN}Recovery complete. Commit with:${RESET}"
+echo ""
+if $SF_IS_SYMLINK; then
+echo " git commit -m \"fix: clean stale .sf/ index entries after external-state migration\""
+else
+echo " git commit -m \"fix: restore .sf/ files deleted by #1364 regression\""
+fi
+echo ""
+echo "Staged files:"
+git diff --cached --name-only -- '.sf/' .gitignore | head -20 | while IFS= read -r f; do
+echo " + $f"
+done
+TOTAL_STAGED="$(git diff --cached --name-only -- '.sf/' .gitignore | wc -l | tr -d ' ')"
+if (( TOTAL_STAGED > 20 )); then
+echo " ... and $((TOTAL_STAGED - 20)) more"
+fi
+else
+ok "Repo is healthy — no recovery needed."
+fi
+fi
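Taken together, the intended invocation order for the script above is: preview, apply, then commit what it staged (the commands are the ones the script itself documents and prints):

    bash scripts/recover-sf-1364.sh --dry-run   # preview every action first
    bash scripts/recover-sf-1364.sh             # apply the recovery
    git commit -m "fix: restore .sf/ files deleted by #1364 regression"

For the symlink layout (Scenario C) the script prints the alternative index-cleanup commit message instead.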
446
scripts/recover-sf-1668.sh
Executable file

@@ -0,0 +1,446 @@
+#!/usr/bin/env bash
+# recover-sf-1668.sh — Recovery script for issue #1668 (Linux / macOS)
+#
+# SF v2.39.x deleted the milestone branch and worktree directory when a
+# merge failed due to the repo using `master` as its default branch (not
+# `main`). The commits were never merged — they are orphaned in the git
+# object store and can be recovered via git reflog or git fsck.
+#
+# This script:
+# 1. Searches git reflog for the deleted milestone branch (fastest path)
+# 2. Falls back to git fsck --unreachable to find orphaned commits
+# 3. Ranks candidates by recency and SF commit message patterns
+# 4. Creates a recovery branch at the identified commit
+# 5. Reports what was found and how to complete the merge manually
+#
+# Usage:
+# bash scripts/recover-sf-1668.sh [--milestone <ID>] [--dry-run] [--auto]
+#
+# Options:
+# --milestone <ID> SF milestone ID (e.g. M001-g2nalq).
+# When omitted the script scans all recent orphans.
+# --dry-run Show what would be done without making any changes.
+# --auto Pick the best candidate automatically (no prompts).
+#
+# Requirements: git >= 2.23, bash >= 4.x
+#
+# Affected versions: SF.39.x
+# Fixed in: SF.40.1 (PR #1669)
+
+set -euo pipefail
+
+# ─── Colours ──────────────────────────────────────────────────────────────────
+
+RED='\033[0;31m'
+YELLOW='\033[1;33m'
+GREEN='\033[0;32m'
+CYAN='\033[0;36m'
+BOLD='\033[1m'
+DIM='\033[2m'
+RESET='\033[0m'
+
+# ─── Args ─────────────────────────────────────────────────────────────────────
+
+DRY_RUN=false
+AUTO=false
+MILESTONE_ID=""
+
+while [[ $# -gt 0 ]]; do
+case "$1" in
+--dry-run) DRY_RUN=true; shift ;;
+--auto) AUTO=true; shift ;;
+--milestone)
+[[ $# -lt 2 ]] && { echo "Error: --milestone requires an argument" >&2; exit 1; }
+MILESTONE_ID="$2"; shift 2 ;;
+--milestone=*)
+MILESTONE_ID="${1#--milestone=}"; shift ;;
+-h|--help)
+sed -n '2,/^set -/p' "$0" | grep '^#' | sed 's/^# \{0,1\}//'
+exit 0 ;;
+*)
+echo "Unknown argument: $1" >&2
+echo "Usage: $0 [--milestone <ID>] [--dry-run] [--auto]" >&2
+exit 1 ;;
+esac
+done
+
+# ─── Helpers ──────────────────────────────────────────────────────────────────
+
+info() { echo -e "${CYAN}[info]${RESET} $*"; }
+ok() { echo -e "${GREEN}[ok]${RESET} $*"; }
+warn() { echo -e "${YELLOW}[warn]${RESET} $*"; }
+error() { echo -e "${RED}[error]${RESET} $*" >&2; }
+section() { echo -e "\n${BOLD}$*${RESET}"; }
+dim() { echo -e "${DIM}$*${RESET}"; }
+
+die() {
+error "$*"
+exit 1
+}
+
+run() {
+if $DRY_RUN; then
+echo -e " ${YELLOW}(dry-run)${RESET} $*"
+else
+eval "$*"
+fi
+}
+
+# ─── Preflight ────────────────────────────────────────────────────────────────
+
+section "── Preflight ───────────────────────────────────────────────────────────"
+
+if ! git rev-parse --git-dir > /dev/null 2>&1; then
+die "Not inside a git repository. Run this from your project root."
+fi
+
+REPO_ROOT="$(git rev-parse --show-toplevel)"
+cd "$REPO_ROOT"
+info "Repo root: $REPO_ROOT"
+
+$DRY_RUN && warn "DRY-RUN mode — no changes will be made."
+
+# ─── Step 1: Confirm the milestone branch is gone ─────────────────────────────
+
+section "── Step 1: Verify milestone branch is missing ───────────────────────────"
+
+BRANCH_PATTERN="milestone/"
+if [[ -n "$MILESTONE_ID" ]]; then
+BRANCH_PATTERN="milestone/${MILESTONE_ID}"
+fi
+
+LIVE_BRANCHES="$(git branch | grep "$BRANCH_PATTERN" 2>/dev/null | tr -d '* ' || true)"
+
+if [[ -n "$LIVE_BRANCHES" ]]; then
+ok "Found live milestone branch(es):"
+echo "$LIVE_BRANCHES" | while IFS= read -r b; do echo " $b"; done
+echo ""
+warn "The branch still exists — are you sure it was lost?"
+echo " If you want to check out existing work: git checkout ${LIVE_BRANCHES%%$'\n'*}"
+echo " To merge it manually: git checkout master && git merge --squash ${LIVE_BRANCHES%%$'\n'*}"
+echo ""
+echo "Re-run with --milestone <ID> to force scanning for a specific orphaned commit."
+if [[ -z "$MILESTONE_ID" ]]; then
+exit 0
+fi
+fi
+
+if [[ -n "$MILESTONE_ID" && -n "$LIVE_BRANCHES" ]]; then
+warn "Milestone branch milestone/${MILESTONE_ID} is still live — continuing scan anyway."
+elif [[ -n "$MILESTONE_ID" ]]; then
+info "Confirmed: milestone/${MILESTONE_ID} branch is gone."
+else
+info "No live milestone/ branches found — scanning for orphaned commits."
+fi
+
+# ─── Step 2: Search git reflog (fastest, most reliable) ───────────────────────
+
+section "── Step 2: Search git reflog for deleted branch ────────────────────────"
+
+# git reflog stores branch moves and deletions in .git/logs/refs/heads/
+# It is retained for 90 days by default (gc.reflogExpire).
+REFLOG_FOUND_SHA=""
+REFLOG_FOUND_BRANCH=""
+
+if [[ -n "$MILESTONE_ID" ]]; then
+REFLOG_PATH="${REPO_ROOT}/.git/logs/refs/heads/milestone/${MILESTONE_ID}"
+if [[ -f "$REFLOG_PATH" ]]; then
+# Last line of the reflog for this branch is the most recent tip
+REFLOG_FOUND_SHA="$(tail -1 "$REFLOG_PATH" | awk '{print $2}')"
+REFLOG_FOUND_BRANCH="milestone/${MILESTONE_ID}"
+ok "Reflog entry found for milestone/${MILESTONE_ID} — commit: ${REFLOG_FOUND_SHA:0:12}"
+else
+info "No reflog file at .git/logs/refs/heads/milestone/${MILESTONE_ID}"
+fi
+fi
+
+# Also try git reflog (in-memory index, works without the raw file)
+if [[ -z "$REFLOG_FOUND_SHA" ]]; then
+info "Scanning git reflog for milestone/ commits..."
+REFLOG_MILESTONES="$(git reflog --all --format="%H %gs" 2>/dev/null \
+| grep -E "(checkout|commit|merge).*milestone/" \
+| head -20 || true)"
+
+if [[ -n "$REFLOG_MILESTONES" ]]; then
+info "Found milestone-related reflog entries:"
+echo "$REFLOG_MILESTONES" | while IFS= read -r line; do
+dim " $line"
+done
+# Extract the most recent SHA from the most relevant entry
+if [[ -n "$MILESTONE_ID" ]]; then
+MATCH="$(echo "$REFLOG_MILESTONES" | grep "milestone/${MILESTONE_ID}" | head -1 || true)"
+else
+MATCH="$(echo "$REFLOG_MILESTONES" | head -1 || true)"
+fi
+if [[ -n "$MATCH" ]]; then
+REFLOG_FOUND_SHA="$(echo "$MATCH" | awk '{print $1}')"
+REFLOG_FOUND_BRANCH="$(echo "$MATCH" | grep -oE 'milestone/[^ ]+' | head -1 || echo "milestone/unknown")"
+fi
+else
+info "No milestone/ entries in reflog."
+fi
+fi
+
+# ─── Step 3: Fall back to git fsck if reflog didn't find it ───────────────────
+
+section "── Step 3: Scan for orphaned (unreachable) commits ───────────────────"
+
+FSCK_CANDIDATES=()
+FSCK_CANDIDATE_MSGS=()
+FSCK_CANDIDATE_DATES=()
+FSCK_CANDIDATE_FILES=()
+
+if [[ -z "$REFLOG_FOUND_SHA" ]]; then
+info "Running git fsck --unreachable (this may take a moment)..."
+
+# Collect all unreachable commit hashes
+UNREACHABLE_COMMITS="$(git fsck --unreachable --no-reflogs 2>/dev/null \
+| grep '^unreachable commit' \
+| awk '{print $3}' || true)"
+
+if [[ -z "$UNREACHABLE_COMMITS" ]]; then
+# Try without --no-reflogs as a fallback (less conservative)
+UNREACHABLE_COMMITS="$(git fsck --unreachable 2>/dev/null \
+| grep '^unreachable commit' \
+| awk '{print $3}' || true)"
+fi
+
+TOTAL="$(echo "$UNREACHABLE_COMMITS" | grep -c . || true)"
+info "Found ${TOTAL} unreachable commit object(s)."
+
+if [[ -z "$UNREACHABLE_COMMITS" || "$TOTAL" -eq 0 ]]; then
+error "No unreachable commits found."
+echo ""
+echo "This means one of:"
+echo " 1. git gc has already been run and the objects were pruned"
+echo " (objects are pruned after 14 days by default)"
+echo " 2. The commits were never written to the object store"
+echo " 3. The wrong repository is being scanned"
+echo ""
+echo "If git gc ran, the objects may be unrecoverable without a backup."
+echo "Try: git reflog --all | grep milestone"
+exit 1
+fi
+
+# Score each unreachable commit — rank by recency and SF message patterns.
+# SF milestone commits look like: "feat(M001-g2nalq): <title>"
+# Slice merges look like: "feat(M001-g2nalq/S01): <slice>"
+#
+# Performance: use a single `git log --no-walk=unsorted --stdin` call to
+# read all commit metadata in one pass instead of one `git show` per commit.
+CUTOFF="$(date -d '30 days ago' '+%s' 2>/dev/null || date -v-30d '+%s' 2>/dev/null || echo 0)"
+WEEK_AGO="$(date -d '7 days ago' '+%s' 2>/dev/null || date -v-7d '+%s' 2>/dev/null || echo 0)"
+
+# Batch-read all commits: output format per commit is:
+# HASH<TAB>UNIX_TIMESTAMP<TAB>ISO_DATE<TAB>SUBJECT
+# separated by NUL so multi-line subjects don't break parsing.
+BATCH_LOG="$(echo "$UNREACHABLE_COMMITS" \
+| git log --no-walk=unsorted --stdin --format=$'%H\t%ct\t%ci\t%s' 2>/dev/null || true)"
+
+while IFS=$'\t' read -r sha commit_ts commit_date_hr commit_msg; do
+[[ -z "$sha" ]] && continue
+[[ -z "$commit_ts" || "$commit_ts" -lt "$CUTOFF" ]] && continue
+
+# Score: milestone pattern in subject is highest signal
+SCORE=0
+if [[ -n "$MILESTONE_ID" ]] && echo "$commit_msg" | grep -qiE "(milestone[/ ])?${MILESTONE_ID}"; then
+SCORE=$((SCORE + 100))
+fi
+if echo "$commit_msg" | grep -qE '^feat\([A-Z][0-9]+'; then
+SCORE=$((SCORE + 50))
+fi
+if echo "$commit_msg" | grep -qiE 'milestone/|complete-milestone|SF|slice'; then
+SCORE=$((SCORE + 20))
+fi
+if [[ "$commit_ts" -gt "$WEEK_AGO" ]]; then
+SCORE=$((SCORE + 10))
+fi
+
+FSCK_CANDIDATES+=("$sha|$SCORE")
+FSCK_CANDIDATE_MSGS+=("$commit_msg")
+FSCK_CANDIDATE_DATES+=("$commit_date_hr")
+FSCK_CANDIDATE_FILES+=("?")
+done <<< "$BATCH_LOG"
+
+if [[ ${#FSCK_CANDIDATES[@]} -eq 0 ]]; then
+error "No recent unreachable commits found within the last 30 days."
+echo ""
+echo "Objects may have been pruned by git gc, or the issue occurred more than 30 days ago."
+echo "Try: git fsck --unreachable --no-reflogs 2>/dev/null | grep commit"
+exit 1
+fi
+
+# Sort by score descending, keep top 10
+IFS=$'\n' SORTED_CANDIDATES=($(
+for i in "${!FSCK_CANDIDATES[@]}"; do
+echo "${FSCK_CANDIDATES[$i]}|$i"
+done | sort -t'|' -k2 -rn | head -10
+))
+unset IFS
+
+info "Top candidates (scored by recency and SF message patterns):"
+echo ""
+NUM=1
+SORTED_IDXS=()
+for entry in "${SORTED_CANDIDATES[@]}"; do
+SHA="${entry%%|*}"
+IDX="${entry##*|}"
+SORTED_IDXS+=("$IDX")
+MSG="${FSCK_CANDIDATE_MSGS[$IDX]}"
+DATE="${FSCK_CANDIDATE_DATES[$IDX]}"
+FILES="${FSCK_CANDIDATE_FILES[$IDX]}"
+echo -e " ${BOLD}${NUM})${RESET} ${sha:0:12} ${GREEN}${MSG}${RESET}"
+echo -e " ${DIM}${DATE} — ${FILES}${RESET}"
+NUM=$((NUM + 1))
+done
+echo ""
+fi
+
+# ─── Step 4: Select the recovery commit ───────────────────────────────────────
+
+section "── Step 4: Select recovery commit ──────────────────────────────────────"
+
+RECOVERY_SHA=""
+RECOVERY_SOURCE=""
+
+if [[ -n "$REFLOG_FOUND_SHA" ]]; then
+RECOVERY_SHA="$REFLOG_FOUND_SHA"
+RECOVERY_SOURCE="reflog (${REFLOG_FOUND_BRANCH})"
+info "Using reflog candidate: ${RECOVERY_SHA:0:12}"
+MSG="$(git show -s --format="%s %ci" "$RECOVERY_SHA" 2>/dev/null || echo "unknown")"
+dim " $MSG"
+
+elif [[ ${#SORTED_IDXS[@]} -eq 1 ]] || $AUTO; then
+# Auto-select first (highest scored) candidate
+FIRST_ENTRY="${SORTED_CANDIDATES[0]}"
+FIRST_SHA="${FIRST_ENTRY%%|*}"
+FIRST_IDX="${FIRST_ENTRY##*|}"
+RECOVERY_SHA="$FIRST_SHA"
+RECOVERY_SOURCE="fsck (auto-selected)"
+info "Auto-selecting best candidate: ${RECOVERY_SHA:0:12}"
+
+else
+# Prompt user to select
+echo -n "Select a candidate to recover [1-${#SORTED_CANDIDATES[@]}, or q to quit]: "
+read -r SELECTION
+
+if [[ "$SELECTION" == "q" ]]; then
+info "Aborted."
+exit 0
+fi
+
+if ! [[ "$SELECTION" =~ ^[0-9]+$ ]] || \
+[[ "$SELECTION" -lt 1 ]] || \
+[[ "$SELECTION" -gt ${#SORTED_CANDIDATES[@]} ]]; then
+die "Invalid selection: $SELECTION"
+fi
+
+SEL_IDX=$((SELECTION - 1))
+SEL_ENTRY="${SORTED_CANDIDATES[$SEL_IDX]}"
+RECOVERY_SHA="${SEL_ENTRY%%|*}"
+RECOVERY_SOURCE="fsck (user-selected #${SELECTION})"
+fi
+
+if [[ -z "$RECOVERY_SHA" ]]; then
+die "Could not determine a recovery commit. See output above."
+fi
+
+ok "Recovery commit: ${RECOVERY_SHA:0:16} (source: ${RECOVERY_SOURCE})"
+
+# Show what's in this commit
+echo ""
+info "Commit details:"
+git show -s --format=" Message: %s%n Author: %an <%ae>%n Date: %ci%n Full SHA: %H" "$RECOVERY_SHA"
+echo ""
+info "Files at this commit (first 30):"
+git show --stat --format="" "$RECOVERY_SHA" 2>/dev/null | head -30
+echo ""
+
+# ─── Step 5: Create recovery branch ───────────────────────────────────────────
+
+section "── Step 5: Create recovery branch ──────────────────────────────────────"
+
+# Determine recovery branch name
+if [[ -n "$MILESTONE_ID" ]]; then
+RECOVERY_BRANCH="recovery/1668/${MILESTONE_ID}"
+elif [[ -n "$REFLOG_FOUND_BRANCH" ]]; then
+CLEAN_NAME="${REFLOG_FOUND_BRANCH//\//-}"
+RECOVERY_BRANCH="recovery/1668/${CLEAN_NAME}"
+else
+SHORT_SHA="${RECOVERY_SHA:0:8}"
+RECOVERY_BRANCH="recovery/1668/commit-${SHORT_SHA}"
+fi
+
+# Check if it already exists
+if git show-ref --verify --quiet "refs/heads/${RECOVERY_BRANCH}" 2>/dev/null; then
+warn "Branch ${RECOVERY_BRANCH} already exists."
+if ! $AUTO; then
+echo -n "Overwrite it? [y/N]: "
+read -r ANSWER
+if [[ "$ANSWER" != "y" && "$ANSWER" != "Y" ]]; then
+info "Aborted. Existing branch preserved."
+exit 0
+fi
+fi
+run "git branch -D \"${RECOVERY_BRANCH}\""
+fi
+
+run "git branch \"${RECOVERY_BRANCH}\" \"${RECOVERY_SHA}\""
+
+if ! $DRY_RUN; then
+ok "Recovery branch created: ${RECOVERY_BRANCH}"
+else
+ok "(dry-run) Would create branch: ${RECOVERY_BRANCH} → ${RECOVERY_SHA:0:12}"
+fi
+
+# ─── Step 6: Verify the recovery branch ───────────────────────────────────────
+
+if ! $DRY_RUN; then
+section "── Step 6: Verify recovery branch ──────────────────────────────────────"
+
+FILE_LIST="$(git ls-tree -r --name-only "${RECOVERY_BRANCH}" 2>/dev/null | grep -v '^\.sf/' || true)"
+FILE_COUNT="$(echo "$FILE_LIST" | grep -c . || true)"
+
+info "Files recoverable (excluding .sf/ state files): ${FILE_COUNT}"
+echo "$FILE_LIST" | head -30 | while IFS= read -r f; do echo " $f"; done
+if [[ "$FILE_COUNT" -gt 30 ]]; then
+dim " ... and $((FILE_COUNT - 30)) more"
+fi
+fi
+
+# ─── Summary ──────────────────────────────────────────────────────────────────
+
+section "── Recovery Summary ─────────────────────────────────────────────────────"
+
+if $DRY_RUN; then
+echo -e "${YELLOW}Dry-run complete. Re-run without --dry-run to apply.${RESET}"
+exit 0
+fi
+
+DEFAULT_BRANCH="$(git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's|refs/remotes/origin/||' \
+|| git for-each-ref --format='%(refname:short)' 'refs/heads/main' 'refs/heads/master' 2>/dev/null | head -1 \
+|| git branch --show-current)"
+
+echo -e "${GREEN}Recovery branch ready: ${BOLD}${RECOVERY_BRANCH}${RESET}"
+echo ""
+echo "Next steps:"
+echo ""
+echo -e " ${BOLD}1. Inspect the recovered files:${RESET}"
+echo " git checkout ${RECOVERY_BRANCH}"
+echo " ls -la"
+echo ""
+echo -e " ${BOLD}2. Verify your code is intact:${RESET}"
+echo " git log --oneline ${RECOVERY_BRANCH} | head -20"
+echo " git show --stat ${RECOVERY_BRANCH}"
+echo ""
+echo -e " ${BOLD}3. Merge to your default branch (${DEFAULT_BRANCH}):${RESET}"
+echo " git checkout ${DEFAULT_BRANCH}"
+echo " git merge --squash ${RECOVERY_BRANCH}"
+echo " git commit -m \"feat: recover milestone from #1668\""
+echo ""
+echo -e " ${BOLD}4. Clean up after verifying:${RESET}"
+echo " git branch -D ${RECOVERY_BRANCH}"
+echo ""
+echo -e "${DIM}Note: update SF to v2.40.1+ to prevent this from recurring.${RESET}"
+echo " PR: https://github.com/singularity-forge/sf-run/pull/1669"
+echo ""
|
@@ -6,7 +6,7 @@ import { join, dirname } from 'node:path'
 import { mkdirSync, mkdtempSync, rmSync, writeFileSync } from 'node:fs'
 
 function getManagedRtkPath() {
-  return join(homedir(), '.gsd', 'agent', 'bin', process.platform === 'win32' ? 'rtk.exe' : 'rtk')
+  return join(homedir(), '.sf', 'agent', 'bin', process.platform === 'win32' ? 'rtk.exe' : 'rtk')
 }
 
 function run(command, args, options = {}) {
@@ -29,7 +29,7 @@ function createFixture(projectDir) {
   mkdirSync(join(projectDir, 'src', 'components'), { recursive: true })
 
   writeFileSync(join(projectDir, 'package.json'), JSON.stringify({
-    name: 'gsd-rtk-benchmark',
+    name: 'sf-rtk-benchmark',
     version: '1.0.0',
     scripts: {
       test: 'node test.js',
@@ -114,7 +114,7 @@ function main() {
     throw new Error('RTK binary path not resolved')
   }
 
-  const workspace = mkdtempSync(join(tmpdir(), 'gsd-rtk-benchmark-'))
+  const workspace = mkdtempSync(join(tmpdir(), 'sf-rtk-benchmark-'))
   const homeDir = join(workspace, 'home')
   const projectDir = join(workspace, 'project')
   mkdirSync(homeDir, { recursive: true })

@@ -83,7 +83,7 @@ function shouldScan(file) {
     lower.startsWith('node_modules/') ||
     lower.startsWith('dist/') ||
     lower.startsWith('coverage/') ||
-    lower.startsWith('.gsd/')
+    lower.startsWith('.sf/')
   ) {
     return false;
   }

@@ -128,7 +128,7 @@ should_scan() {
   esac
   # Skip node_modules, dist, coverage
   case "$file" in
-    node_modules/*|dist/*|coverage/*|.gsd/*)
+    node_modules/*|dist/*|coverage/*|.sf/*)
       return 1 ;;
   esac
   return 0

@@ -3,7 +3,7 @@
  * Sync pkg/package.json version with the installed @mariozechner/pi-coding-agent version.
  *
  * sf-run sets PI_PACKAGE_DIR=pkg/ so that pi's config.js reads piConfig from
- * pkg/package.json (for branding: name="gsd", configDir=".gsd"). However, config.js
+ * pkg/package.json (for branding: name="sf", configDir=".sf"). However, config.js
  * also reads `version` from that same file and uses it for the update check
  * (comparing against npm registry). If pkg/package.json has a stale version,
  * pi's update banner fires even when the user is already on the latest release.

@@ -150,8 +150,8 @@ try {
   console.log('==> Verifying @sf-run/* workspace package resolution...');
   const installedRoot = join(installDir, 'node_modules', 'sf-run');
   const criticalPackages = [
-    { scope: '@gsd', name: 'pi-coding-agent' },
-    { scope: '@gsd-build', name: 'rpc-client' },
+    { scope: '@sf', name: 'pi-coding-agent' },
+    { scope: '@sf-build', name: 'rpc-client' },
   ];
   let resolutionFailed = false;
   for (const pkg of criticalPackages) {
@@ -174,7 +174,7 @@ try {
   console.log('  @sf-run/* packages are resolvable.');
 
   // --- Run the binary to confirm end-to-end resolution ---
-  console.log('==> Running installed binary (gsd -v)...');
+  console.log('==> Running installed binary (sf -v)...');
   const loaderPath = join(installedRoot, 'dist', 'loader.js');
   const bundledWorkflowMcpCliPath = join(installedRoot, 'packages', 'mcp-server', 'dist', 'cli.js');
   if (!existsSync(bundledWorkflowMcpCliPath)) {
@@ -190,13 +190,13 @@ try {
     timeout: 15000,
     maxBuffer: DEFAULT_MAX_BUFFER,
   }).trim();
-  console.log(`  gsd -v => ${versionOutput}`);
+  console.log(`  sf -v => ${versionOutput}`);
   if (!versionOutput.match(/^\d+\.\d+\.\d+/)) {
-    console.log('ERROR: gsd -v returned unexpected output (expected a version string).');
+    console.log('ERROR: sf -v returned unexpected output (expected a version string).');
     process.exit(1);
   }
 } catch (err) {
-  console.log('ERROR: Running gsd -v failed after install.');
+  console.log('ERROR: Running sf -v failed after install.');
   if (err.stdout) console.log(err.stdout);
   if (err.stderr) console.log(err.stderr);
   process.exit(1);

@@ -10,13 +10,13 @@ SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
 ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
 cd "$ROOT"
 
-# --- Guard: workspace packages must not have @gsd/* cross-deps ---
-echo "==> Checking workspace packages for @gsd/* cross-deps..."
+# --- Guard: workspace packages must not have @sf/* cross-deps ---
+echo "==> Checking workspace packages for @sf/* cross-deps..."
 CROSS_FAILED=0
 for ws_pkg in native pi-agent-core pi-ai pi-coding-agent pi-tui; do
   RESULT=$(node -e "
     const pkg = require('./packages/${ws_pkg}/package.json');
-    const deps = Object.keys(pkg.dependencies || {}).filter(d => d.startsWith('@gsd/'));
+    const deps = Object.keys(pkg.dependencies || {}).filter(d => d.startsWith('@sf/'));
     if (deps.length) { console.log(deps.join(', ')); process.exit(1); }
   " 2>&1) || {
     echo "  LEAKED in ${ws_pkg}: $RESULT"
@@ -25,11 +25,11 @@ for ws_pkg in native pi-agent-core pi-ai pi-coding-agent pi-tui; do
   }
 done
 if [ "$CROSS_FAILED" = "1" ]; then
-  echo "ERROR: Workspace packages have @gsd/* cross-dependencies."
+  echo "ERROR: Workspace packages have @sf/* cross-dependencies."
   echo "       These cause 404s when npm resolves them from the registry."
   exit 1
 fi
-echo "  No @gsd/* cross-dependencies."
+echo "  No @sf/* cross-dependencies."
 
 # --- Pack tarball ---
 echo "==> Packing tarball..."

@@ -2,7 +2,7 @@
 # S04 verification — npm pack tarball install smoke test
 # Checks: dist integrity, SF_BUNDLED_EXTENSION_PATHS, prepublishOnly,
 # npm pack dry-run, tarball install, binary exists, launch (no extension
-# errors, "gsd" branding), ~/.gsd/ untouched, non-TTY warning/no exit 1.
+# errors, "sf" branding), ~/.sf/ untouched, non-TTY warning/no exit 1.
 
 set -uo pipefail
 
@@ -10,11 +10,11 @@ FAIL=0
 pass() { echo "  PASS: $1"; }
 fail() { echo "  FAIL: $1"; FAIL=1; }
 
-SMOKE_PREFIX=/tmp/gsd-smoke-prefix
+SMOKE_PREFIX=/tmp/sf-smoke-prefix
 TARBALL=""
 
-# Capture ~/.gsd/agent/sessions/ count before any smoke runs (for Check 9)
-PI_SESSIONS_BEFORE=$(ls ~/.gsd/agent/sessions/ 2>/dev/null | wc -l | tr -d ' ')
+# Capture ~/.sf/agent/sessions/ count before any smoke runs (for Check 9)
+PI_SESSIONS_BEFORE=$(ls ~/.sf/agent/sessions/ 2>/dev/null | wc -l | tr -d ' ')
 
 cleanup() {
   rm -rf "$SMOKE_PREFIX"
@@ -106,7 +106,7 @@ echo "--- tarball pack ---"
 # ----------------------------------------------------------------
 # Note: prepublishOnly triggers a build here (expected).
 npm pack --silent 2>/dev/null || npm pack 2>&1 | tail -5
-TARBALL=$(ls glittercowboy-gsd-*.tgz 2>/dev/null | head -1 || true)
+TARBALL=$(ls glittercowboy-sf-*.tgz 2>/dev/null | head -1 || true)
 if [ -n "$TARBALL" ] && [ -f "$TARBALL" ]; then
   pass "5 — tarball produced: $TARBALL"
 else
@@ -134,10 +134,10 @@ fi
 # ----------------------------------------------------------------
 # Check 7 — binary exists at expected path after install
 # ----------------------------------------------------------------
-if [ -f "$SMOKE_PREFIX/bin/gsd" ] || [ -L "$SMOKE_PREFIX/bin/gsd" ]; then
-  pass "7 — $SMOKE_PREFIX/bin/gsd exists after install"
+if [ -f "$SMOKE_PREFIX/bin/sf" ] || [ -L "$SMOKE_PREFIX/bin/sf" ]; then
+  pass "7 — $SMOKE_PREFIX/bin/sf exists after install"
 else
-  fail "7 — $SMOKE_PREFIX/bin/gsd not found after install"
+  fail "7 — $SMOKE_PREFIX/bin/sf not found after install"
   ls -la "$SMOKE_PREFIX/bin/" 2>/dev/null || echo "  (bin/ dir does not exist)"
 fi
 
@@ -145,14 +145,14 @@ echo ""
 echo "--- launch smoke ---"
 
 # ----------------------------------------------------------------
-# Check 8 — launch: "gsd" branding + zero extension load errors
+# Check 8 — launch: "sf" branding + zero extension load errors
 # Use background kill pattern (macOS has no GNU timeout).
 # Allow 8s for extensions to load.
 # ----------------------------------------------------------------
 smoke_out=$(mktemp)
 (
   env -i HOME="$HOME" PATH="$PATH" \
-    "$SMOKE_PREFIX/bin/gsd" < /dev/null > "$smoke_out" 2>&1
+    "$SMOKE_PREFIX/bin/sf" < /dev/null > "$smoke_out" 2>&1
 ) &
 smoke_pid=$!
 sleep 8
@@ -162,7 +162,7 @@ wait "$smoke_pid" 2>/dev/null || true
 ext_errors=$(grep "Extension load error" "$smoke_out" 2>/dev/null | wc -l | tr -d ' ')
 # Strip ANSI escape codes for branding check
 plain_out=$(sed 's/\x1b\[[0-9;]*m//g' "$smoke_out" 2>/dev/null || cat "$smoke_out")
-has_gsd=$(echo "$plain_out" | grep -qi "gsd\|get shit done" && echo "yes" || echo "no")
+has_gsd=$(echo "$plain_out" | grep -qi "sf\|get shit done" && echo "yes" || echo "no")
 
 if [ "$ext_errors" -eq 0 ]; then
   pass "8a — zero Extension load errors on launch"
@@ -172,31 +172,31 @@ else
 fi
 
 if [ "$has_gsd" = "yes" ]; then
-  pass "8b — \"gsd\" / \"get shit done\" branding found in launch output"
+  pass "8b — \"sf\" / \"get shit done\" branding found in launch output"
 else
   # Fallback: check if binary self-identifies differently (not "pi")
   has_pi_only=$(echo "$plain_out" | grep -qi "^pi\b" && echo "yes" || echo "no")
   if [ "$has_pi_only" = "no" ]; then
-    pass "8b — output does not show \"pi\" branding (gsd branding likely in ANSI sequences)"
+    pass "8b — output does not show \"pi\" branding (sf branding likely in ANSI sequences)"
   else
-    fail "8b — output shows \"pi\" branding instead of \"gsd\""
+    fail "8b — output shows \"pi\" branding instead of \"sf\""
     head -5 "$smoke_out" | sed 's/^/  /'
   fi
 fi
 rm -f "$smoke_out"
 
 echo ""
-echo "--- ~/.gsd/ isolation ---"
+echo "--- ~/.sf/ isolation ---"
 
 # ----------------------------------------------------------------
-# Check 9 — ~/.gsd/ session count unchanged before/after smoke run
+# Check 9 — ~/.sf/ session count unchanged before/after smoke run
 # PI_SESSIONS_BEFORE captured at script start (before any binary invocation).
 # ----------------------------------------------------------------
-pi_after=$(ls ~/.gsd/agent/sessions/ 2>/dev/null | wc -l | tr -d ' ')
+pi_after=$(ls ~/.sf/agent/sessions/ 2>/dev/null | wc -l | tr -d ' ')
 if [ "$PI_SESSIONS_BEFORE" = "$pi_after" ]; then
-  pass "9 — ~/.gsd/agent/sessions/ count unchanged (${pi_after} sessions before and after)"
+  pass "9 — ~/.sf/agent/sessions/ count unchanged (${pi_after} sessions before and after)"
 else
-  fail "9 — ~/.gsd/agent/sessions/ count changed: was ${PI_SESSIONS_BEFORE}, now ${pi_after}"
+  fail "9 — ~/.sf/agent/sessions/ count changed: was ${PI_SESSIONS_BEFORE}, now ${pi_after}"
 fi
 
 echo ""
@@ -211,7 +211,7 @@ exit10_tmp=$(mktemp)
 echo "" > "$exit10_tmp"
 (
   env -i HOME="$HOME" PATH="$PATH" \
-    "$SMOKE_PREFIX/bin/gsd" < /dev/null > "$tmp10" 2>&1
+    "$SMOKE_PREFIX/bin/sf" < /dev/null > "$tmp10" 2>&1
   echo "$?" > "$exit10_tmp"
 ) &
 pid10=$!

@@ -8,8 +8,8 @@
  *
  * This solves the `npm link` branch-drift problem: without dist/resources/,
  * `initResources()` reads from src/resources/ which changes with git branch
- * switches, causing stale extensions to be synced to ~/.gsd/agent/ for ALL
- * projects using gsd.
+ * switches, causing stale extensions to be synced to ~/.sf/agent/ for ALL
+ * projects using sf.
  */
 
 import { watch } from 'node:fs'

215
sf-orchestrator/SKILL.md
Normal file

@@ -0,0 +1,215 @@
---
name: sf-orchestrator
description: >
  Build software products autonomously via SF headless mode. Handles the full
  lifecycle: write a spec, launch a build, poll for completion, handle blockers,
  track costs, and verify the result. Use when asked to "build something",
  "create a project", "run sf", "check build status", or any task that
  requires autonomous software development via subprocess.
metadata:
  openclaw:
    requires:
      bins: [sf]
    install:
      kind: node
      package: sf-run
      bins: [sf]
---

<objective>
You are an autonomous agent that builds software by orchestrating SF as a subprocess.
SF is a headless CLI that plans, codes, tests, and ships software from a spec.
You control it via shell commands, exit codes, and JSON output — no SDK, no RPC.
</objective>

<mental_model>
SF headless is a subprocess you launch and monitor. Think of it like a junior developer
you hand a spec to:

1. You write the spec (what to build)
2. You launch the build (`sf headless ... new-milestone --context spec.md --auto`)
3. You wait for it to finish (exit code tells you the outcome)
4. You check the result (query state, inspect files, verify deliverables)
5. If blocked, you intervene (steer, supply answers, or escalate)

The subprocess handles all planning, coding, testing, and git commits internally.
You never write application code yourself — SF does that.
</mental_model>

<critical_rules>
- **Flags before command.** `sf headless [--flags] [command] [args]`. Flags after the command are ignored.
- **Redirect stderr.** JSON output goes to stdout. Progress goes to stderr. Always `2>/dev/null` when parsing JSON.
- **Check exit codes.** 0=success, 1=error, 10=blocked (needs you), 11=cancelled.
- **Use `query` to poll.** Instant (~50ms), no LLM cost. Use it between steps instead of `auto` for status checks.
- **Budget awareness.** Track `cost.total` from query results. Set limits before launching long runs.
- **One project directory per build.** Each SF project needs its own directory with a `.sf/` folder.
</critical_rules>

<routing>
Route based on what you need to do:

**Build something from scratch:**
Read `workflows/build-from-spec.md` — write spec, init directory, launch, monitor, verify.

**Check on a running or completed build:**
Read `workflows/monitor-and-poll.md` — query state, interpret phases, handle blockers.

**Execute with fine-grained control:**
Read `workflows/step-by-step.md` — run one unit at a time with decision points.

**Understand the JSON output:**
Read `references/json-result.md` — field reference for HeadlessJsonResult.

**Pre-supply answers or secrets:**
Read `references/answer-injection.md` — answer file schema and injection mechanism.

**Look up a specific command:**
Read `references/commands.md` — full command reference with flags and examples.
</routing>

<quick_reference>

**Launch a full build (spec to working code):**
```bash
mkdir -p /tmp/my-project && cd /tmp/my-project && git init
cat > spec.md << 'EOF'
# Your Product Spec Here
Build a ...
EOF
sf headless --output-format json --context spec.md new-milestone --auto 2>/dev/null
```

**Check project state (instant, free):**
```bash
cd /path/to/project
sf headless query | jq '{phase: .state.phase, progress: .state.progress, cost: .cost.total}'
```

**Resume work on an existing project:**
```bash
cd /path/to/project
sf headless --output-format json auto 2>/dev/null
```

**Run one step at a time:**
```bash
RESULT=$(sf headless --output-format json next 2>/dev/null)
echo "$RESULT" | jq '{status: .status, phase: .phase, cost: .cost.total}'
```

</quick_reference>

<exit_codes>
| Code | Meaning | Your action |
|------|---------|-------------|
| `0` | Success | Check deliverables, verify output, report completion |
| `1` | Error or timeout | Inspect stderr, check `.sf/STATE.md`, retry or escalate |
| `10` | Blocked | Query state for blocker details, steer around it or escalate to human |
| `11` | Cancelled | Process was interrupted — resume with `--resume <sessionId>` or restart |
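
One way to act on this table from a wrapper script (a sketch; the `jq` field names follow `references/json-result.md`):

```bash
# Run one unit, then branch on the documented exit codes
RESULT=$(sf headless --output-format json next 2>/dev/null)
case $? in
  0)  echo "$RESULT" | jq '{status, phase, nextAction}' ;;
  1)  echo "error: check stderr and .sf/STATE.md" ;;
  10) sf headless query | jq '.state' ;;  # blocked: inspect, then steer or escalate
  11) echo "cancelled; resume with --resume <sessionId>" ;;
esac
```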
</exit_codes>

<project_structure>
SF creates and manages all state in `.sf/`:
```
.sf/
  PROJECT.md          # What this project is
  REQUIREMENTS.md     # Capability contract
  DECISIONS.md        # Architectural decisions (append-only)
  KNOWLEDGE.md        # Persistent project knowledge (patterns, rules, lessons)
  STATE.md            # Current phase and next action
  milestones/
    M001-xxxxx/
      M001-xxxxx-CONTEXT.md   # Scope, constraints, assumptions
      M001-xxxxx-ROADMAP.md   # Slices with checkboxes
      M001-xxxxx-SUMMARY.md   # Completion summary
      slices/S01/
        S01-PLAN.md           # Tasks
        S01-SUMMARY.md        # Slice summary
        tasks/
          T01-PLAN.md         # Individual task spec
          T01-SUMMARY.md      # Task completion summary
```

State is derived from files on disk — checkboxes in ROADMAP.md and PLAN.md are the source of truth for completion. You never need to edit these files; SF manages them, but you can read them to understand progress.
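
Since the checkboxes are the source of truth, a plain `grep` gives a rough progress count without invoking SF at all. A sketch, assuming the single-milestone layout above and standard markdown `- [ ]` / `- [x]` checkboxes:

```bash
# Count checked vs. total checkbox items on the active roadmap
ROADMAP=$(ls .sf/milestones/M*/M*-ROADMAP.md 2>/dev/null | head -1)
DONE=$(grep -c '^[[:space:]]*- \[x\]' "$ROADMAP" 2>/dev/null || true)
TOTAL=$(grep -c '^[[:space:]]*- \[' "$ROADMAP" 2>/dev/null || true)
echo "roadmap: ${DONE:-0}/${TOTAL:-0} items checked"
```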
</project_structure>

<flags>
| Flag | Description |
|------|-------------|
| `--output-format <fmt>` | `text` (default), `json` (structured result at exit), `stream-json` (JSONL events) |
| `--json` | Alias for `--output-format stream-json` — JSONL event stream to stdout |
| `--bare` | Skip CLAUDE.md, AGENTS.md, user settings, user skills. Use for CI/ecosystem runs. |
| `--resume <id>` | Resume a prior headless session by its session ID |
| `--timeout N` | Overall timeout in ms (default: 300000, use 0 to disable) |
| `--model ID` | Override LLM model |
| `--supervised` | Forward interactive UI requests to orchestrator via stdout/stdin |
| `--response-timeout N` | Timeout (ms) for orchestrator response in supervised mode (default: 30000) |
| `--answers <path>` | Pre-supply answers and secrets from JSON file |
| `--events <types>` | Filter JSONL to specific event types (comma-separated, implies `--json`) |
| `--verbose` | Show tool calls in progress output |
| `--context <path>` | Spec file path for `new-milestone` (use `-` for stdin) |
| `--context-text <text>` | Inline spec text for `new-milestone` |
| `--auto` | Chain into auto-mode after `new-milestone` |
</flags>

<answer_injection>
Pre-supply answers and secrets for fully autonomous runs:

```bash
sf headless --answers answers.json --output-format json auto 2>/dev/null
```

```json
{
  "questions": { "question_id": "selected_option" },
  "secrets": { "API_KEY": "sk-..." },
  "defaults": { "strategy": "first_option" }
}
```

- **questions** — question ID to answer (string for single-select, string[] for multi-select)
- **secrets** — env var to value, injected into child process environment
- **defaults.strategy** — `"first_option"` (default) or `"cancel"` for unmatched questions

See `references/answer-injection.md` for the full mechanism.
</answer_injection>

<event_streaming>
For real-time monitoring, use JSONL event streaming:

```bash
sf headless --json auto 2>/dev/null | while read -r line; do
  TYPE=$(echo "$line" | jq -r '.type')
  case "$TYPE" in
    tool_execution_start) echo "Tool: $(echo "$line" | jq -r '.toolName')" ;;
    extension_ui_request) echo "SF: $(echo "$line" | jq -r '.message // .title // empty')" ;;
    agent_end) echo "Session ended" ;;
  esac
done
```

Filter to specific events: `--events agent_end,execution_complete,extension_ui_request`

Available types: `agent_start`, `agent_end`, `tool_execution_start`, `tool_execution_end`,
`tool_execution_update`, `extension_ui_request`, `message_start`, `message_end`,
`message_update`, `turn_start`, `turn_end`, `cost_update`, `execution_complete`.
</event_streaming>

<all_commands>
| Command | Purpose |
|---------|---------|
| `auto` | Run all queued units until milestone complete or blocked (default) |
| `next` | Run exactly one unit, then exit |
| `query` | Instant JSON snapshot — state, next dispatch, costs (no LLM, ~50ms) |
| `new-milestone` | Create milestone from spec file |
| `dispatch <phase>` | Force specific phase (research, plan, execute, complete, reassess, uat, replan) |
| `stop` / `pause` | Control auto-mode |
| `steer <desc>` | Hard-steer plan mid-execution |
| `skip` / `undo` | Unit control |
| `queue` | Queue/reorder milestones |
| `history` | View execution history |
| `doctor` | Health check + auto-fix |
| `knowledge <rule>` | Add persistent project knowledge |

See `references/commands.md` for the complete reference.
</all_commands>

119
sf-orchestrator/references/answer-injection.md
Normal file

@@ -0,0 +1,119 @@
# Answer Injection

Pre-supply answers and secrets to eliminate interactive prompts during headless execution.

## Usage

```bash
sf headless --answers answers.json auto
sf headless --answers answers.json new-milestone --context spec.md --auto
```

The `--answers` flag takes a path to a JSON file containing pre-supplied answers and secrets.

## Answer File Schema

```json
{
  "questions": {
    "question_id": "selected_option_label",
    "multi_select_question": ["option_a", "option_b"]
  },
  "secrets": {
    "API_KEY": "sk-...",
    "DATABASE_URL": "postgres://..."
  },
  "defaults": {
    "strategy": "first_option"
  }
}
```

### Fields

| Field | Type | Description |
|-------|------|-------------|
| `questions` | `Record<string, string \| string[]>` | Map question ID → answer. String for single-select, string array for multi-select. |
| `secrets` | `Record<string, string>` | Map env var name → value. Injected into child process environment variables. |
| `defaults.strategy` | `"first_option" \| "cancel"` | Fallback for unmatched questions. Default: `"first_option"`. |

## How Secrets Work

Secrets are injected as environment variables into the SF child process:

1. The orchestrator passes the answer file via `--answers`
2. SF reads the file and sets secret values as env vars in the child process
3. When `secure_env_collect` runs inside the agent, it finds the keys already in `process.env`
4. The tool skips the interactive prompt and reports the keys as "already configured"

Secrets are never logged or included in event streams.
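
Because the values simply need to be present in the child's `process.env`, exporting them in the parent shell before launching should be equivalent in effect (an inference from the steps above, not a separately documented path):

```bash
# Same outcome as the "secrets" block, assuming normal env inheritance
export API_KEY="sk-..."
sf headless --output-format json auto 2>/dev/null
```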

## How Question Matching Works

Two-phase correlation:

1. **Observe** — SF monitors `tool_execution_start` events for `ask_user_questions` to extract question metadata (ID, options, allowMultiple)
2. **Match** — Subsequent `extension_ui_request` events are correlated to the metadata and responded to with the pre-supplied answer

Out-of-order events (`extension_ui_request` can arrive before `tool_execution_start`) are handled via a deferred processing queue with a 500ms timeout.
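
To watch the two correlated event types during a live run, the `--events` filter can narrow the stream (a debugging sketch; `toolName` is the field shown in the SKILL event-streaming example, and will be null on UI-request lines):

```bash
sf headless --events tool_execution_start,extension_ui_request auto 2>/dev/null \
  | jq -c '{type, toolName}'
```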

## Coexistence with `--supervised`

Both `--answers` and `--supervised` can be active simultaneously. Priority order:

1. The answer injector tries first
2. If no answer is found, supervised mode forwards the request to the orchestrator
3. If the orchestrator does not respond within `--response-timeout`, the auto-responder kicks in
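
All three layers can be stacked in a single invocation (flag values are illustrative; 60000 ms is just an example timeout):

```bash
# Answer file first, then supervised forwarding, then the auto-responder fallback
sf headless --answers answers.json --supervised --response-timeout 60000 auto
```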

## Without Answer Injection

Headless mode has built-in auto-responders for all prompt types:

| Prompt Type | Default Behavior |
|-------------|-----------------|
| Select | Picks first option |
| Confirm | Auto-confirms |
| Input | Empty string |
| Editor | Returns prefill or empty |

Answer injection overrides these defaults with specific answers when precision matters.

## Diagnostics

The injector tracks statistics printed in the session summary:

| Stat | Description |
|------|-------------|
| `questionsAnswered` | Questions resolved from the answer file |
| `questionsDefaulted` | Questions handled by the default strategy |
| `secretsProvided` | Number of secrets injected |

Unused question IDs and secret keys are warned about at exit.

## Example: Orchestrator with Answers

```bash
# Create answer file
cat > answers.json << 'EOF'
{
  "questions": {
    "test_framework": "vitest",
    "package_manager": "pnpm"
  },
  "secrets": {
    "OPENAI_API_KEY": "sk-...",
    "DATABASE_URL": "postgres://localhost:5432/mydb"
  },
  "defaults": {
    "strategy": "first_option"
  }
}
EOF

# Run with pre-supplied answers
sf headless --answers answers.json --output-format json auto 2>/dev/null

# Parse result
RESULT=$(sf headless --answers answers.json --output-format json next 2>/dev/null)
echo "$RESULT" | jq '{status: .status, cost: .cost.total}'
```

210
sf-orchestrator/references/commands.md
Normal file

@@ -0,0 +1,210 @@
# SF Commands Reference

All commands run as subprocesses via `sf headless [flags] [command] [args...]`.

## Global Flags

These flags apply to any `sf headless` invocation:

| Flag | Description |
|------|-------------|
| `--output-format <fmt>` | `text` (default), `json` (structured result), `stream-json` (JSONL) |
| `--json` | Alias for `--output-format stream-json` |
| `--bare` | Minimal context: skip CLAUDE.md, AGENTS.md, user settings, user skills |
| `--resume <id>` | Resume a prior headless session by ID |
| `--timeout N` | Overall timeout in ms (default: 300000) |
| `--model ID` | Override LLM model |
| `--supervised` | Forward interactive UI requests to orchestrator via stdout/stdin |
| `--response-timeout N` | Timeout for orchestrator response in supervised mode (default: 30000ms) |
| `--answers <path>` | Pre-supply answers and secrets from JSON file |
| `--events <types>` | Filter JSONL output to specific event types (comma-separated, implies `--json`) |
| `--verbose` | Show tool calls in progress output |

## Exit Codes

| Code | Meaning | When |
|------|---------|------|
| `0` | Success | Unit/milestone completed normally |
| `1` | Error or timeout | Runtime error, LLM failure, or `--timeout` exceeded |
| `10` | Blocked | Execution hit a blocker requiring human intervention |
| `11` | Cancelled | User or orchestrator cancelled the operation |

## Workflow Commands

### `auto` (default)

Autonomous mode — loop through all pending units until the milestone is complete or blocked.

```bash
sf headless --output-format json auto
```

### `next`

Step mode — execute exactly one unit (task/slice/milestone step), then exit. Recommended for orchestrators that need decision points between steps.

```bash
sf headless --output-format json next
```

### `new-milestone`

Create a milestone from a specification document.

```bash
sf headless new-milestone --context spec.md
sf headless new-milestone --context spec.md --auto
sf headless new-milestone --context-text "Build a REST API" --auto
cat spec.md | sf headless new-milestone --context - --auto
```

Extra flags:
- `--context <path>` — path to spec/PRD file (use `-` for stdin)
- `--context-text <text>` — inline specification text
- `--auto` — start auto-mode after milestone creation

### `dispatch <phase>`

Force-route to a specific phase, bypassing normal state-machine routing.

```bash
sf headless dispatch research
sf headless dispatch plan
sf headless dispatch execute
sf headless dispatch complete
sf headless dispatch reassess
sf headless dispatch uat
sf headless dispatch replan
```

### `discuss`

Start guided milestone/slice discussion.

```bash
sf headless discuss
```

### `stop`

Stop auto-mode gracefully.

```bash
sf headless stop
```

### `pause`

Pause auto-mode (preserves state, resumable).

```bash
sf headless pause
```

## State Inspection

### `query`

**Instant JSON snapshot** — state, next dispatch, parallel costs. No LLM, ~50ms. The recommended way for orchestrators to inspect state.

```bash
sf headless query
sf headless query | jq '.state.phase'
sf headless query | jq '.next'
sf headless query | jq '.cost.total'
```

### `status`

Progress dashboard (TUI overlay — useful interactively, not for parsing).

```bash
sf headless status
```

### `history`

Execution history. Supports `--cost`, `--phase`, `--model`, and `limit` arguments.

```bash
sf headless history
```

## Unit Control

### `skip`

Prevent a unit from being dispatched in auto-mode.

```bash
sf headless skip
```

### `undo`

Revert the last completed unit. Use `--force` to bypass confirmation.

```bash
sf headless undo
sf headless undo --force
```

### `steer <description>`

Hard-steer plan documents during execution. Useful for mid-course corrections.

```bash
sf headless steer "Skip the blocked dependency, use mock instead"
```

### `queue`

Queue and reorder future milestones.

```bash
sf headless queue
```

## Configuration & Health

### `doctor`

Runtime health checks with auto-fix.

```bash
sf headless doctor
```

### `prefs`

Manage preferences (global/project/status/wizard/setup).

```bash
sf headless prefs
```

### `knowledge <rule|pattern|lesson>`

Add persistent project knowledge.

```bash
sf headless knowledge "Always use UTC timestamps in API responses"
```

## Phases

SF workflows progress through these phases:

```
pre-planning → needs-discussion → discussing → researching → planning →
executing → verifying → summarizing → advancing → validating-milestone →
completing-milestone → complete
```

Special phases: `paused`, `blocked`, `replanning-slice`

## Hierarchy

- **Milestone**: Shippable version (4–10 slices, 1–4 weeks)
- **Slice**: One demoable vertical capability (1–7 tasks, 1–3 days)
- **Task**: One context-window-sized unit of work (one session)

162
sf-orchestrator/references/json-result.md
Normal file

@@ -0,0 +1,162 @@
# HeadlessJsonResult Reference

When using `--output-format json`, SF collects events silently and emits a single `HeadlessJsonResult` JSON object to stdout at process exit. This is the structured result for orchestrator decision-making.

## Obtaining the Result

```bash
# Capture the JSON result
RESULT=$(sf headless --output-format json next 2>/dev/null)
EXIT=$?

# Parse fields with jq
echo "$RESULT" | jq '.status'
echo "$RESULT" | jq '.cost.total'
echo "$RESULT" | jq '.nextAction'
```

**Important:** Progress text goes to stderr. The JSON result goes to stdout. Redirect stderr to `/dev/null` when parsing stdout.

## Field Reference

### Top-Level Fields

| Field | Type | Description |
|-------|------|-------------|
| `status` | `"success" \| "error" \| "blocked" \| "cancelled" \| "timeout"` | Final session status. Maps directly to exit codes. |
| `exitCode` | `number` | Process exit code: `0` (success), `1` (error/timeout), `10` (blocked), `11` (cancelled). |
| `sessionId` | `string \| undefined` | Session identifier. Pass to `--resume <id>` to continue this session. |
| `duration` | `number` | Session wall-clock duration in milliseconds. |
| `cost` | `CostObject` | Token usage and cost breakdown. See below. |
| `toolCalls` | `number` | Total number of tool calls made during the session. |
| `events` | `number` | Total number of events processed during the session. |
| `milestone` | `string \| undefined` | Active milestone ID (e.g. `"M001"`). |
| `phase` | `string \| undefined` | Current SF phase at session end (e.g. `"executing"`, `"blocked"`, `"complete"`). |
| `nextAction` | `string \| undefined` | Recommended next action from the state machine (e.g. `"dispatch"`, `"complete"`). |
| `artifacts` | `string[] \| undefined` | Paths to artifacts created or modified during the session. |
| `commits` | `string[] \| undefined` | Git commit SHAs created during the session. |

### Status → Exit Code Mapping

| Status | Exit Code | Constant | Meaning |
|--------|-----------|----------|---------|
| `success` | `0` | `EXIT_SUCCESS` | Unit or milestone completed successfully |
| `error` | `1` | `EXIT_ERROR` | Runtime error or LLM failure |
| `timeout` | `1` | `EXIT_ERROR` | `--timeout` deadline exceeded |
| `blocked` | `10` | `EXIT_BLOCKED` | Execution blocked — needs human intervention |
| `cancelled` | `11` | `EXIT_CANCELLED` | Cancelled by user or orchestrator |

### Cost Object

| Field | Type | Description |
|-------|------|-------------|
| `cost.total` | `number` | Total cost in USD for the session. |
| `cost.input_tokens` | `number` | Number of input tokens consumed. |
| `cost.output_tokens` | `number` | Number of output tokens generated. |
| `cost.cache_read_tokens` | `number` | Number of tokens served from prompt cache. |
| `cost.cache_write_tokens` | `number` | Number of tokens written to prompt cache. |

## Parsing Patterns

### Decision-Making After Each Step

```bash
RESULT=$(sf headless --output-format json next 2>/dev/null)
EXIT=$?

case $EXIT in
  0)
    PHASE=$(echo "$RESULT" | jq -r '.phase')
    NEXT=$(echo "$RESULT" | jq -r '.nextAction')
    echo "Success — phase: $PHASE, next: $NEXT"
    ;;
  1)
    STATUS=$(echo "$RESULT" | jq -r '.status')
    echo "Failed — status: $STATUS"
    ;;
  10)
    echo "Blocked — needs intervention"
    sf headless query | jq '.state'
    ;;
  11)
    echo "Cancelled"
    ;;
esac
```

### Cost Tracking

```bash
RESULT=$(sf headless --output-format json next 2>/dev/null)

COST=$(echo "$RESULT" | jq -r '.cost.total')
INPUT=$(echo "$RESULT" | jq -r '.cost.input_tokens')
OUTPUT=$(echo "$RESULT" | jq -r '.cost.output_tokens')

echo "Cost: \$$COST (${INPUT} in / ${OUTPUT} out)"
```

### Session Resumption

```bash
# First run — capture session ID
RESULT=$(sf headless --output-format json next 2>/dev/null)
SESSION_ID=$(echo "$RESULT" | jq -r '.sessionId')

# Resume the same session later
sf headless --resume "$SESSION_ID" --output-format json next 2>/dev/null
```

### Artifact Collection

```bash
RESULT=$(sf headless --output-format json auto 2>/dev/null)

# List files created/modified
echo "$RESULT" | jq -r '.artifacts[]?'

# List commits made
echo "$RESULT" | jq -r '.commits[]?'
```

## Example Result

```json
{
  "status": "success",
  "exitCode": 0,
  "sessionId": "abc123def456",
  "duration": 45200,
  "cost": {
    "total": 0.42,
    "input_tokens": 15000,
    "output_tokens": 3500,
    "cache_read_tokens": 8000,
    "cache_write_tokens": 2000
  },
  "toolCalls": 12,
  "events": 87,
  "milestone": "M001",
  "phase": "executing",
  "nextAction": "dispatch",
  "artifacts": [
    ".sf/milestones/M001/slices/S01/tasks/T01-SUMMARY.md"
  ],
  "commits": [
    "a1b2c3d"
  ]
}
```

## Combined with `query` for Full Picture

The `HeadlessJsonResult` captures what happened during a session. Use `query` for the current project state:

```bash
# What happened in this step?
RESULT=$(sf headless --output-format json next 2>/dev/null)
echo "$RESULT" | jq '{status, cost: .cost.total, phase}'

# What's the overall project state now?
sf headless query | jq '{phase: .state.phase, progress: .state.progress, totalCost: .cost.total}'
```

20
sf-orchestrator/templates/spec.md
Normal file

@@ -0,0 +1,20 @@
# [Product Name]

## What
[One paragraph: what this product does. Be concrete — "A CLI tool that converts CSV files to JSON" not "A data transformation solution".]

## Requirements
- [User can DO something specific and observable]
- [User can DO another specific thing]
- [System DOES something automatically]
- [Error case: system handles X gracefully]

## Technical Constraints
- Language: [Node.js / Python / Go / Rust / etc.]
- Framework: [Express / FastAPI / none / etc.]
- External dependencies: [list APIs, databases, services]
- Environment: [Node >= 22 / Python 3.12+ / etc.]

## Out of Scope
- [Explicit exclusion 1 — prevents scope creep]
- [Explicit exclusion 2]

184
sf-orchestrator/workflows/build-from-spec.md
Normal file

@@ -0,0 +1,184 @@
# Build From Spec

End-to-end workflow: take a product idea or specification, produce working software.

## Prerequisites

- `sf` CLI installed (`npm install -g sf-run`)
- A directory for the project (can be empty)
- Git initialized in the directory

## Process

### Step 1: Prepare the project directory

```bash
PROJECT_DIR="/tmp/my-project-name"
mkdir -p "$PROJECT_DIR"
cd "$PROJECT_DIR"
git init 2>/dev/null  # SF needs a git repo
```

### Step 2: Write the spec file

Write a spec file that describes what to build. More detail = better results.

```bash
cat > spec.md << 'SPEC'
# Product Name

## What
[Concrete description of what to build]

## Requirements
- [Specific, testable requirement 1]
- [Specific, testable requirement 2]
- [Specific, testable requirement 3]

## Technical Constraints
- [Language, framework, or platform requirements]
- [External services or APIs involved]
- [Performance or security requirements]

## Out of Scope
- [Things explicitly NOT included]
SPEC
```

**Spec quality matters.** Vague specs produce vague results. Include:
- What the user can DO when it's done (not what code to write)
- Technical constraints (language, framework, Node version)
- What's out of scope (prevents scope creep)

### Step 3: Launch the build

**Fire-and-forget (simplest — SF does everything):**
```bash
cd "$PROJECT_DIR"
RESULT=$(sf headless --output-format json --timeout 0 --context spec.md new-milestone --auto 2>/dev/null)
EXIT=$?
```

`--timeout 0` disables the timeout for long builds. `--auto` chains milestone creation into execution.

**With budget limit:**
```bash
# Use step-by-step mode with budget checks instead of auto
# See workflows/step-by-step.md
```

**For CI or ecosystem runs (no user config):**
```bash
RESULT=$(sf headless --bare --output-format json --timeout 0 --context spec.md new-milestone --auto 2>/dev/null)
EXIT=$?
```

### Step 4: Handle the result

```bash
case $EXIT in
  0)
    # Success — verify deliverables
    STATUS=$(echo "$RESULT" | jq -r '.status')
    COST=$(echo "$RESULT" | jq -r '.cost.total')
    COMMITS=$(echo "$RESULT" | jq -r '.commits | length')
    echo "Build complete: $STATUS, cost: \$$COST, commits: $COMMITS"

    # Inspect what was built
    sf headless query | jq '.state.progress'

    # Check the actual files
    ls -la "$PROJECT_DIR"
    ;;
  1)
    # Error — inspect and decide
    echo "Build failed"
    echo "$RESULT" | jq '{status: .status, phase: .phase}'

    # Check state for details
    sf headless query | jq '.state'
    ;;
  10)
    # Blocked — needs intervention
    echo "Build blocked — needs human input"
    sf headless query | jq '{phase: .state.phase, blockers: .state.blockers}'

    # Options: steer, supply answers, or escalate
    # See workflows/monitor-and-poll.md for blocker handling
    ;;
  11)
    echo "Build was cancelled"
    ;;
esac
```

### Step 5: Verify deliverables

After a successful build, verify the output:

```bash
cd "$PROJECT_DIR"

# Check project state
sf headless query | jq '{
  phase: .state.phase,
  progress: .state.progress,
  cost: .cost.total
}'

# Check git log for what was built
git log --oneline

# Run the project's own tests if they exist
[ -f package.json ] && npm test 2>/dev/null
[ -f Makefile ] && make test 2>/dev/null
```

## Complete Example

```bash
# 1. Setup
mkdir -p /tmp/todo-api && cd /tmp/todo-api && git init

# 2. Write spec
cat > spec.md << 'SPEC'
# Todo API

Build a REST API for managing todo items using Node.js and Express.

## Requirements
- GET /todos — list all todos
- POST /todos — create a todo (title, completed)
- PUT /todos/:id — update a todo
- DELETE /todos/:id — delete a todo
- Todos stored in-memory (no database)
- Input validation with descriptive error messages
- Health check endpoint at GET /health

## Technical Constraints
- Node.js with ESM modules
- Express framework
- No external database — in-memory array
- Port configurable via PORT env var (default 3000)

## Out of Scope
- Authentication
- Persistent storage
- Frontend
SPEC

# 3. Launch
RESULT=$(sf headless --output-format json --timeout 0 --context spec.md new-milestone --auto 2>/dev/null)
EXIT=$?

# 4. Report
if [ $EXIT -eq 0 ]; then
  COST=$(echo "$RESULT" | jq -r '.cost.total')
  echo "Build complete (\$$COST)"
  echo "Files created:"
  find . -not -path './.sf/*' -not -path './.git/*' -type f
else
  echo "Build failed (exit $EXIT)"
  echo "$RESULT" | jq .
fi
```

187
sf-orchestrator/workflows/monitor-and-poll.md
Normal file

@@ -0,0 +1,187 @@
# Monitor and Poll

Check the status of an SF project, handle blockers, track costs, and decide next actions.

## Checking Project State

The `query` command is your primary monitoring tool. It's instant (~50ms), costs nothing (no LLM), and returns the full project snapshot.

```bash
cd /path/to/project
sf headless query
```

### Key fields to inspect

```bash
# Overall status
sf headless query | jq '{
  phase: .state.phase,
  milestone: .state.activeMilestone.id,
  slice: .state.activeSlice.id,
  task: .state.activeTask.id,
  progress: .state.progress,
  cost: .cost.total
}'

# What should happen next
sf headless query | jq '.next'
# Returns: { "action": "dispatch", "unitType": "execute-task", "unitId": "M001/S01/T01" }

# Is it done?
sf headless query | jq '.state.phase'
# "complete" = done, "blocked" = needs you, anything else = in progress
```

### Phase meanings

| Phase | Meaning | Your action |
|-------|---------|-------------|
| `pre-planning` | Milestone exists, no slices planned yet | Run `auto` or `next` |
| `needs-discussion` | Ambiguities need resolution | Supply answers or run with defaults |
| `discussing` | Discussion in progress | Wait |
| `researching` | Codebase/library research | Wait |
| `planning` | Creating task plans | Wait |
| `executing` | Writing code | Wait |
| `verifying` | Checking must-haves | Wait |
| `summarizing` | Recording what happened | Wait |
| `advancing` | Moving to next task/slice | Wait |
| `evaluating-gates` | Quality checks before execution | Wait or run `next` |
| `validating-milestone` | Final milestone checks | Wait |
| `completing-milestone` | Archiving and cleanup | Wait |
| `complete` | Done | Verify deliverables |
| `blocked` | Needs human input | Handle blocker (see below) |
| `paused` | Explicitly paused | Resume with `auto` |
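
A minimal wait-until-terminal loop built on this table (a sketch; the 30-second cadence is arbitrary):

```bash
while true; do
  PHASE=$(sf headless query 2>/dev/null | jq -r '.state.phase')
  case "$PHASE" in
    complete) echo "build complete"; break ;;
    blocked)  echo "build blocked; intervene"; break ;;
    *)        sleep 30 ;;  # still in progress: poll again
  esac
done
```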
## Handling Blockers

When exit code is `10` or phase is `blocked`:

```bash
# 1. Understand the blocker
sf headless query | jq '{phase: .state.phase, blockers: .state.blockers, nextAction: .state.nextAction}'

# 2. Option A: Steer around it
sf headless steer "Skip the database dependency, use in-memory storage instead"

# 3. Option B: Supply pre-built answers
cat > fix.json << 'EOF'
{
  "questions": { "blocked_question_id": "workaround_option" },
  "defaults": { "strategy": "first_option" }
}
EOF
sf headless --answers fix.json auto

# 4. Option C: Force a specific phase
sf headless dispatch replan

# 5. Option D: Escalate to user
echo "SF build blocked. Phase: $(sf headless query | jq -r '.state.phase')"
echo "Manual intervention required."
```

## Cost Tracking

```bash
# Current cumulative cost
sf headless query | jq '.cost.total'

# Per-worker breakdown
sf headless query | jq '.cost.workers'

# After a step (from HeadlessJsonResult)
RESULT=$(sf headless --output-format json next 2>/dev/null)
echo "$RESULT" | jq '.cost'
```

### Budget enforcement pattern

```bash
MAX_BUDGET=15.00

check_budget() {
  TOTAL=$(sf headless query | jq -r '.cost.total')
  OVER=$(echo "$TOTAL > $MAX_BUDGET" | bc -l)
  if [ "$OVER" = "1" ]; then
    echo "Budget exceeded: \$$TOTAL > \$$MAX_BUDGET"
    sf headless stop
    return 1
  fi
  return 0
}
```

## Poll-and-React Loop

For agents that need to periodically check on a build:

```bash
cd /path/to/project

poll_project() {
  STATE=$(sf headless query 2>/dev/null)
  if [ -z "$STATE" ]; then
    echo "NO_PROJECT"
    return
  fi

  PHASE=$(echo "$STATE" | jq -r '.state.phase')
  COST=$(echo "$STATE" | jq -r '.cost.total')
  PROGRESS=$(echo "$STATE" | jq -r '"\(.state.progress.milestones.done)/\(.state.progress.milestones.total) milestones, \(.state.progress.tasks.done)/\(.state.progress.tasks.total) tasks"')

  case "$PHASE" in
    complete)
      echo "COMPLETE cost=\$$COST progress=$PROGRESS"
      ;;
    blocked)
      BLOCKER=$(echo "$STATE" | jq -r '.state.nextAction // "unknown"')
      echo "BLOCKED reason=$BLOCKER cost=\$$COST"
      ;;
    *)
      NEXT=$(echo "$STATE" | jq -r '.next.action // "none"')
      echo "IN_PROGRESS phase=$PHASE next=$NEXT cost=\$$COST progress=$PROGRESS"
      ;;
  esac
}
```

## Resuming Work

If a build was interrupted or you need to continue:

```bash
cd /path/to/project

# Check current state
sf headless query | jq '.state.phase'

# Resume from where it left off
sf headless --output-format json auto 2>/dev/null

# Or resume a specific session
sf headless --resume "$SESSION_ID" --output-format json auto 2>/dev/null
```

## Reading Build Artifacts

After completion, inspect what SF produced:

```bash
cd /path/to/project

# Project summary
cat .sf/PROJECT.md

# What was decided
cat .sf/DECISIONS.md

# Requirements and their validation status
cat .sf/REQUIREMENTS.md

# Milestone summary
cat .sf/milestones/M001-*/M001-*-SUMMARY.md 2>/dev/null

# Git history (SF commits per-slice)
git log --oneline
```

156
sf-orchestrator/workflows/step-by-step.md
Normal file

@@ -0,0 +1,156 @@
# Step-by-Step Execution
|
||||
|
||||
Run SF one unit at a time with decision points between steps. Use this when you need
|
||||
control over execution — budget enforcement, progress reporting, conditional logic,
|
||||
or the ability to steer mid-build.
|
||||
|
||||
## When to use this vs `auto`
|
||||
|
||||
| Approach | Use when |
|
||||
|----------|----------|
|
||||
| `auto` | You trust the build, just want the result |
|
||||
| `next` loop | You need budget checks, progress updates, or intervention points |
|
||||
|
||||
## Core Loop
|
||||
|
||||
```bash
|
||||
cd /path/to/project
|
||||
MAX_BUDGET=20.00
|
||||
TOTAL_COST=0
|
||||
|
||||
while true; do
|
||||
# Run one unit
|
||||
RESULT=$(sf headless --output-format json next 2>/dev/null)
|
||||
EXIT=$?
|
||||
|
||||
# Parse result
|
||||
STATUS=$(echo "$RESULT" | jq -r '.status')
|
||||
STEP_COST=$(echo "$RESULT" | jq -r '.cost.total')
|
||||
PHASE=$(echo "$RESULT" | jq -r '.phase // empty')
|
||||
SESSION_ID=$(echo "$RESULT" | jq -r '.sessionId // empty')
|
||||
|
||||
# Handle exit codes
|
||||
case $EXIT in
|
||||
0) ;; # success — continue
|
||||
1)
|
||||
echo "Step failed: $STATUS"
|
||||
break
|
||||
;;
|
||||
10)
|
||||
echo "Blocked — needs intervention"
|
||||
sf headless query | jq '.state'
|
||||
break
|
||||
;;
|
||||
11)
|
||||
echo "Cancelled"
|
||||
break
|
||||
;;
|
||||
esac
|
||||
|
||||
# Check if milestone complete
|
||||
CURRENT_PHASE=$(sf headless query | jq -r '.state.phase')
|
||||
if [ "$CURRENT_PHASE" = "complete" ]; then
|
||||
TOTAL_COST=$(sf headless query | jq -r '.cost.total')
|
||||
echo "Milestone complete. Total cost: \$$TOTAL_COST"
|
||||
break
|
||||
fi
|
||||
|
||||
# Budget check
|
||||
TOTAL_COST=$(sf headless query | jq -r '.cost.total')
|
||||
OVER=$(echo "$TOTAL_COST > $MAX_BUDGET" | bc -l)
|
||||
if [ "$OVER" = "1" ]; then
|
||||
echo "Budget limit (\$$MAX_BUDGET) exceeded at \$$TOTAL_COST"
|
||||
sf headless stop
|
||||
break
|
||||
fi
|
||||
|
||||
# Progress report
|
||||
PROGRESS=$(sf headless query | jq -r '"\(.state.progress.tasks.done)/\(.state.progress.tasks.total) tasks"')
|
||||
echo "Step done ($STATUS). Phase: $CURRENT_PHASE, Progress: $PROGRESS, Cost: \$$TOTAL_COST"
|
||||
done
|
||||
```

## Step-by-Step with Spec Creation

Complete flow from idea to working code with full control:

```bash
# 1. Setup
PROJECT_DIR="/tmp/my-project"
mkdir -p "$PROJECT_DIR" && cd "$PROJECT_DIR" && git init 2>/dev/null

# 2. Write spec
cat > spec.md << 'SPEC'
[Your spec here]
SPEC

# 3. Create the milestone (planning only, no execution)
RESULT=$(sf headless --output-format json --context spec.md new-milestone 2>/dev/null)
EXIT=$?

if [ $EXIT -ne 0 ]; then
  echo "Milestone creation failed"
  echo "$RESULT" | jq .
  exit 1
fi

echo "Milestone created. Starting execution..."

# 4. Execute step-by-step
STEP=0
while true; do
  STEP=$((STEP + 1))
  RESULT=$(sf headless --output-format json next 2>/dev/null)
  EXIT=$?

  [ $EXIT -ne 0 ] && break

  PHASE=$(sf headless query | jq -r '.state.phase')
  COST=$(sf headless query | jq -r '.cost.total')

  echo "Step $STEP complete. Phase: $PHASE, Cost: \$$COST"

  [ "$PHASE" = "complete" ] && break
done

echo "Build finished in $STEP steps"
```
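
When per-step control is not needed, the whole flow collapses to a single call; the headless help text in this commit lists `--auto` as the create-plus-execute variant:

```bash
# One-shot: create the milestone from the spec and chain straight into auto-mode
sf headless new-milestone --context spec.md --auto
```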

## Intervention Patterns

### Steer mid-execution

If you detect the build going in the wrong direction:

```bash
# Check what's happening
sf headless query | jq '{phase: .state.phase, task: .state.activeTask}'

# Redirect
sf headless steer "Use SQLite instead of PostgreSQL for storage"

# Continue
sf headless --output-format json next 2>/dev/null
```

### Skip a stuck unit

```bash
sf headless skip
sf headless --output-format json next 2>/dev/null
```

### Undo last completed unit

```bash
sf headless undo --force
sf headless --output-format json next 2>/dev/null
```

### Force a specific phase

```bash
sf headless dispatch replan   # Re-plan the current slice
sf headless dispatch execute  # Skip to execution
sf headless dispatch uat      # Jump to user acceptance testing
```
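
After any of these interventions, confirm where the run actually landed before looping again. A small check; the `.next.action` field comes from the `QuerySnapshot` shape this commit touches in headless-query:

```bash
# Verify the intervention took effect before resuming the loop
sf headless query | jq '{phase: .state.phase, next: .next.action}'
```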

@ -17,7 +17,7 @@ export interface CliFlags {
tools?: string[]
messages: string[]
web?: boolean
- /** Optional project path for web mode: `gsd --web <path>` or `gsd web start <path>` */
+ /** Optional project path for web mode: `sf --web <path>` or `sf web start <path>` */
webPath?: string
/** Custom host to bind web server to: `--host 0.0.0.0` */
webHost?: string

@ -26,7 +26,7 @@ export interface CliFlags {
/** Additional allowed origins for CORS: `--allowed-origins http://192.168.1.10:8080` */
webAllowedOrigins?: string[]
- /** Set by `gsd sessions` when the user picks a specific session to resume */
+ /** Set by `sf sessions` when the user picks a specific session to resume */
_selectedSessionPath?: string
}

@ -203,7 +203,7 @@ export async function runWebCliBranch(
flags: CliFlags,
deps: RunWebCliBranchDeps = {},
): Promise<RunWebCliBranchResult> {
- // Handle `gsd web stop [path|--all]` subcommand
+ // Handle `sf web stop [path|--all]` subcommand
if (flags.messages[0] === 'web' && flags.messages[1] === 'stop') {
const stderr = deps.stderr ?? process.stderr
const stopArg = flags.messages[2]

@ -221,8 +221,8 @@ export async function runWebCliBranch(
}
}

- // `gsd web [start] [path]` is an alias for `gsd --web [path]`
- // Matches: `gsd web`, `gsd web start`, `gsd web start <path>`, `gsd web <path>`
+ // `sf web [start] [path]` is an alias for `sf --web [path]`
+ // Matches: `sf web`, `sf web start`, `sf web start <path>`, `sf web <path>`
const isWebSubcommand = flags.messages[0] === 'web' && flags.messages[1] !== 'stop'
if (!flags.web && !isWebSubcommand) {
return { handled: false }

@ -232,9 +232,9 @@ export async function runWebCliBranch(
const defaultCwd = (deps.cwd ?? (() => process.cwd()))()

// Resolve project path from multiple forms:
- //   gsd --web <path>       → flags.webPath
- //   gsd web start <path>   → messages[2]
- //   gsd web <path>         → messages[1] (when not "start")
+ //   sf --web <path>        → flags.webPath
+ //   sf web start <path>    → messages[2]
+ //   sf web <path>          → messages[1] (when not "start")
let webPath = flags.webPath
if (!webPath && isWebSubcommand) {
if (flags.messages[1] === 'start') {

14
src/cli.ts
@ -378,7 +378,7 @@ if (cliFlags.messages[0] === 'headless') {
await ensureRtkBootstrap()
// Sync bundled resources before headless runs (#3471). Without this,
// headless-query loads from src/resources/ while auto/interactive load
- // from ~/.gsd/agent/extensions/ — different extension copies diverge.
+ // from ~/.sf/agent/extensions/ — different extension copies diverge.
initResources(agentDir)
const { runHeadless, parseHeadlessArgs } = await import('./headless.js')
await runHeadless(parseHeadlessArgs(process.argv))

@ -558,7 +558,7 @@ if (isPrintMode) {
markStartup('resourceLoader.reload')

// Print mode is a one-shot invocation. The --model flag is a transient
- // override (e.g. verification smoke tests like `gsd -p --model longcat/X "reply ok"`)
+ // override (e.g. verification smoke tests like `sf -p --model longcat/X "reply ok"`)
// and MUST NOT mutate the persisted defaultProvider/defaultModel in settings.json (#4251).
// We disable persistence at session construction so every downstream path
// (setModel override, fallback reapply, validation repair) is gated in one place.

@ -611,7 +611,7 @@ if (isPrintMode) {
// Activate every registered tool before starting the MCP transport.
// `session.agent.state.tools` is the *active* subset, not the full
// registry — if we expose only the active set, extension-registered
- // tools (gsd workflow, browser-tools, mac-tools, search-the-web, …)
+ // tools (sf workflow, browser-tools, mac-tools, search-the-web, …)
// are invisible to MCP clients. Flipping the active set to every
// known tool name makes `state.tools` mirror the full registry for
// this MCP session, which is what an external client expects.

@ -635,7 +635,7 @@ if (isPrintMode) {
}

// ---------------------------------------------------------------------------
- // Worktree subcommand — `gsd worktree <list|merge|clean|remove>`
+ // Worktree subcommand — `sf worktree <list|merge|clean|remove>`
// ---------------------------------------------------------------------------
if (cliFlags.messages[0] === 'worktree' || cliFlags.messages[0] === 'wt') {
const { handleList, handleMerge, handleClean, handleRemove } = await import('./worktree-cli.js')

@ -676,8 +676,8 @@ if (!cliFlags.worktree && !isPrintMode) {
}

// ---------------------------------------------------------------------------
- // Auto-redirect: `gsd auto` with piped stdout → headless mode (#2732)
- // When stdout is not a TTY (e.g. `gsd auto | cat`, `gsd auto > file`),
+ // Auto-redirect: `sf auto` with piped stdout → headless mode (#2732)
+ // When stdout is not a TTY (e.g. `sf auto | cat`, `sf auto > file`),
// the TUI cannot render and the process hangs. Redirect to headless mode
// which handles non-interactive output gracefully.
// ---------------------------------------------------------------------------

@ -698,7 +698,7 @@ const cwd = process.cwd()
const projectSessionsDir = getProjectSessionsDir(cwd)

// Migrate legacy flat sessions: before per-directory scoping, all .jsonl session
- // files lived directly in ~/.gsd/sessions/. Move them into the correct per-cwd
+ // files lived directly in ~/.sf/sessions/. Move them into the correct per-cwd
// subdirectory so /resume can find them.
migrateLegacyFlatSessions(sessionsDir, projectSessionsDir)
@ -3,7 +3,7 @@
*
* Extensions without manifests always load (backwards compatible).
* A fresh install has an empty registry — all extensions enabled by default.
- * The only way an extension stops loading is an explicit `gsd extensions disable <id>`.
+ * The only way an extension stops loading is an explicit `sf extensions disable <id>`.
*/

import { existsSync, mkdirSync, readFileSync, readdirSync, renameSync, writeFileSync } from "node:fs";

@ -2,7 +2,7 @@
* Headless Context Loading — stdin reading, file context, and project bootstrapping
*
* Handles loading context from files or stdin for headless new-milestone,
- * and bootstraps the .gsd/ directory structure when needed.
+ * and bootstraps the .sf/ directory structure when needed.
*/

import { readFileSync, mkdirSync } from 'node:fs'

@ -49,11 +49,11 @@ export async function loadContext(options: ContextOptions): Promise<string> {
// ---------------------------------------------------------------------------

/**
- * Bootstrap .gsd/ directory structure for headless new-milestone.
+ * Bootstrap .sf/ directory structure for headless new-milestone.
* Mirrors the bootstrap logic from guided-flow.ts showSmartEntry().
*/
export function bootstrapGsdProject(basePath: string): void {
- const gsdDir = join(basePath, '.gsd')
+ const gsdDir = join(basePath, '.sf')
mkdirSync(join(gsdDir, 'milestones'), { recursive: true })
mkdirSync(join(gsdDir, 'runtime'), { recursive: true })
}

@ -1,5 +1,5 @@
/**
- * Headless Query — `gsd headless query`
+ * Headless Query — `sf headless query`
*
* Single read-only command that returns the full project snapshot as JSON
* to stdout, without spawning an LLM session. Instant (~50ms).
@ -18,20 +18,20 @@ import { createJiti } from '@mariozechner/jiti'
import { fileURLToPath } from 'node:url'
import { join } from 'node:path'
import { homedir } from 'node:os'
- import type { GSDState } from './resources/extensions/sf/types.js'
+ import type { SFState } from './resources/extensions/sf/types.js'
import { resolveBundledSourceResource } from './bundled-resource-path.js'

const jiti = createJiti(fileURLToPath(import.meta.url), { interopDefault: true, debug: false })
// Resolve extensions from the synced agent directory so headless-query
// loads the same extension copy as interactive/auto modes (#3471).
// Falls back to bundled source for source-tree dev workflows.
- const agentExtensionsDir = join(process.env.SF_AGENT_DIR || join(homedir(), '.gsd', 'agent'), 'extensions', 'gsd')
+ const agentExtensionsDir = join(process.env.SF_AGENT_DIR || join(homedir(), '.sf', 'agent'), 'extensions', 'sf')
const { existsSync } = await import('node:fs')
const useAgentDir = existsSync(join(agentExtensionsDir, 'state.ts'))
const gsdExtensionPath = (...segments: string[]) =>
useAgentDir
? join(agentExtensionsDir, ...segments)
- : resolveBundledSourceResource(import.meta.url, 'extensions', 'gsd', ...segments)
+ : resolveBundledSourceResource(import.meta.url, 'extensions', 'sf', ...segments)

async function loadExtensionModules() {
const stateModule = await jiti.import(gsdExtensionPath('state.ts'), {}) as any

@ -41,7 +41,7 @@ async function loadExtensionModules() {
const autoStartModule = await jiti.import(gsdExtensionPath('auto-start.ts'), {}) as any
return {
openProjectDbIfPresent: autoStartModule.openProjectDbIfPresent as (basePath: string) => Promise<void>,
- deriveState: stateModule.deriveState as (basePath: string) => Promise<GSDState>,
+ deriveState: stateModule.deriveState as (basePath: string) => Promise<SFState>,
resolveDispatch: dispatchModule.resolveDispatch as (opts: any) => Promise<any>,
readAllSessionStatuses: sessionModule.readAllSessionStatuses as (basePath: string) => any[],
loadEffectiveGSDPreferences: prefsModule.loadEffectiveGSDPreferences as () => any,

@ -51,7 +51,7 @@ async function loadExtensionModules() {
// ─── Types ──────────────────────────────────────────────────────────────────

export interface QuerySnapshot {
- state: GSDState
+ state: SFState
next: {
action: 'dispatch' | 'stop' | 'skip'
unitType?: string
@ -1,7 +1,7 @@
/**
- * Headless Orchestrator — `gsd headless`
+ * Headless Orchestrator — `sf headless`
*
- * Runs any /gsd subcommand without a TUI by spawning a child process in
+ * Runs any /sf subcommand without a TUI by spawning a child process in
* RPC mode, auto-responding to extension UI requests, and streaming
* progress to stderr.
*

@ -289,7 +289,7 @@ async function runHeadlessOnce(options: HeadlessOptions, restartCount: number):
}
}

- // For new-milestone, load context and bootstrap .gsd/ before spawning RPC child
+ // For new-milestone, load context and bootstrap .sf/ before spawning RPC child
if (isNewMilestone) {
if (!options.context && !options.contextText) {
process.stderr.write('[headless] Error: new-milestone requires --context <file> or --context-text <text>\n')

@ -304,11 +304,11 @@ async function runHeadlessOnce(options: HeadlessOptions, restartCount: number):
process.exit(1)
}

- // Bootstrap .gsd/ if needed
- const gsdDir = join(process.cwd(), '.gsd')
+ // Bootstrap .sf/ if needed
+ const gsdDir = join(process.cwd(), '.sf')
if (!existsSync(gsdDir)) {
if (!options.json) {
- process.stderr.write('[headless] Bootstrapping .gsd/ project structure...\n')
+ process.stderr.write('[headless] Bootstrapping .sf/ project structure...\n')
}
bootstrapGsdProject(process.cwd())
}

@ -319,11 +319,11 @@ async function runHeadlessOnce(options: HeadlessOptions, restartCount: number):
writeFileSync(join(runtimeDir, 'headless-context.md'), contextContent, 'utf-8')
}

- // Validate .gsd/ directory (skip for new-milestone since we just bootstrapped it)
- const gsdDir = join(process.cwd(), '.gsd')
+ // Validate .sf/ directory (skip for new-milestone since we just bootstrapped it)
+ const gsdDir = join(process.cwd(), '.sf')
if (!isNewMilestone && !existsSync(gsdDir)) {
- process.stderr.write('[headless] Error: No .gsd/ directory found in current directory.\n')
- process.stderr.write("[headless] Run 'gsd' interactively first to initialize a project.\n")
+ process.stderr.write('[headless] Error: No .sf/ directory found in current directory.\n')
+ process.stderr.write("[headless] Run 'sf' interactively first to initialize a project.\n")
process.exit(1)
}

@ -337,7 +337,7 @@ async function runHeadlessOnce(options: HeadlessOptions, restartCount: number):
// Resolve CLI path for the child process
const cliPath = process.env.SF_BIN_PATH || process.argv[1]
if (!cliPath) {
- process.stderr.write('[headless] Error: Cannot determine CLI path. Set SF_BIN_PATH or run via gsd.\n')
+ process.stderr.write('[headless] Error: Cannot determine CLI path. Set SF_BIN_PATH or run via sf.\n')
process.exit(1)
}

@ -759,7 +759,7 @@ async function runHeadlessOnce(options: HeadlessOptions, restartCount: number):
// v2 protocol negotiation — attempt init for structured completion events
let v2Enabled = false
try {
- await client.init({ clientId: 'gsd-headless' })
+ await client.init({ clientId: 'sf-headless' })
v2Enabled = true
} catch {
process.stderr.write('[headless] Warning: v2 init failed, falling back to v1 string-matching\n')

@ -829,11 +829,11 @@ async function runHeadlessOnce(options: HeadlessOptions, restartCount: number):
}

if (!options.json) {
- process.stderr.write(`[headless] Running /gsd ${options.command}${options.commandArgs.length > 0 ? ' ' + options.commandArgs.join(' ') : ''}...\n`)
+ process.stderr.write(`[headless] Running /sf ${options.command}${options.commandArgs.length > 0 ? ' ' + options.commandArgs.join(' ') : ''}...\n`)
}

// Send the command
- const command = `/gsd ${options.command}${options.commandArgs.length > 0 ? ' ' + options.commandArgs.join(' ') : ''}`
+ const command = `/sf ${options.command}${options.commandArgs.length > 0 ? ' ' + options.commandArgs.join(' ') : ''}`
try {
await client.prompt(command)
} catch (err) {
@ -846,7 +846,7 @@ async function runHeadlessOnce(options: HeadlessOptions, restartCount: number):
await completionPromise
}

- // Auto-mode chaining: if --auto and milestone creation succeeded, send /gsd auto
+ // Auto-mode chaining: if --auto and milestone creation succeeded, send /sf auto
if (isNewMilestone && options.auto && milestoneReady && !blocked && exitCode === EXIT_SUCCESS) {
if (!options.json) {
process.stderr.write('[headless] Milestone ready — chaining into auto-mode...\n')

@ -863,7 +863,7 @@ async function runHeadlessOnce(options: HeadlessOptions, restartCount: number):
})

try {
- await client.prompt('/gsd auto')
+ await client.prompt('/sf auto')
} catch (err) {
process.stderr.write(`[headless] Error: Failed to start auto-mode: ${err instanceof Error ? err.message : String(err)}\n`)
exitCode = EXIT_ERROR

102
src/help-text.ts
@ -1,6 +1,6 @@
const SUBCOMMAND_HELP: Record<string, string> = {
config: [
- 'Usage: gsd config',
+ 'Usage: sf config',
'',
'Re-run the interactive setup wizard to configure:',
' - LLM provider (Anthropic, OpenAI, Google, OpenRouter, Ollama, LM Studio, etc.)',

@ -15,7 +15,7 @@ const SUBCOMMAND_HELP: Record<string, string> = {
].join('\n'),

update: [
- 'Usage: gsd update',
+ 'Usage: sf update',
'',
'Update SF to the latest version.',
'',

@ -23,7 +23,7 @@ const SUBCOMMAND_HELP: Record<string, string> = {
].join('\n'),

sessions: [
- 'Usage: gsd sessions',
+ 'Usage: sf sessions',
'',
'List all saved sessions for the current directory and interactively',
'pick one to resume. Shows date, message count, and a preview of the',

@ -36,31 +36,31 @@ const SUBCOMMAND_HELP: Record<string, string> = {
].join('\n'),

install: [
- 'Usage: gsd install <source> [-l, --local]',
+ 'Usage: sf install <source> [-l, --local]',
'',
'Install a package/extension source and run post-install validation (dependency checks, setup).',
'',
'Examples:',
- ' gsd install npm:@foo/bar',
- ' gsd install git:github.com/user/repo',
- ' gsd install https://github.com/user/repo',
- ' gsd install ./local/path',
+ ' sf install npm:@foo/bar',
+ ' sf install git:github.com/user/repo',
+ ' sf install https://github.com/user/repo',
+ ' sf install ./local/path',
].join('\n'),

remove: [
- 'Usage: gsd remove <source> [-l, --local]',
+ 'Usage: sf remove <source> [-l, --local]',
'',
'Remove an installed package source and its settings entry.',
].join('\n'),

list: [
- 'Usage: gsd list',
+ 'Usage: sf list',
'',
'List installed package sources from user and project settings.',
].join('\n'),

worktree: [
- 'Usage: gsd worktree <command> [args]',
+ 'Usage: sf worktree <command> [args]',
'',
'Manage isolated git worktrees for parallel work streams.',
'',

@ -71,35 +71,35 @@ const SUBCOMMAND_HELP: Record<string, string> = {
' remove <name> Remove a worktree (--force to remove with unmerged changes)',
'',
'The -w flag creates/resumes worktrees for interactive sessions:',
- ' gsd -w Auto-name a new worktree, or resume the only active one',
- ' gsd -w my-feature Create or resume a named worktree',
+ ' sf -w Auto-name a new worktree, or resume the only active one',
+ ' sf -w my-feature Create or resume a named worktree',
'',
'Lifecycle:',
- ' 1. gsd -w Create worktree, start session inside it',
+ ' 1. sf -w Create worktree, start session inside it',
' 2. (work normally) All changes happen on the worktree branch',
' 3. Ctrl+C Exit — dirty work is auto-committed',
- ' 4. gsd -w Resume where you left off',
- ' 5. gsd worktree merge Squash-merge into main when done',
+ ' 4. sf -w Resume where you left off',
+ ' 5. sf worktree merge Squash-merge into main when done',
'',
'Examples:',
- ' gsd -w Start in a new auto-named worktree',
- ' gsd -w auth-refactor Create/resume "auth-refactor" worktree',
- ' gsd worktree list See all worktrees and their status',
- ' gsd worktree merge auth-refactor Merge and clean up',
- ' gsd worktree clean Remove all merged/empty worktrees',
- ' gsd worktree remove old-branch Remove a specific worktree',
- ' gsd worktree remove old-branch --force Remove even with unmerged changes',
+ ' sf -w Start in a new auto-named worktree',
+ ' sf -w auth-refactor Create/resume "auth-refactor" worktree',
+ ' sf worktree list See all worktrees and their status',
+ ' sf worktree merge auth-refactor Merge and clean up',
+ ' sf worktree clean Remove all merged/empty worktrees',
+ ' sf worktree remove old-branch Remove a specific worktree',
+ ' sf worktree remove old-branch --force Remove even with unmerged changes',
].join('\n'),

graph: [
- 'Usage: gsd graph <subcommand> [options]',
+ 'Usage: sf graph <subcommand> [options]',
'',
- 'Manage the SF project knowledge graph. Reads .gsd/ artifacts and builds',
+ 'Manage the SF project knowledge graph. Reads .sf/ artifacts and builds',
'a queryable graph of milestones, slices, tasks, rules, patterns, and lessons.',
'',
'Subcommands:',
- ' build Parse .gsd/ artifacts (STATE.md, milestone ROADMAPs, slice PLANs,',
- ' KNOWLEDGE.md) and write .gsd/graphs/graph.json atomically.',
+ ' build Parse .sf/ artifacts (STATE.md, milestone ROADMAPs, slice PLANs,',
+ ' KNOWLEDGE.md) and write .sf/graphs/graph.json atomically.',
' query Search graph nodes by term (BFS from seed matches, budget-trimmed).',
' Returns matching nodes and reachable edges within the token budget.',
' status Show whether graph.json exists, its age, node/edge counts, and',

@ -108,16 +108,16 @@ const SUBCOMMAND_HELP: Record<string, string> = {
' Returns added, removed, and changed nodes and edges.',
'',
'Examples:',
- ' gsd graph build Build the graph from .gsd/ artifacts',
- ' gsd graph status Check graph age and node/edge counts',
- ' gsd graph query auth Find nodes related to "auth"',
- ' gsd graph diff Show changes since last snapshot',
+ ' sf graph build Build the graph from .sf/ artifacts',
+ ' sf graph status Check graph age and node/edge counts',
+ ' sf graph query auth Find nodes related to "auth"',
+ ' sf graph diff Show changes since last snapshot',
].join('\n'),

headless: [
- 'Usage: gsd headless [flags] [command] [args...]',
+ 'Usage: sf headless [flags] [command] [args...]',
'',
- 'Run /gsd commands without the TUI. Default command: auto',
+ 'Run /sf commands without the TUI. Default command: auto',
'',
'Flags:',
' --timeout N Overall timeout in ms (default: 300000)',

@ -150,31 +150,31 @@ const SUBCOMMAND_HELP: Record<string, string> = {
' stream-json Stream JSONL events to stdout in real time (same as --json)',
'',
'Examples:',
- ' gsd headless Run /gsd auto',
- ' gsd headless next Run one unit',
- ' gsd headless --output-format json auto Structured JSON result on stdout',
- ' gsd headless --json status Machine-readable JSONL stream',
- ' gsd headless --timeout 60000 With 1-minute timeout',
- ' gsd headless --bare auto Minimal context (CI/ecosystem use)',
- ' gsd headless --resume abc123 auto Resume a prior session',
- ' gsd headless new-milestone --context spec.md Create milestone from file',
- ' cat spec.md | gsd headless new-milestone --context - From stdin',
- ' gsd headless new-milestone --context spec.md --auto Create + auto-execute',
- ' gsd headless --supervised auto Supervised orchestrator mode',
- ' gsd headless --answers answers.json auto With pre-supplied answers',
- ' gsd headless --events agent_end,extension_ui_request auto Filtered event stream',
- ' gsd headless query Instant JSON state snapshot',
+ ' sf headless Run /sf auto',
+ ' sf headless next Run one unit',
+ ' sf headless --output-format json auto Structured JSON result on stdout',
+ ' sf headless --json status Machine-readable JSONL stream',
+ ' sf headless --timeout 60000 With 1-minute timeout',
+ ' sf headless --bare auto Minimal context (CI/ecosystem use)',
+ ' sf headless --resume abc123 auto Resume a prior session',
+ ' sf headless new-milestone --context spec.md Create milestone from file',
+ ' cat spec.md | sf headless new-milestone --context - From stdin',
+ ' sf headless new-milestone --context spec.md --auto Create + auto-execute',
+ ' sf headless --supervised auto Supervised orchestrator mode',
+ ' sf headless --answers answers.json auto With pre-supplied answers',
+ ' sf headless --events agent_end,extension_ui_request auto Filtered event stream',
+ ' sf headless query Instant JSON state snapshot',
'',
'Exit codes: 0 = success, 1 = error/timeout, 10 = blocked, 11 = cancelled',
].join('\n'),
}

- // Alias: `gsd wt --help` → same as `gsd worktree --help`
+ // Alias: `sf wt --help` → same as `sf worktree --help`
SUBCOMMAND_HELP['wt'] = SUBCOMMAND_HELP['worktree']

export function printHelp(version: string): void {
process.stdout.write(`SF v${version} — Singularity Forge\n\n`)
- process.stdout.write('Usage: gsd [options] [message...]\n\n')
+ process.stdout.write('Usage: sf [options] [message...]\n\n')
process.stdout.write('Options:\n')
process.stdout.write(' --mode <text|json|rpc|mcp> Output mode (default: interactive)\n')
process.stdout.write(' --print, -p Single-shot print mode\n')

@ -196,9 +196,9 @@ export function printHelp(version: string): void {
process.stdout.write(' sessions List and resume a past session\n')
process.stdout.write(' worktree <cmd> Manage worktrees (list, merge, clean, remove)\n')
process.stdout.write(' auto [args] Run auto-mode without TUI (pipeable)\n')
- process.stdout.write(' headless [cmd] [args] Run /gsd commands without TUI (default: auto)\n')
+ process.stdout.write(' headless [cmd] [args] Run /sf commands without TUI (default: auto)\n')
process.stdout.write(' graph <subcommand> Manage knowledge graph (build, query, status, diff)\n')
- process.stdout.write('\nRun gsd <subcommand> --help for subcommand-specific help.\n')
+ process.stdout.write('\nRun sf <subcommand> --help for subcommand-specific help.\n')
}

export function printSubcommandHelp(subcommand: string, version: string): boolean {
@ -77,7 +77,7 @@ import { discoverExtensionEntryPaths } from './extension-discovery.js'
import { loadRegistry, readManifestFromEntryPath, isExtensionEnabled } from './extension-registry.js'
import { renderLogo } from './logo.js'

- // pkg/ is a shim directory: contains gsd's piConfig (package.json) and pi's
+ // pkg/ is a shim directory: contains sf's piConfig (package.json) and pi's
// theme assets (dist/modes/interactive/theme/) without a src/ directory.
// This allows config.js to:
// 1. Read piConfig.name → "sf" (branding)

@ -90,7 +90,7 @@ process.env.PI_PACKAGE_DIR = pkgDir
process.env.PI_SKIP_VERSION_CHECK = '1' // SF runs its own update check in cli.ts — suppress pi's
process.title = 'sf'

- // Print branded banner on first launch (before ~/.gsd/ exists).
+ // Print branded banner on first launch (before ~/.sf/ exists).
// Set SF_FIRST_RUN_BANNER so cli.ts skips the duplicate welcome screen.
if (!existsSync(appRoot)) {
const cyan = '\x1b[36m'
@ -107,22 +107,22 @@
process.env.SF_FIRST_RUN_BANNER = '1'
}

- // SF_CODING_AGENT_DIR — tells pi's getAgentDir() to return ~/.gsd/agent/ instead of ~/.gsd/agent/
+ // SF_CODING_AGENT_DIR — tells pi's getAgentDir() to return ~/.sf/agent/ instead of ~/.pi/agent/
process.env.SF_CODING_AGENT_DIR = agentDir

// SF_PKG_ROOT — absolute path to sf-run package root. Used by deployed extensions
// (e.g. auto.ts resume path) to import modules like resource-loader.js that live
- // in the package tree, not in the deployed ~/.gsd/agent/ tree.
+ // in the package tree, not in the deployed ~/.sf/agent/ tree.
process.env.SF_PKG_ROOT = gsdRoot

- // RTK environment — make ~/.gsd/agent/bin visible to all child-process paths,
+ // RTK environment — make ~/.sf/agent/bin visible to all child-process paths,
// not just the bash tool, and force-disable RTK telemetry for SF-managed use.
applyRtkProcessEnv(process.env)

- // NODE_PATH — make gsd's own node_modules available to extensions loaded via jiti.
+ // NODE_PATH — make sf's own node_modules available to extensions loaded via jiti.
// Without this, extensions (e.g. browser-tools) can't resolve dependencies like
- // `playwright` because jiti resolves modules from pi-coding-agent's location, not gsd's.
- // Prepending gsd's node_modules to NODE_PATH fixes this for all extensions.
+ // `playwright` because jiti resolves modules from pi-coding-agent's location, not sf's.
+ // Prepending sf's node_modules to NODE_PATH fixes this for all extensions.
const gsdNodeModules = join(gsdRoot, 'node_modules')
process.env.NODE_PATH = [gsdNodeModules, process.env.NODE_PATH]
.filter(Boolean)
@ -137,12 +137,12 @@ const { Module } = await import('module');
process.env.SF_VERSION = gsdVersion

// SF_BIN_PATH — absolute path to this loader (dist/loader.js), used by patched subagent
- // to spawn gsd instead of pi when dispatching workflow tasks.
- // Respect a pre-set value so a source-mode wrapper (e.g. bin/gsd-from-source) can
+ // to spawn sf instead of pi when dispatching workflow tasks.
+ // Respect a pre-set value so a source-mode wrapper (e.g. bin/sf-from-source) can
// advertise the executable shim instead of the .ts loader path (which spawn() can't exec).
process.env.SF_BIN_PATH = process.env.SF_BIN_PATH || process.argv[1]

- // SF_WORKFLOW_PATH — absolute path to bundled SF-WORKFLOW.md, used by patched gsd extension
+ // SF_WORKFLOW_PATH — absolute path to bundled SF-WORKFLOW.md, used by patched sf extension
// when dispatching workflow prompts. Prefers dist/resources/ (stable, set at build time)
// over src/resources/ (live working tree) — see resource-loader.ts for rationale.
const distRes = join(gsdRoot, 'dist', 'resources')

@ -152,7 +152,7 @@ process.env.SF_WORKFLOW_PATH = join(resourcesDir, 'SF-WORKFLOW.md')

// SF_BUNDLED_EXTENSION_PATHS — dynamically discovered bundled extension entry points.
// Uses the shared discoverExtensionEntryPaths() to scan the bundled resources
- // directory, then remaps discovered paths to agentDir (~/.gsd/agent/extensions/)
+ // directory, then remaps discovered paths to agentDir (~/.sf/agent/extensions/)
// where initResources() will sync them.
const bundledExtDir = join(resourcesDir, 'extensions')
const agentExtDir = join(agentDir, 'extensions')
@ -65,7 +65,7 @@ export async function startMcpServer(options: {
}

const server = new Server(
- { name: 'gsd', version },
+ { name: 'sf', version },
{ capabilities: { tools: {} } },
)
@ -1,10 +1,10 @@
/**
* Models.json resolution with fallback to ~/.pi/agent/models.json
*
- * SF uses ~/.gsd/agent/models.json, but for a smooth migration/development
+ * SF uses ~/.sf/agent/models.json, but for a smooth migration/development
* experience, this module provides resolution logic that:
*
- * 1. Reads ~/.gsd/agent/models.json if it exists
+ * 1. Reads ~/.sf/agent/models.json if it exists
* 2. Falls back to ~/.pi/agent/models.json if SF file doesn't exist
* 3. Merges both files if both exist (SF takes precedence)
*/

@ -21,7 +21,7 @@ const PI_MODELS_PATH = join(homedir(), '.pi', 'agent', 'models.json')
* Resolve the path to models.json with fallback logic.
*
* Priority:
- * 1. ~/.gsd/agent/models.json (exists) → return this path
+ * 1. ~/.sf/agent/models.json (exists) → return this path
* 2. ~/.pi/agent/models.json (exists) → return this path (fallback)
* 3. Neither exists → return SF path (will be created)
*
@ -278,7 +278,7 @@ export async function runOnboarding(authStorage: AuthStorage): Promise<void> {
if (remoteConfigured) {
summaryLines.push(`${pc.green('✓')} Remote questions: ${remoteConfigured}`)
} else {
- summaryLines.push(`${pc.dim('↷')} Remote questions: not configured — use /gsd remote inside SF`)
+ summaryLines.push(`${pc.dim('↷')} Remote questions: not configured — use /sf remote inside SF`)
}

if (toolKeyCount > 0) {

@ -795,7 +795,7 @@ async function runRemoteQuestionsStep(
{ value: 'discord', label: 'Discord', hint: 'receive questions in a Discord channel' },
{ value: 'slack', label: 'Slack', hint: 'receive questions in a Slack channel' },
{ value: 'telegram', label: 'Telegram', hint: 'receive questions via Telegram bot' },
- { value: 'skip', label: 'Skip for now', hint: 'use /gsd remote inside SF later' },
+ { value: 'skip', label: 'Skip for now', hint: 'use /sf remote inside SF later' },
)

const choice = await p.select({

@ -968,12 +968,12 @@ async function runDiscordChannelStep(p: ClackModule, pc: PicoModule, token: stri
const data = await res.json()
guilds = Array.isArray(data) ? data : []
} catch {
- p.log.warn('Could not fetch Discord servers — configure channel later with /gsd remote discord')
+ p.log.warn('Could not fetch Discord servers — configure channel later with /sf remote discord')
return null
}

if (guilds.length === 0) {
- p.log.warn('Bot is not in any Discord servers — configure channel later with /gsd remote discord')
+ p.log.warn('Bot is not in any Discord servers — configure channel later with /sf remote discord')
return null
}

@ -1001,12 +1001,12 @@ async function runDiscordChannelStep(p: ClackModule, pc: PicoModule, token: stri
const data = await res.json()
channels = Array.isArray(data) ? data.filter((ch: any) => ch.type === 0 || ch.type === 5) : []
} catch {
- p.log.warn('Could not fetch channels — configure later with /gsd remote discord')
+ p.log.warn('Could not fetch channels — configure later with /sf remote discord')
return null
}

if (channels.length === 0) {
- p.log.warn('No text channels found — configure later with /gsd remote discord')
+ p.log.warn('No text channels found — configure later with /sf remote discord')
return null
}
@ -1,5 +1,5 @@
import { DefaultResourceLoader, sortExtensionPaths } from '@sf-run/pi-coding-agent'
- if (process.env.SF_DEBUG_EXTENSIONS) process.stderr.write("[gsd-debug] resource-loader.ts loaded\n")
+ if (process.env.SF_DEBUG_EXTENSIONS) process.stderr.write("[sf-debug] resource-loader.ts loaded\n")
import { createHash } from 'node:crypto'
import { homedir } from 'node:os'
import { chmodSync, copyFileSync, cpSync, existsSync, lstatSync, mkdirSync, openSync, closeSync, readFileSync, readlinkSync, readdirSync, rmSync, statSync, symlinkSync, unlinkSync, writeFileSync } from 'node:fs'

@ -12,9 +12,9 @@ import { loadRegistry, readManifestFromEntryPath, isExtensionEnabled, ensureRegi
// Resolve resources directory — prefer dist/resources/ (stable, set at build time)
// over src/resources/ (live working tree, changes with git branch).
//
- // Why this matters: with `npm link`, src/resources/ points into the gsd-2 repo's
+ // Why this matters: with `npm link`, src/resources/ points into the sf-2 repo's
// working tree. Switching branches there changes src/resources/ for ALL projects
- // that use gsd — causing stale/broken extensions to be synced to ~/.gsd/agent/.
+ // that use sf — causing stale/broken extensions to be synced to ~/.sf/agent/.
// dist/resources/ is populated by the build step (`npm run copy-resources`) and
// reflects the built state, not the currently checked-out branch.
const packageRoot = resolve(dirname(fileURLToPath(import.meta.url)), '..')

@ -285,7 +285,7 @@ function copyDirRecursive(src: string, dest: string): void {
*
* Native ESM `import()` ignores NODE_PATH — it resolves packages by walking
* up the directory tree from the importing file. Extension files synced to
- * ~/.gsd/agent/extensions/ have no ancestor node_modules, so imports of
+ * ~/.sf/agent/extensions/ have no ancestor node_modules, so imports of
* @sf-run/* packages fail. The symlink makes Node's standard resolution find
* them without requiring every call site to use jiti.
*

@ -368,7 +368,7 @@ function reconcileMergedNodeModules(
): void {
// Fast path: if already merged for this packageRoot + same directory contents, skip.
// The fingerprint includes entry names from both roots so `pnpm add/remove` triggers rebuild.
- const marker = join(agentNodeModules, '.gsd-merged')
+ const marker = join(agentNodeModules, '.sf-merged')
const fingerprint = mergedFingerprint(hoisted, internal)
try {
if (existsSync(marker) && readFileSync(marker, 'utf-8').trim() === fingerprint) return

@ -440,7 +440,7 @@ function mergedFingerprint(hoisted: string, internal: string): string {
* 1. Manifest-based (preferred): the manifest records which root files were installed
* last time; any that are no longer in the current bundle are deleted.
* 2. Known-stale fallback: for upgrades from versions before manifest tracking,
- * explicitly delete files known to have been moved (e.g. env-utils.js → gsd/).
+ * explicitly delete files known to have been moved (e.g. env-utils.js → sf/).
*/
function pruneRemovedBundledExtensions(
manifest: ManagedResourceManifest | null,

@ -501,16 +501,16 @@ function pruneRemovedBundledExtensions(
// Always remove known stale files regardless of manifest state.
// These were installed by pre-manifest versions so they may not appear in
// installedExtensionRootFiles even when a manifest exists.
- // env-utils.js was moved from extensions/ root → gsd/ in v2.39.x (#1634)
+ // env-utils.js was moved from extensions/ root → sf/ in v2.39.x (#1634)
removeFileIfStale('env-utils.js')
}

/**
- * Syncs all bundled resources to agentDir (~/.gsd/agent/) on every launch.
+ * Syncs all bundled resources to agentDir (~/.sf/agent/) on every launch.
*
- * - extensions/ → ~/.gsd/agent/extensions/ (overwrite when version changes)
- * - agents/ → ~/.gsd/agent/agents/ (overwrite when version changes)
- * - SF-WORKFLOW.md → ~/.gsd/agent/SF-WORKFLOW.md (fallback for env var miss)
+ * - extensions/ → ~/.sf/agent/extensions/ (overwrite when version changes)
+ * - agents/ → ~/.sf/agent/agents/ (overwrite when version changes)
+ * - SF-WORKFLOW.md → ~/.sf/agent/SF-WORKFLOW.md (fallback for env var miss)
*
* Skills are NOT synced here. They are installed by the user via the
* skills.sh CLI (`npx skills add <repo>`) into ~/.agents/skills/ — the

@ -518,10 +518,10 @@ function pruneRemovedBundledExtensions(
*
* Skips the copy when the managed-resources.json version matches the current
* SF version, avoiding ~128ms of synchronous cpSync on every startup.
- * After `npm update -g @glittercowboy/gsd`, versions will differ and the
+ * After `npm update -g @glittercowboy/sf`, versions will differ and the
* copy runs once to land the new resources.
*
- * Inspectable: `ls ~/.gsd/agent/extensions/`
+ * Inspectable: `ls ~/.sf/agent/extensions/`
*/
export function initResources(agentDir: string): void {
mkdirSync(agentDir, { recursive: true })

@ -537,7 +537,7 @@ export function initResources(agentDir: string): void {
pruneRemovedBundledExtensions(manifest, agentDir)
pruneStaleSiblingFiles(bundledExtensionsDir, extensionsDir)

- // Ensure ~/.gsd/agent/node_modules symlinks to SF's node_modules on EVERY
+ // Ensure ~/.sf/agent/node_modules symlinks to SF's node_modules on EVERY
// launch, not just during resource syncs. A stale/broken symlink makes ALL
// extensions fail to resolve @sf-run/* packages, rendering SF non-functional.
ensureNodeModulesSymlink(agentDir)

@ -566,7 +566,7 @@ export function initResources(agentDir: string): void {
// skills.sh CLI (`npx skills add <repo>`) into ~/.agents/skills/ which
// is the industry-standard Agent Skills ecosystem directory.
//
- // Migration from the legacy ~/.gsd/agent/skills/ directory is handled
+ // Migration from the legacy ~/.sf/agent/skills/ directory is handled
// above the manifest check so it runs on every launch (including retries
// after partial copy failures).

@ -589,7 +589,7 @@ export function initResources(agentDir: string): void {

/**
* One-time migration: copy user-customized skills from the old
- * ~/.gsd/agent/skills/ directory into ~/.agents/skills/.
+ * ~/.sf/agent/skills/ directory into ~/.agents/skills/.
*
* The migration is conservative:
* - Only skill directories containing a SKILL.md are considered.

@ -653,7 +653,7 @@ function migrateSkillsToEcosystemDir(agentDir: string): void {
if (isSymlink) {
// Recreate the symlink in the ecosystem directory using an absolute
// target. Relative symlinks would resolve from the new parent dir
- // (~/.agents/skills/) instead of the original (~/.gsd/agent/skills/),
+ // (~/.agents/skills/) instead of the original (~/.sf/agent/skills/),
// pointing to the wrong location.
const rawTarget = readlinkSync(sourcePath)
const absTarget = resolve(dirname(sourcePath), rawTarget)

@ -716,7 +716,7 @@ export function hasStaleCompiledExtensionSiblings(extensionsDir: string, sourceD

/**
* Constructs a DefaultResourceLoader that loads extensions from both
- * ~/.gsd/agent/extensions/ (SF's default) and ~/.pi/agent/extensions/ (pi's default).
+ * ~/.sf/agent/extensions/ (SF's default) and ~/.pi/agent/extensions/ (pi's default).
* This allows users to use extensions from either location.
*/
// Cache bundled extension keys at module load — avoids re-scanning the extensions
@ -26,7 +26,7 @@
*
* ## Setup
*
- * Add to ~/.gsd/agent/settings.json (or project-level .gsd/settings.json):
+ * Add to ~/.sf/agent/settings.json (or project-level .sf/settings.json):
*
* { "awsAuthRefresh": "aws sso login --profile my-profile" }
*

@ -55,10 +55,10 @@ const AWS_AUTH_ERROR_RE =

/**
* Reads the `awsAuthRefresh` command from settings.json.
- * Checks project-level first, then global (~/.gsd/agent/settings.json).
+ * Checks project-level first, then global (~/.sf/agent/settings.json).
*/
function getAwsAuthRefreshCommand(): string | undefined {
- const configDir = process.env.PI_CONFIG_DIR || ".gsd";
+ const configDir = process.env.PI_CONFIG_DIR || ".sf";
const paths = [
join(process.cwd(), configDir, "settings.json"),
join(homedir(), configDir, "agent", "settings.json"),
@ -44,7 +44,7 @@ export function formatTimeAgo(timestamp: number): string {

function deriveProjectRootFromAutoWorktree(cachedCwd?: string): string | undefined {
if (!cachedCwd) return undefined;
- const match = cachedCwd.match(/^(.*?)[\\/]\.gsd[\\/]worktrees[\\/][^\\/]+(?:[\\/].*)?$/);
+ const match = cachedCwd.match(/^(.*?)[\\/]\.sf[\\/]worktrees[\\/][^\\/]+(?:[\\/].*)?$/);
return match?.[1];
}

@ -83,7 +83,7 @@ export function resolveBgShellPersistenceCwd(
pathExists: (path: string) => boolean = existsSync,
): string {
const resolvedLiveCwd = liveCwd ?? getBgShellLiveCwd(cachedCwd, pathExists);
- const cachedIsAutoWorktree = /(?:^|[\\/])\.gsd[\\/]worktrees[\\/]/.test(cachedCwd);
+ const cachedIsAutoWorktree = /(?:^|[\\/])\.sf[\\/]worktrees[\\/]/.test(cachedCwd);
if (!cachedIsAutoWorktree) return cachedCwd;
if (cachedCwd === resolvedLiveCwd && pathExists(cachedCwd)) return cachedCwd;
if (!pathExists(cachedCwd)) return resolvedLiveCwd;
Some files were not shown because too many files have changed in this diff