export const SCAFFOLD_FILES = [
	{
		path: ".siftignore",
		content: `.git/**
.sf/**
.bg-shell/**
.pytest_cache/**
.venv/**
venv/**
node_modules/**
**/node_modules/**
**/__pycache__/**
*.pyc
*.egg-info/**
build/**
dist/**
target/**
vendor/**
coverage/**
.cache/**
tmp/**
*.log
`,
	},
	{
		path: "AGENTS.md",
		content: `# Agent Map

Keep this file short. Use it as a table of contents for agents and humans.

- Treat the repo as a purpose-to-software pipeline: intent -> purpose/consumer/contract/evidence -> tests -> implementation -> verification.
- Read \`ARCHITECTURE.md\` first for the system map and invariants.
- Read \`docs/PLANS.md\` and \`docs/exec-plans/active/\` for current work.
- Read \`docs/QUALITY_SCORE.md\`, \`docs/RELIABILITY.md\`, and \`docs/SECURITY.md\` before changing production behavior.
- Put durable product decisions in \`docs/product-specs/\`.
- Put durable design and architecture decisions in \`docs/design-docs/\`.
- Put generated reference material in \`docs/generated/\`.
- Use \`docs/RECORDS_KEEPER.md\` as the repo-order checklist after meaningful changes.
- Use the \`records-keeper\` skill when repo docs, plans, or architecture records need triage.
- Follow deeper \`AGENTS.md\` files when present. The closest one to the changed file wins.

Before implementation, inspect the relevant docs and source files, state observed facts before inferred facts, name the real consumer, and define the command or eval that proves the change.
`,
	},
	{
		path: "src/AGENTS.md",
		content: `# Source Agent Notes

- Start by mapping the owning module and its tests.
- Preserve existing public contracts unless the active plan explicitly changes them.
- Prefer typed/domain helpers over ad hoc parsing or duplicated logic.
- Keep edits scoped to the smallest module boundary that satisfies the plan.
- Update \`ARCHITECTURE.md\` when a source change creates a new subsystem or invariant.
`,
	},
	{
		path: "tests/AGENTS.md",
		content: `# Test Agent Notes

- Treat tests as executable specs, not coverage decoration.
- Add regression tests for changed behavior and failure modes.
- Prefer focused tests that name the behavior under test.
- Include the exact verification command in the plan or completion summary.
`,
	},
	{
		path: "ARCHITECTURE.md",
		content: `# Architecture

This file is the short map of the codebase. Keep it current and compact.

## Purpose

Describe the product, its users, and the job this repository exists to do.

## Codemap

- \`src/\`: primary implementation.
- \`tests/\`: behavior and regression coverage.
- \`docs/\`: durable product, design, plan, reliability, and security context.

## Invariants

- Prefer small, named modules with clear ownership.
- Behavior changes need tests or an explicit eval.
- Keep generated artifacts out of hand-written design docs.
- Update this map when new top-level concepts or directories become important.
`,
	},
	{
		path: "docs/design-docs/index.md",
		content: `# Design Docs

Durable design decisions live here. Link active proposals, completed decisions, and rejected alternatives.
`,
	},
	{
		path: "docs/AGENTS.md",
		content: `# Docs Agent Notes

- Docs are the durable project memory. Keep them concise, navigable, and current.
- Treat \`docs/adr/0000-purpose-to-software-compiler.md\` as the root SF product contract.
- Put stable decisions here; keep transient execution state in active plans.
- Prefer links to source paths, commands, and eval artifacts over broad prose.
- When docs and code disagree, inspect the code and update the stale document.
- Run the records keeper checklist in \`RECORDS_KEEPER.md\` after meaningful code, product, or architecture changes.
`,
	},
	{
		path: "docs/records/AGENTS.md",
		content: `# Records Agent Notes

- Keep repository memory ordered, current, and easy to inspect.
- Prefer moving durable facts to the narrowest canonical document over duplicating them.
- Preserve historical decisions; mark superseded records instead of deleting useful context.
- Escalate conflicts between docs and source by citing the exact files that disagree.
`,
	},
	{
		path: "docs/records/index.md",
		content: `# Records

This folder holds repo-memory audits, decision ledgers, context-gardening notes, and records-keeper outputs.
`,
	},
	{
		path: "docs/RECORDS_KEEPER.md",
		content: `# Records Keeper

The records keeper maintains ordered repo memory after meaningful changes. Run this checklist at milestone close, after architecture changes, after product behavior changes, and whenever docs and source disagree.

Use the \`records-keeper\` skill for this workflow when SF skills are available. Use \`context-doctor\` instead when stale state lives under \`.sf/\` or the memory store.

## Canonical Homes

- Root \`AGENTS.md\`: short routing map for agents.
- \`ARCHITECTURE.md\`: short system map, boundaries, invariants, critical flows, and verification.
- \`docs/product-specs/\`: durable user-facing behavior and product decisions.
- \`docs/design-docs/\`: durable design and architecture decisions.
- \`docs/exec-plans/\`: active/completed work plans and technical debt.
- \`docs/generated/\`: generated references only.
- \`docs/records/\`: audits, ledgers, and context-gardening outputs.

## Checklist

- Root map is current: \`AGENTS.md\` points to the right canonical docs and local \`AGENTS.md\` files.
- Architecture is current: new subsystems, boundaries, invariants, data/state, or critical flows are reflected in \`ARCHITECTURE.md\`.
- Product specs are current: user-visible behavior changes are reflected in \`docs/product-specs/\`.
- Execution plans are filed: active work is in \`docs/exec-plans/active/\`; completed summaries and evidence are in \`docs/exec-plans/completed/\`.
- Debt is visible: discovered cleanup is listed in \`docs/exec-plans/tech-debt-tracker.md\`.
- Generated docs are marked: generated material stays under \`docs/generated/\` or clearly says how to regenerate it.
- Contradictions are resolved: stale docs are updated or marked superseded with links to the source of truth.
- Verification is recorded: changed checks, evals, and commands are listed in the relevant plan or quality document.

## Output

When records work is non-trivial, write a dated note under \`docs/records/\` with:

- What changed.
- What canonical docs were updated.
- What contradictions were found.
- What remains unresolved.
`,
	},
	{
		path: "docs/design-docs/AGENTS.md",
		content: `# Design Doc Agent Notes

- Capture problem, context, options, decision, consequences, and validation.
- Separate observed facts from inferred product or architecture intent.
- Record rejected alternatives when they would prevent repeated debate.
`,
	},
	{
		path: "docs/design-docs/core-beliefs.md",
		content: `# Core Beliefs

- The repo should explain itself to humans and agents.
- Plans should carry acceptance criteria, falsifiers, and verification commands.
- Architecture should be mechanically checkable where possible.
`,
	},
	{
		path: "docs/exec-plans/active/index.md",
		content: `# Active Execution Plans

Link active plans here. Each plan should state purpose, scope, tasks, acceptance criteria, and verification.
`,
	},
	{
		path: "docs/exec-plans/AGENTS.md",
		content: `# Execution Plan Agent Notes

- Every plan needs purpose, scope, tasks, acceptance criteria, falsifier, and verification.
- Active plans live in \`active/\`; completed evidence summaries live in \`completed/\`.
- Add discovered cleanup to \`tech-debt-tracker.md\` instead of hiding it in chat.
`,
	},
	{
		path: "docs/exec-plans/completed/index.md",
		content: `# Completed Execution Plans

Move finished plan summaries here with evidence links and follow-up debt.
`,
	},
	{
		path: "docs/exec-plans/tech-debt-tracker.md",
		content: `# Tech Debt Tracker

Track cleanup discovered during implementation. Include owner, impact, proposed fix, and verification.
`,
	},
	{
		path: "docs/generated/db-schema.md",
		content: `# Database Schema

Generated or refreshed schema notes belong here. Do not hand-maintain stale schema copies.
`,
	},
	{
		path: "docs/product-specs/index.md",
		content: `# Product Specs

Durable user-facing behavior, workflows, and product decisions live here.
`,
	},
	{
		path: "docs/product-specs/AGENTS.md",
		content: `# Product Spec Agent Notes

- Describe the user, job-to-be-done, workflow, edge cases, and non-goals.
- Keep implementation details out unless they are product-visible constraints.
- Update specs when behavior changes, especially onboarding, permissions, billing, or destructive actions.
`,
	},
	{
		path: "docs/product-specs/new-user-onboarding.md",
		content: `# New User Onboarding

Describe the first-run experience, success criteria, and failure states if this product has an onboarding flow.
`,
	},
	{
		path: "docs/references/design-system-reference-llms.txt",
		content: `Reference slot for design-system guidance intended for LLM consumption.
`,
	},
	{
		path: "docs/references/nixpacks-llms.txt",
		content: `Reference slot for Nixpacks deployment/build guidance intended for LLM consumption.
`,
	},
	{
		path: "docs/references/uv-llms.txt",
		content: `Reference slot for uv/Python tooling guidance intended for LLM consumption.
`,
	},
	{
		path: "docs/DESIGN.md",
		content: `# Design

Record interaction patterns, visual constraints, and design-system usage here.
`,
	},
	{
		path: "docs/FRONTEND.md",
		content: `# Frontend

Record frontend architecture, component ownership, accessibility constraints, and browser support here.
`,
	},
	{
		path: "docs/PLANS.md",
		content: `# Plans

Use this as the index for current and upcoming work. Link detailed plans in \`docs/exec-plans/\`.
`,
	},
	{
		path: "docs/PRODUCT_SENSE.md",
		content: `# Product Sense

Capture user goals, non-goals, tradeoffs, and examples of good product judgment for this repo.
`,
	},
	{
		path: "docs/QUALITY_SCORE.md",
		content: `# Quality Score

Define what good looks like for this repo. Include fast checks, slow checks, evals, and known blind spots.

Use these principles:

- Make code legible to agents with semantic names and explicit boundaries.
- Prefer small, testable modules over files that require broad context to edit.
- Enforce style, architecture, and reliability rules mechanically where possible.
- Keep a cleanup loop for stale docs, generated artifacts, and accumulated implementation debt.
`,
	},
	{
		path: "docs/RELIABILITY.md",
		content: `# Reliability

Document expected failure modes, recovery paths, observability, and release checks here.
`,
	},
	{
		path: "docs/SECURITY.md",
		content: `# Security

Document trust boundaries, secrets handling, dependency risk, and security review requirements here.
`,
	},
	{
		path: "docs/design-docs/ADR-TEMPLATE.md",
		content: `# ADR-NNN: Title

**Status:** Proposed | Accepted | Rejected | Superseded by ADR-NNN
**Date:** YYYY-MM-DD

## Context

What is the problem or situation that requires a decision? Include constraints and the forces at play.

## Decision

What is the change being made or the approach being adopted?

## Consequences

What becomes easier or harder after this decision? Include positive and negative outcomes.

## Alternatives Considered

What other options were evaluated and why were they not chosen?
`,
	},
	{
		path: ".sf/harness/AGENTS.md",
		content: `# Harness Agent Notes

The harness is SF-local operational scaffolding the agent can read and verify against.

- \`specs/\`: behavior contracts. Each spec states what "done" looks like and the command that proves it.
- \`evals/\`: task definitions for behaviors tests cannot cover — model output quality, multi-turn flows, agent decisions.
- \`graders/\`: reusable grader scripts (code-based checks, LLM-judge prompts used by evals).

**Rule:** Before marking a task done, run the relevant spec's verification command. Record the result in the completion summary or execution plan.
`,
	},
	{
		path: ".sf/harness/specs/AGENTS.md",
		content: `# Harness Specs Agent Notes

Each spec file in this directory:

- States the behavior being specified (not the implementation).
- Includes the exact command that proves the spec passes.
- Is referenced by the relevant execution plan or ADR.

Write the spec before implementation. Run it after. Record the result.
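
A minimal skeleton for a spec file, as a sketch only (headings illustrative, not a required SF format):

\`\`\`markdown
# Spec: <short behavior name>

## Behavior

What the system observably does when this spec passes, stated as outcomes, not implementation.

## Verification command

The exact command to run, plus what passing output looks like.
\`\`\`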
`,
	},
	{
		path: ".sf/harness/specs/bootstrap.md",
		content: `# Bootstrap Spec: Agent Legibility

Verifies that this repo is minimally agent-legible.

## Criteria

- [ ] \`AGENTS.md\` exists at repo root and is non-empty.
- [ ] \`ARCHITECTURE.md\` exists at repo root and is non-empty.
- [ ] \`docs/exec-plans/active/index.md\` exists and is non-empty.
- [ ] \`docs/exec-plans/tech-debt-tracker.md\` exists and is non-empty.
- [ ] \`docs/design-docs/ADR-TEMPLATE.md\` exists and is non-empty.
- [ ] \`.sf/harness/specs/bootstrap.md\` exists and is non-empty (this spec checks itself).

## Verification command

\`\`\`bash
for f in \\
  AGENTS.md \\
  ARCHITECTURE.md \\
  docs/exec-plans/active/index.md \\
  docs/exec-plans/tech-debt-tracker.md \\
  docs/design-docs/ADR-TEMPLATE.md \\
  .sf/harness/specs/bootstrap.md
do
  [ -s "$f" ] && echo "OK: $f" || echo "MISSING: $f"
done
\`\`\`

All lines should start with \`OK:\` for the bootstrap spec to pass.
`,
	},
	{
		path: ".sf/harness/evals/AGENTS.md",
		content: `# Harness Evals Agent Notes

Evals verify behavior that unit tests cannot cover — model output quality, agent decisions, multi-turn flows.

Each eval should include:
- The input fixture or prompt.
- The expected output or scoring rubric.
- The command to run it (\`promptfoo eval\`, custom script, etc.).

Keep evals deterministic where possible. Log results to \`docs/records/\` at milestone close.
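
The components above can be sketched as one possible eval definition in a promptfoo-style config; the prompt, provider, and expected value are placeholders, not part of SF:

\`\`\`yaml
# Hypothetical eval definition (promptfoo config shape; values illustrative).
prompts:
  - "Summarize the following changelog entry: {{entry}}"
providers:
  - openai:gpt-4o-mini
tests:
  - vars:
      entry: "Added retry logic to the sync worker."
    assert:
      - type: contains
        value: "retry"
\`\`\`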
`,
	},
	{
		path: ".sf/harness/graders/AGENTS.md",
		content: `# Harness Graders Agent Notes

Graders are reusable scripts or prompts that score eval outputs.

- Code-based graders: shell scripts or test files that check structured outputs deterministically.
- LLM-judge graders: prompt templates that ask a model to score free-text output against a rubric.

Prefer code-based graders. Add LLM-judge graders only when deterministic checking is impossible.
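
As a sketch, a code-based grader can be a small deterministic shell function; the function name and marker string below are illustrative, not an SF API:

\`\`\`sh
# Hypothetical code-based grader: passes only if the eval output file
# is non-empty and contains an expected marker string.
grade_output() {
  out="$1"
  [ -s "$out" ] || { echo "FAIL: empty output"; return 1; }
  grep -q "retry" "$out" || { echo "FAIL: expected marker missing"; return 1; }
  echo "PASS"
}
\`\`\`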
`,
	},
	{
		path: ".sf/PRINCIPLES.md",
		content: `# Principles

Durable design philosophy. Things this codebase believes are true.

Add entries as you make decisions. Each entry: 1-2 sentences. Cite the rationale (the why, not just the what).

## Examples

- (replace with your own)
`,
	},
	{
		path: ".sf/TASTE.md",
		content: `# Taste

What good code looks like here. Idioms, conventions, "we prefer X over Y" calls.

Add entries as you notice patterns worth preserving. Each entry: 1-2 sentences with a concrete example.

## Examples

- (replace with your own)
`,
	},
	{
		path: ".sf/ANTI-GOALS.md",
		content: `# Anti-goals

What we explicitly DON'T want. Things that look attractive but we've decided against.

This is gold — most wrong agent calls come from not knowing what to avoid. Each entry: 1-2 sentences with the rationale.

## Examples

- (replace with your own)
`,
	},
];
