Spider.md - Context as Infrastructure

Turn AI Conversations into Permanent Knowledge

The SP(IDE)R methodology treats conversations with AI assistants as valuable artifacts worth preserving. Spider.md captures your ideation sessions, decision trees, and problem-solving patterns in structured markdown so nothing gets lost in chat scroll.

Document how you structure information for AI consumption, how conversations evolve through iteration, and which interaction patterns produce the best results. This meta-layer of context makes every future AI conversation more effective.

Your best ideas often emerge in conversation with AI. Stop letting them evaporate. Start capturing AI interaction patterns as versioned knowledge that compounds over time.

SP(IDE)R Methodology Best Practices

Practical techniques for capturing, structuring, and reusing the valuable context that emerges from your conversations with AI assistants.

Capture the Seed Prompt

Document the initial prompt that started a productive AI conversation. Great conversations start with great seeds. Build a library of effective opening prompts organized by task type so your team can skip the trial-and-error phase.

Map Decision Branches

When a conversation explores multiple approaches, document each branch with its trade-offs. These decision trees become reusable context - next time a similar choice arises, the analysis is already done.

Document Pivots and Why

Record when and why you changed direction mid-conversation. Pivots contain learning - they reveal which approaches fail and which constraints matter most. Failed paths are context that prevents future missteps.

Extract Reusable Insights

After a productive conversation, distill the key insights into standalone notes. Separate the durable knowledge from the conversational noise. These distilled insights become building blocks for future context files.

Chain Conversations Intentionally

Reference previous conversation artifacts in new sessions. Build cumulative understanding by explicitly linking related conversations. AI assistants perform better when they can see the evolution of your thinking.

Tag by Problem Domain

Organize captured conversations by problem type - architecture, debugging, refactoring, design. Well-tagged conversation archives become a searchable knowledge base that your team mines for patterns and precedents.

Analyze Conversation Patterns

Review your captured conversations periodically. Which prompt structures produce the best results? Which conversation flows lead to breakthroughs? This meta-analysis steadily improves how you work with AI.

Share Conversation Artifacts

Make captured conversations available to the team. One developer's breakthrough conversation becomes everyone's reference material. Shared conversation artifacts multiply the return on every AI interaction.

Conversations Are Context That Compounds

Every productive AI conversation generates context that is valuable beyond the immediate task. The decision analysis, the explored alternatives, the refined understanding - these are intellectual assets that disappear when you close the chat window. SP(IDE)R treats conversations as first-class artifacts. Capture them, tag them, and reference them. The most valuable context in your repository might be the thinking process, not just the conclusion.

The Spider Template

# Spider.md - SP(IDE)R Methodology
<!-- Specification, Pseudocode, Implementation, Debugging, Evaluation, Review -->
<!-- A structured approach to AI-assisted development with full traceability -->
<!-- Last updated: YYYY-MM-DD -->

## What is SP(IDE)R?

**SP(IDE)R** stands for **S**pecification, **P**seudocode, **I**mplementation, **D**ebugging, **E**valuation, **R**eview - a six-phase methodology for turning requirements into production code with AI assistance. Each phase produces artifacts that feed into the next, creating a complete audit trail from "what do we need?" to "is it working correctly?"

### Why SP(IDE)R?

Most developers use AI assistants by jumping straight to "write me the code." This works for small tasks but fails for complex features because:

1. The AI lacks context about constraints, trade-offs, and existing patterns
2. Generated code often misses edge cases that become production bugs
3. There is no record of why the code was written this way
4. Debugging AI-generated code is harder when you did not understand the approach

SP(IDE)R fixes this by making each phase explicit. You and the AI think together through the problem before writing a single line of code.

### The Six Phases

```
S - Specification    Define WHAT we are building and WHY
P - Pseudocode       Design HOW it works at a high level
I - Implementation   Write the actual code
D - Debugging        Find and fix issues systematically
E - Evaluation       Verify it meets the specification
R - Review           Assess quality, performance, and maintainability
```

## Phase 1: Specification

### Example: Adding Team Billing to SaaS Platform

```markdown
# SPEC: Team Billing Feature
Date: YYYY-MM-DD
Author: Sarah Chen
Status: Approved

## Problem Statement
Currently, each user has an individual subscription. Customers with 5+ users
are asking for team billing so one person can manage payment for the group.
We are losing deals to competitors who offer this. Sales reports 12 lost
deals in Q3 totaling $86K ARR due to missing team billing.

## Requirements

### Functional
1. A user can create a "team" and invite members by email
2. The team owner manages billing for all members
3. Members do not see billing details - only the owner and billing admins
4. Team pricing: $15/seat/month (vs $20/month individual) - minimum 5 seats
5. Adding a member mid-cycle prorates the charge
6. Removing a member does not issue a refund but reduces next invoice
7. Team owner can designate up to 2 "billing admins" who can manage payment methods

### Non-Functional
- Billing changes must be reflected within 60 seconds (near-real-time)
- All billing events must be logged for audit (SOC 2 requirement)
- Must integrate with existing Stripe subscription infrastructure
- Must not break individual user billing (backward compatible)

### Out of Scope (for this iteration)
- Team hierarchy (nested teams)
- Usage-based billing per team
- Team-level feature flags (all members get same plan)

## Constraints
- Stripe API - we use Stripe Subscriptions with metered billing
- Database: PostgreSQL - need migration for new tables
- Auth: Must respect existing role-based access control (RBAC)
- Timeline: Must ship by end of Sprint 48 (4 weeks)

## Success Criteria
- 10 teams created within first month of launch
- 0 billing errors in first 30 days
- No regression in individual billing flows
```

### Specification Template

```markdown
# SPEC: [Feature Name]
Date: YYYY-MM-DD
Author: [Name]
Status: [Draft | In Review | Approved | Superseded]

## Problem Statement
[What problem does this solve? Who is affected? What is the business impact?]

## Requirements

### Functional
1. [Requirement with measurable acceptance criteria]
2. [Another requirement]

### Non-Functional
- [Performance, security, scalability, compliance requirements]

### Out of Scope
- [Explicitly list what this feature does NOT include]

## Constraints
[Technical, business, timeline, and resource constraints]

## Success Criteria
[How do we know this feature is successful? Measurable outcomes.]

## Open Questions
- [ ] [Question that needs answering before implementation]
- [ ] [Another open question]
```

## Phase 2: Pseudocode

### Example: Team Billing Pseudocode

```markdown
# PSEUDOCODE: Team Billing Feature
Spec: SPEC-team-billing
Date: YYYY-MM-DD

## Data Model Changes

New tables:
  teams
    - id (UUID, PK)
    - name (varchar)
    - owner_id (FK -> users.id)
    - stripe_subscription_id (varchar, nullable)
    - seat_count (integer, default 5)
    - created_at, updated_at

  team_members
    - id (UUID, PK)
    - team_id (FK -> teams.id)
    - user_id (FK -> users.id)
    - role (enum: owner, billing_admin, member)
    - joined_at
    - UNIQUE(team_id, user_id)

  team_billing_events
    - id (UUID, PK)
    - team_id (FK -> teams.id)
    - event_type (enum: team_created, member_added, member_removed, plan_changed, payment_failed)
    - actor_id (FK -> users.id)
    - metadata (jsonb)
    - created_at

## Core Flows

### Create Team
1. Validate: user does not already own a team
2. Create team record with owner
3. Add owner as team_member with role=owner
4. Create Stripe subscription with quantity=5 (minimum seats)
5. Migrate owner's individual subscription to team (cancel individual, activate team)
6. Log billing event: team_created

### Add Member
1. Validate: actor is owner or billing_admin
2. Validate: invited email is not already on this team
3. Validate: team has available seats (or auto-increase seat count)
4. Send invitation email
5. On acceptance:
   a. Create team_member record
   b. Update Stripe subscription quantity
   c. Prorate charge for remaining billing period
   d. Cancel member's individual subscription (if they had one)
   e. Log billing event: member_added

### Remove Member
1. Validate: actor is owner or billing_admin
2. Validate: cannot remove the owner (must transfer ownership first)
3. Soft-delete team_member record (set removed_at)
4. Decrease Stripe subscription quantity (effective next billing cycle)
5. User reverts to free tier (no automatic individual subscription)
6. Log billing event: member_removed

## Edge Cases
- User invited to team but already has annual individual plan
  -> Prorate refund on individual plan, then add to team
- Team owner's payment method fails
  -> 3-day grace period, then downgrade all members to free tier
- Last billing admin removed
  -> Billing management reverts to the team owner automatically
- Team reduced below 5 seats
  -> Keep billing at 5-seat minimum, show warning
```
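
A quick sketch of how two of the rules above might look once they reach code: the 5-seat billing minimum and the mid-cycle proration for an added seat. The function names and the day-based proration math are illustrative assumptions, not part of the spec; per decision D-002 in the decision log below, the real flow delegates proration to Stripe Subscriptions.

```typescript
// Illustrative helpers for the pseudocode rules above (a sketch, not the shipped code).
const MINIMUM_SEATS = 5;
const SEAT_PRICE_CENTS = 1500; // $15/seat/month from the spec

// Teams are always billed for at least 5 seats, even if membership drops below that.
export function billableSeats(memberCount: number): number {
  return Math.max(MINIMUM_SEATS, memberCount);
}

// Prorated charge in cents for a seat added partway through the billing period.
// In production Stripe handles proration (decision D-002); this helper only makes
// the pricing rule explicit for tests and evaluation.
export function proratedSeatCharge(daysRemaining: number, daysInPeriod: number): number {
  return Math.round(SEAT_PRICE_CENTS * (daysRemaining / daysInPeriod));
}
```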

### Pseudocode Template

```markdown
# PSEUDOCODE: [Feature Name]
Spec: [Link to specification]
Date: YYYY-MM-DD

## Data Model Changes
[New tables, columns, indexes, or schema modifications]

## Core Flows
[Step-by-step logic for each major operation]

## Edge Cases
[List edge cases and how each is handled]

## Integration Points
[External APIs, services, or systems this feature touches]

## Error Handling
[How each type of failure is handled]
```

## Phase 3: Implementation

### Implementation Session Log

```markdown
# IMPLEMENTATION: Team Billing
Pseudocode: PSEUDO-team-billing
Date: YYYY-MM-DD
Developer: Sarah Chen
AI Assistant: Claude Code

## Session 1: Database Schema (45 min)

### Prompt to AI
"Create the Prisma schema additions for team billing based on this
pseudocode: [pasted pseudocode data model section]

Follow our existing patterns in schema.prisma. Use UUID for IDs,
include created_at/updated_at on all tables."

### AI Output
[Generated Prisma schema - reviewed and approved with 2 modifications]

### Modifications Made
1. Added cascade delete on team_members when team is deleted
2. Changed seat_count default from 5 to null (set explicitly on creation)

### Files Created/Modified
- prisma/schema.prisma (modified - added Team, TeamMember, TeamBillingEvent)
- prisma/migrations/20240115_add_team_billing/ (generated)

## Session 2: Service Layer (90 min)
[Follow same pattern: prompt, AI output, modifications, files changed]

## Session 3: [Next implementation session]
[Continue pattern]
```
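
To make Session 2 concrete, here is a minimal sketch of what the team-creation service could look like, assuming a Prisma + Stripe stack with model names taken from the pseudocode; the client wiring, price ID, and field casing are assumptions, not the team's actual code. It also illustrates decision D-003 from the decision log: the Stripe call runs inside a Prisma interactive transaction so the team rows roll back if it fails.

```typescript
import { PrismaClient } from "@prisma/client";
import Stripe from "stripe";

const prisma = new PrismaClient();
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const TEAM_PRICE_ID = process.env.STRIPE_TEAM_PRICE_ID!; // assumed $15/seat price
const MINIMUM_SEATS = 5;

// Sketch of the "Create Team" flow from the pseudocode. The step-1 validation and the
// step-5 individual-subscription migration are omitted for brevity.
export async function createTeam(ownerId: string, name: string, stripeCustomerId: string) {
  return prisma.$transaction(async (tx) => {
    // Steps 2-3: create the team and add the owner as a member with role=owner.
    const team = await tx.team.create({ data: { name, ownerId, seatCount: MINIMUM_SEATS } });
    await tx.teamMember.create({ data: { teamId: team.id, userId: ownerId, role: "owner" } });

    // Step 4: create the Stripe subscription at the 5-seat minimum.
    // If this throws, the transaction aborts and the rows above are rolled back (D-003).
    const subscription = await stripe.subscriptions.create({
      customer: stripeCustomerId,
      items: [{ price: TEAM_PRICE_ID, quantity: MINIMUM_SEATS }],
    });

    // Step 6: log the billing event for the SOC 2 audit trail.
    await tx.teamBillingEvent.create({
      data: { teamId: team.id, eventType: "team_created", actorId: ownerId, metadata: {} },
    });

    return tx.team.update({
      where: { id: team.id },
      data: { stripeSubscriptionId: subscription.id },
    });
  });
}
```

The trade-off recorded in D-003 still applies: if Stripe succeeds but the database commit fails, the subscription has to be reconciled out of band; the team accepted that rather than adopting a saga pattern.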

## Phase 4: Debugging

### Debugging Session Template

```markdown
# DEBUG: [Issue Title]
Date: YYYY-MM-DD
Severity: [P0-Critical | P1-High | P2-Medium | P3-Low]
Feature: [Related feature/spec]

## Symptoms
[What is happening? Include exact error messages, screenshots, logs]

## Reproduction Steps
1. [Step-by-step to reproduce]
2. [Be specific about data, state, and timing]

## Investigation

### Hypothesis 1: [Description]
Evidence for: [What supports this theory]
Evidence against: [What contradicts it]
Test: [How to confirm or reject]
Result: [Confirmed/Rejected]

### Hypothesis 2: [Description]
Evidence for: [What supports this theory]
Test: [How to confirm or reject]
Result: [Confirmed - this was the root cause]

## Root Cause
[Detailed explanation of what went wrong and why]

## Fix
[Description of the fix, with file paths and code changes]

## Verification
[How the fix was verified - tests added, manual testing steps]

## Prevention
[What can we do to prevent similar bugs? Linting rules, tests, patterns]
```
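
The Verification and Prevention sections usually end in a regression test that pins the fixed behavior. A minimal sketch, assuming a Vitest-style runner and the hypothetical `billableSeats` helper sketched in the Pseudocode section above:

```typescript
import { test, expect } from "vitest";
import { billableSeats } from "./teamBilling"; // assumed location of the helper

// Regression test for a hypothetical bug: teams that dropped below 5 members
// were billed per actual member instead of at the 5-seat minimum.
test("teams below 5 members are still billed for 5 seats", () => {
  expect(billableSeats(3)).toBe(5);
  expect(billableSeats(5)).toBe(5);
  expect(billableSeats(7)).toBe(7);
});
```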

## Phase 5: Evaluation

### Evaluation Checklist

```markdown
# EVALUATION: [Feature Name]
Spec: [Link to specification]
Date: YYYY-MM-DD
Evaluator: [Name]

## Specification Compliance
- [ ] Requirement 1: [Met/Not Met/Partially Met] - [Notes]
- [ ] Requirement 2: [Met/Not Met/Partially Met] - [Notes]
- [ ] Requirement 3: [Met/Not Met/Partially Met] - [Notes]

## Test Coverage
- Unit tests: [X]% coverage on new code
- Integration tests: [List of integration test scenarios]
- E2E tests: [List of end-to-end test scenarios]
- Edge cases tested: [List from pseudocode edge cases]

## Performance
- [Metric 1]: [Measured value] vs [Target value]
- [Metric 2]: [Measured value] vs [Target value]

## Security Review
- [ ] Input validation on all user-facing endpoints
- [ ] Authorization checks on all team operations
- [ ] No PII in logs
- [ ] Stripe webhook signature verification

## Accessibility
- [ ] Keyboard navigation works for all new UI elements
- [ ] Screen reader compatibility verified
- [ ] Color contrast meets WCAG AA standards

## Deployment Readiness
- [ ] Database migration tested on staging data copy
- [ ] Feature flag in place for gradual rollout
- [ ] Rollback plan documented
- [ ] Monitoring and alerts configured
- [ ] Runbook updated with new operational procedures
```
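
The "Stripe webhook signature verification" item in the security review is easy to get wrong, so here is a short sketch of the check, assuming the official `stripe` Node SDK and an Express-style handler; the route path and environment variable names are illustrative.

```typescript
import Stripe from "stripe";
import express from "express";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const app = express();

// Stripe signs the raw request body, so JSON parsing must be disabled on this route.
app.post("/webhooks/stripe", express.raw({ type: "application/json" }), (req, res) => {
  const signature = req.headers["stripe-signature"] as string;
  try {
    const event = stripe.webhooks.constructEvent(
      req.body,
      signature,
      process.env.STRIPE_WEBHOOK_SECRET!,
    );
    if (event.type === "invoice.payment_failed") {
      // Start the 3-day grace-period flow described in the pseudocode edge cases.
    }
    res.sendStatus(200);
  } catch {
    // The signature did not match, so the request did not come from Stripe.
    res.status(400).send("Invalid signature");
  }
});
```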

## Phase 6: Review

### Review Session Template

```markdown
# REVIEW: [Feature Name]
Date: YYYY-MM-DD
Reviewers: [Names]
Outcome: [Approved | Approved with Changes | Needs Rework]

## Code Quality Assessment
- Readability: [1-5] - [Notes]
- Maintainability: [1-5] - [Notes]
- Test quality: [1-5] - [Notes]
- Error handling: [1-5] - [Notes]

## Architecture Fit
- Does this follow existing patterns? [Yes/No - details]
- Are there any new patterns introduced? [If yes, are they justified?]
- Technical debt introduced: [None/Acceptable/Needs cleanup ticket]

## Knowledge Transfer
- Documentation updated: [Yes/No]
- Team walkthrough completed: [Yes/No]
- On-call runbook updated: [Yes/No]

## Action Items
- [ ] [Action item 1] - Owner: [Name] - Due: [Date]
- [ ] [Action item 2] - Owner: [Name] - Due: [Date]

## Lessons Learned
- [What went well in this implementation?]
- [What would we do differently next time?]
- [What should we add to our coding standards or CLAUDE.md?]
```

## Decision Log

### Tracking Decisions Across Phases

| ID | Phase | Date | Decision | Rationale | Alternatives Considered |
|----|-------|------|----------|-----------|------------------------|
| D-001 | Spec | YYYY-MM-DD | Minimum 5 seats for team billing | Aligns with target customer segment (5+ users), simplifies pricing | 3-seat minimum, no minimum |
| D-002 | Pseudo | YYYY-MM-DD | Use Stripe Subscriptions (not Invoices) | Handles proration automatically, matches existing billing code | Manual invoice generation |
| D-003 | Impl | YYYY-MM-DD | Prisma $transaction for team creation | Atomic operation - if Stripe fails, team record is rolled back | Saga pattern (over-engineered) |
| D-004 | Debug | YYYY-MM-DD | Grace period is 3 days, not 7 | Reduces revenue leakage, matches industry standard | 7 days, immediate downgrade |
| [ID] | [Phase] | [Date] | [What was decided] | [Why] | [What else was considered] |

## Best Practices

1. **Do not skip phases** - Even for small features, write at least a brief spec and pseudocode. The 15 minutes you invest saves hours of rework.
2. **Use AI for each phase** - AI is not just for implementation. Use it to review specs for gaps, challenge your pseudocode for edge cases, and generate evaluation checklists.
3. **Keep sessions small** - Each implementation session should be 60-120 minutes. If it is taking longer, break the feature into smaller pieces.
4. **Record deviations** - When the implementation differs from the pseudocode, document why. This is the most valuable knowledge for future developers.
5. **Review with the full chain** - During code review, the reviewer should read the spec and pseudocode first, not just the code. Context makes reviews faster and better.
6. **Archive completed sessions** - Move finished sessions to a `completed/` directory, as in the layout sketched below. They become searchable documentation for similar future features.
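
One possible repository layout for these artifacts, purely as an illustration; SP(IDE)R does not prescribe a directory structure, and the file names simply extend the team-billing example:

```
docs/spider/
  specs/SPEC-team-billing.md
  pseudocode/PSEUDO-team-billing.md
  implementation/IMPL-team-billing.md
  debugging/DEBUG-team-billing-proration.md
  evaluation/EVAL-team-billing.md
  reviews/REVIEW-team-billing.md
  decisions.md        <- cross-phase decision log
  completed/          <- archived sessions for finished features
```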

Why Markdown Matters for AI-Native Development

Conversation as Context

SP(IDE)R methodology treats conversations with AI as permanent artifacts. Spider.md captures ideation sessions, decision trees, and context evolution in structured markdown. Your development conversation history becomes queryable knowledge. Nothing gets lost in chat scroll.

Iterative Context Building

Great solutions emerge through iteration. Spider.md documents the evolution of your thinking - false starts, pivots, and breakthroughs. AI assistants learn from your problem-solving patterns. The journey to the solution becomes as valuable as the solution itself.

Meta-Context Engineering

Spider.md is context about building context. It documents how you structure information for AI consumption, how you evolve conversations, and how you extract value from AI interactions. This meta-layer makes your entire development process more AI-native.

"The SP(IDE)R methodology recognizes a fundamental truth: the conversations we have with AI assistants contain valuable context worth preserving. Spider.md turns ephemeral chat history into permanent, versioned knowledge that compounds over time."

About Spider.md

Our Mission

Built by researchers exploring how AI-assisted development conversations become permanent knowledge.

We are investigating a paradigm shift: what if conversations with AI assistants aren't ephemeral chat logs but permanent knowledge artifacts? Spider.md explores treating development dialogue as versioned context - capturing ideation sessions, decision trees, and problem-solving patterns in markdown. These conversations contain insights worth preserving, insights that compound value over time.

Our research goal is to demonstrate that AI conversation history is too valuable to lose in chat scroll. When your development discussions with AI are captured as structured markdown, they become searchable knowledge, reusable patterns, and training data for better AI collaboration. This meta-layer of context makes every future AI interaction more valuable.

Why Markdown Matters

AI-Native

LLMs parse markdown well: it uses fewer tokens than HTML or JSON, has cleaner structure, and yields better results.

Version Control

Context evolves with code. Git tracks changes, PRs enable review, history preserves decisions.

Human Readable

No special tools needed. Plain text that works everywhere. Documentation humans actually read.

Exploring SP(IDE)R methodology? Have insights on AI conversation patterns? We're actively researching - join us.