---
description: Create comprehensive feature plan with deep codebase analysis and research
---

# Plan a new task

## Mission

Transform a feature request into a **comprehensive implementation plan** through systematic codebase analysis, external research, and strategic planning.

**Core Principle**: We do NOT write code in this phase. Our goal is to create a context-rich implementation plan that enables one-pass implementation success for AI agents.

**Key Philosophy**: Context is King. The plan must contain ALL information needed for implementation - patterns, mandatory reading, documentation, validation commands - so the execution agent succeeds on the first attempt.

## Planning Process

### Phase 1: Feature Understanding

**Deep Feature Analysis:**

- Extract the core problem being solved
- Identify user value and business impact
- Determine feature type: New Capability/Enhancement/Refactor/Bug Fix
- Assess complexity: Low/Medium/High
- Map affected systems and components

**Create a user story in the following format (or refine the story if one was provided by the user):**

```
As a
I want to
So that
```

### Phase 2: Codebase Intelligence Gathering

**Use specialized agents and parallel analysis:**

**1. Project Structure Analysis**

- Detect primary language(s), frameworks, and runtime versions
- Map directory structure and architectural patterns
- Identify service/component boundaries and integration points
- Locate configuration files (pyproject.toml, package.json, etc.)
- Find environment setup and build processes

**2. Pattern Recognition** (use specialized subagents when beneficial)

- Search for similar implementations in the codebase
- Identify coding conventions:
  - Naming patterns (CamelCase, snake_case, kebab-case)
  - File organization and module structure
  - Error handling approaches
  - Logging patterns and standards
- Extract common patterns for the feature's domain
- Document anti-patterns to avoid
- Check CLAUDE.md for project-specific rules and conventions

**3. Dependency Analysis**

- Catalog external libraries relevant to the feature
- Understand how libraries are integrated (check imports, configs)
- Find relevant documentation in docs/, ai_docs/, .agents/reference, or the ai-wiki if available
- Note library versions and compatibility requirements
- ALWAYS use the browser_subagent tool and consult https://developer.unipile.com/docs/ if the task is related to LinkedIn

**4. Testing Patterns**

- Identify the test framework and structure (pytest, jest, etc.)
- Find similar test examples for reference
- Understand test organization (unit vs. integration)
- Note coverage requirements and testing standards

**5. Integration Points**

- Identify existing files that need updates
- Determine new files that need creation, and their locations
- Map router/API registration patterns
- Understand database/model patterns if applicable
- Identify authentication/authorization patterns if relevant

**Clarify Ambiguities:**

- If requirements are unclear at this point, ask the user to clarify before you continue
- Get specific implementation preferences (libraries, approaches, patterns)
- Resolve architectural decisions before proceeding

### Phase 3: External Research & Documentation

**Use specialized subagents when beneficial for external research:**

**Documentation Gathering:**

- Research the latest library versions and best practices
- Find official documentation with specific section anchors
- Locate implementation examples and tutorials
- Identify common gotchas and known issues
- Check for breaking changes and migration guides

**Technology Trends:**

- Research current best practices for the technology stack
- Find relevant blog posts, guides, or case studies
- Identify performance optimization patterns
- Document security considerations

**Compile Research References:**

```markdown
## Relevant Documentation

- [Library Official Docs](https://example.com/docs#section)
  - Specific feature implementation guide
  - Why: Needed for X functionality
- [Framework Guide](https://example.com/guide#integration)
  - Integration patterns section
  - Why: Shows how to connect components
```

### Phase 4: Deep Strategic Thinking

**Think Harder About:**

- How does this feature fit into the existing architecture?
- What are the critical dependencies and order of operations?
- What could go wrong? (Edge cases, race conditions, errors)
- How will this be tested comprehensively?
- What performance implications exist?
- Are there security considerations?
- How maintainable is this approach?

**Design Decisions:**

- Choose between alternative approaches with clear rationale
- Design for extensibility and future modifications
- Plan for backward compatibility if needed
- Consider scalability implications

### Phase 5: Plan Structure Generation

**Create a comprehensive plan with the following structure.**

Below is a template for you to fill in for the implementation agent:

```markdown
# Feature:

The following plan should be complete, but it is important that you validate documentation, codebase patterns, and task sanity before you start implementing. Pay special attention to the naming of existing utils, types, and models, and import from the correct files.

## Feature Description

## User Story

As a
I want to
So that

## Problem Statement

## Solution Statement

## Feature Metadata

**Feature Type**: [New Capability/Enhancement/Refactor/Bug Fix]
**Estimated Complexity**: [Low/Medium/High]
**Primary Systems Affected**: [List of main components/services]
**Dependencies**: [External libraries or services required]

---

## CONTEXT REFERENCES

### Relevant Codebase Files

IMPORTANT: YOU MUST READ THESE FILES BEFORE IMPLEMENTING!
- `path/to/file.py` (lines 15-45) - Why: Contains the pattern for X that we'll mirror
- `path/to/model.py` (lines 100-120) - Why: Database model structure to follow
- `path/to/test.py` - Why: Test pattern example

### New Files to Create

- `path/to/new_service.py` - Service implementation for X functionality
- `path/to/new_model.py` - Data model for Y resource
- `tests/path/to/test_new_service.py` - Unit tests for the new service

### Relevant Documentation

YOU SHOULD READ THESE BEFORE IMPLEMENTING!

- [Documentation Link 1](https://example.com/doc1#section)
  - Specific section: Authentication setup
  - Why: Required for implementing secure endpoints
- [Documentation Link 2](https://example.com/doc2#integration)
  - Specific section: Database integration
  - Why: Shows proper async database patterns

### Patterns to Follow

**Naming Conventions:** (for example)

**Error Handling:** (for example)

**Logging Pattern:** (for example)

**Other Relevant Patterns:** (for example)

---

## IMPLEMENTATION PLAN

### Phase 1: Foundation

**Tasks:**

- Set up base structures (schemas, types, interfaces)
- Configure necessary dependencies
- Create foundational utilities or helpers

### Phase 2: Core Implementation

**Tasks:**

- Implement core business logic
- Create service layer components
- Add API endpoints or interfaces
- Implement data models

### Phase 3: Integration

**Tasks:**

- Connect to existing routers/handlers
- Register new components
- Update configuration files
- Add middleware or interceptors if needed

### Phase 4: Testing & Validation

**Tasks:**

- Implement unit tests for each component
- Create integration tests for the feature workflow
- Add edge case tests
- Validate against acceptance criteria

---

## STEP-BY-STEP TASKS

IMPORTANT: Execute every task in order, top to bottom. Each task is atomic and independently testable.
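An illustrative filled-in task, following the format defined below (the paths, pattern reference, and validation command here are hypothetical examples, not real files in any codebase):

### UPDATE src/routers/api.py

- **IMPLEMENT**: Register the new feature router with the main API router
- **PATTERN**: src/routers/health.py:12
- **IMPORTS**: from src.services.new_service import router as new_service_router
- **GOTCHA**: Register the router after authentication middleware is attached
- **VALIDATE**: `pytest tests/ -k "router" -q`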
### Task Format Guidelines

Use information-dense keywords for clarity:

- **CREATE**: New files or components
- **UPDATE**: Modify existing files
- **ADD**: Insert new functionality into existing code
- **REMOVE**: Delete deprecated code
- **REFACTOR**: Restructure without changing behavior
- **MIRROR**: Copy a pattern from elsewhere in the codebase

### {ACTION} {target_file}

- **IMPLEMENT**: {Specific implementation detail}
- **PATTERN**: {Reference to existing pattern - file:line}
- **IMPORTS**: {Required imports and dependencies}
- **GOTCHA**: {Known issues or constraints to avoid}
- **VALIDATE**: `{executable validation command}`

---

## TESTING STRATEGY

### Unit Tests

Design unit tests with fixtures and assertions following existing testing approaches.

### Integration Tests

### Edge Cases

---

## VALIDATION COMMANDS

Execute every command to ensure zero regressions and 100% feature correctness.

### Level 1: Syntax & Style

### Level 2: Unit Tests

### Level 3: Integration Tests

### Level 4: Manual Validation

### Level 5: Additional Validation (Optional)

---

## ACCEPTANCE CRITERIA

- [ ] Feature implements all specified functionality
- [ ] All validation commands pass with zero errors
- [ ] Unit test coverage meets requirements (80%+)
- [ ] Integration tests verify end-to-end workflows
- [ ] Code follows project conventions and patterns
- [ ] No regressions in existing functionality
- [ ] Documentation is updated (if applicable)
- [ ] Performance meets requirements (if applicable)
- [ ] Security considerations addressed (if applicable)

---

## COMPLETION CHECKLIST

- [ ] All tasks completed in order
- [ ] Each task validation passed immediately
- [ ] All validation commands executed successfully
- [ ] Full test suite passes (unit + integration)
- [ ] No linting or type checking errors
- [ ] Manual testing confirms the feature works
- [ ] Acceptance criteria all met
- [ ] Code reviewed for quality and maintainability

---

## NOTES
```

## Output Format

**Filename**:
`.agents/plans/{kebab-case-descriptive-name}.md`

- Replace `{kebab-case-descriptive-name}` with a short, descriptive feature name
- Examples: `add-user-authentication.md`, `implement-search-api.md`, `refactor-database-layer.md`

**Directory**: Create `.agents/plans/` if it doesn't exist

## Quality Criteria

### Context Completeness ✓

- [ ] All necessary patterns identified and documented
- [ ] External library usage documented with links
- [ ] Integration points clearly mapped
- [ ] Gotchas and anti-patterns captured
- [ ] Every task has an executable validation command

### Implementation Ready ✓

- [ ] Another developer could execute without additional context
- [ ] Tasks ordered by dependency (can execute top-to-bottom)
- [ ] Each task is atomic and independently testable
- [ ] Pattern references include specific file:line numbers

### Pattern Consistency ✓

- [ ] Tasks follow existing codebase conventions
- [ ] New patterns justified with clear rationale
- [ ] No reinvention of existing patterns or utils
- [ ] Testing approach matches project standards

### Informat
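The kebab-case filename rule above can be sketched as a small helper. This is a minimal illustration only; the name `plan_filename` is an assumption for the example, not part of this command:

```python
import re

def plan_filename(feature_name: str) -> str:
    """Derive a kebab-case plan filename from a free-form feature name.

    Hypothetical helper, shown only to illustrate the naming rule.
    """
    slug = feature_name.lower()
    # Collapse every run of non-alphanumeric characters into a single hyphen,
    # then trim any leading/trailing hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", slug).strip("-")
    return f".agents/plans/{slug}.md"

print(plan_filename("Add User Authentication"))  # .agents/plans/add-user-authentication.md
print(plan_filename("Implement Search API"))     # .agents/plans/implement-search-api.md
```

Any slugging scheme works as long as the result is short, descriptive, and matches the examples above.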