Claude Code Insights

1,179 messages across 91 sessions (218 total) | 2026-02-25 to 2026-03-31

At a Glance
What's working: You've built a genuinely powerful workflow around parallel-agent PR reviews and structured artifact-driven development — the 25-task project rename session and multi-agent reviews catching critical bugs across 500+ files are standout examples. You're also effective at driving large monorepo changes (linting migrations, feature implementations, doc restructuring) all the way through to committed MRs in single sessions. Impressive Things You Did →
What's hindering you: On Claude's side, it frequently picks the wrong approach on the first attempt — especially struggling to locate UI elements in your codebase and overstepping the scope of what you asked for (simplifying becomes refactoring, targeted fixes become feature removals). On your side, several sessions stalled because Claude lacked enough upfront context to find the right files or understand your constraints, particularly for UI bug reports where describing the visual issue wasn't enough to point Claude to the right component. Where Things Go Wrong →
Quick wins to try: Turn your parallel PR review and code simplification workflows into custom skills (/commands) so you stop re-explaining the process and avoid the interrupted restarts that plagued several sessions. Also try hooks to auto-run your Biome lint checks before commits — this would eliminate the pre-commit hook failures that blocked you multiple times. Features to Try →
Ambitious workflows: Your Figma-to-MR designer-dev skill prototype is the one to invest in: as models get better at visual understanding and multi-step autonomous execution, that workflow could go from prototype to a fully autonomous pipeline that reads Figma, generates matching TypeScript components, and opens the MR. Similarly, your TDD bug-fix approach (70/71 tasks completed in one session) should become your default — expect to hand off entire bug tickets where Claude reproduces via failing tests, iterates to green, and commits without intervention. On the Horizon →
1,179
Messages
+17,674/-1,566
Lines
345
Files
28
Days
42.1
Msgs/Day

What You Work On

Skyline Monorepo Development ~22 sessions
Full-stack TypeScript monorepo work including contract management features (filtering, review, PDF preview), organization management UI fixes (hiding IDs, empty name defaults), linting migration from ESLint to Biome, and general maintenance. Claude Code was used heavily for multi-file edits, code reviews, test fixes, and commit/push workflows, with notable friction around lint configurations and UI element location.
Slack Agent / Bot Development ~10 sessions
Building and iterating on a Slack bot agent using Claude SDK, including designing agent skills (designer-dev Figma-to-MR workflow), implementing TDD with observability, and debugging integration issues like duplicate events, thread replies, and stale processes. Claude Code was used for collaborative architecture design, iterative bug fixing, and committing large changesets.
PR Review & Code Quality ~12 sessions
Running comprehensive PR reviews using parallel review agents across large MRs (500+ files), posting findings to GitLab, and performing code simplification passes. Claude Code was used for automated multi-agent review workflows, identifying critical bugs, and iterating on OpenSpec PR feedback including config extensions and schema changes.
Git Operations & DevOps Workflows ~15 sessions
Frequent git operations including branch management, commit/push workflows, MR creation on GitLab, and project directory renaming (soundrise→tequila). Claude Code handled branch renaming, commit message crafting, and navigating pre-commit hook failures, though friction arose from hook lint errors and disallowed shell substitutions.
Documentation & Translation ~8 sessions
Translating PDFs from English to Traditional Chinese with technical term preservation, generating development summaries for meetings, restructuring documentation, and exploring feasibility of a PDF translation tool. Claude Code was used for document generation via WeasyPrint, structured content creation, and quick knowledge queries about tooling and configuration.
What You Wanted
Git Operations
18
Code Review
14
Feature Implementation
13
Bug Fix
12
Quick Question
9
Commit And Push
8
Top Tools Used
Bash
1315
Read
757
Edit
556
Write
252
Grep
185
Mcp Jetbrains Open File In Editor
161
Languages
Markdown
624
TypeScript
481
Rust
145
JSON
108
YAML
53
JavaScript
35
Session Types
Iterative Refinement
21
Multi Task
20
Single Task
15
Quick Question
9
Exploration
7

How You Use Claude Code

You are a prolific, hands-on power user who treats Claude Code as a core part of your daily development workflow — 91 sessions across just 5 weeks, with 285 hours logged, show you're essentially pair-programming with Claude full-time. Your work spans a broad monorepo involving TypeScript, Rust, and Markdown-heavy documentation, and you leverage Claude for everything from git operations and PR reviews to feature implementation and bug fixes. You frequently use parallel review agents for large MRs (519 changed files in one session), build structured artifact workflows for complex tasks like project-wide renames, and drive multi-file refactors across your stack. Your top tool usage — Bash (1,315), Read (757), Edit (556) — confirms you let Claude do heavy exploration and execution rather than just answering questions.

Your interaction style is iterative and corrective rather than detailed upfront specification. You tend to give a high-level goal and then steer Claude through multiple rounds of feedback when it goes off track. This shows up clearly in friction patterns: you had to manually refine workflow steps through v2, v3, v4 iterations for the spectra integration; you pushed back when the /simplify skill went beyond scope by removing features; and you corrected Claude's JSDoc style and Biome lint approaches multiple times. The 37 instances of "wrong_approach" friction — by far your biggest pain point — reflect this pattern of launching Claude at a task and course-correcting. You also interrupt frequently when things aren't working (multiple interrupted PR reviews, git pulls, and simplify commands), sometimes switching models mid-session when performance disappoints.

Despite the friction, you're clearly getting strong value — 67% of outcomes were fully or mostly achieved, and you rated the vast majority of sessions as satisfying. You're particularly effective when using Claude for structured, multi-step workflows like the 25-task directory rename with verification and MR creation, or the TDD-driven slack-agent session with 70/71 tasks completed. Your weaker sessions tend to be UI debugging where Claude can't visually confirm issues (the recurring Skyline org management page saga across 6+ sessions is a notable example of persistent friction). You also have a number of throwaway sessions — quick model switches, plugin installs, memory checks — suggesting you treat Claude Code as an always-open tool you dip into casually throughout the day.

Key pattern: You launch tasks with minimal upfront specification and steer through rapid iteration and frequent interruption, treating Claude as a persistent co-pilot you course-correct in real time.
User Response Time Distribution
2-10s
58
10-30s
161
30s-1m
183
1-2m
161
2-5m
165
5-15m
92
>15m
45
Median: 68.3s • Average: 215.2s
Multi-Clauding (Parallel Sessions)
4
Overlap Events
8
Sessions Involved
1%
Of Messages

You run multiple Claude Code sessions simultaneously. Multi-clauding is detected when sessions overlap in time, suggesting parallel workflows.

User Messages by Time of Day
Morning (6-12)
398
Afternoon (12-18)
460
Evening (18-24)
83
Night (0-6)
238
Tool Errors Encountered
Command Failed
70
Other
52
User Rejected
32
File Not Found
9
File Too Large
4
Edit Failed
2

Impressive Things You Did

Over 91 sessions and 285 hours in the past month, you've built an impressive workflow combining code review, monorepo management, and AI-assisted development across TypeScript and Rust projects.

Parallel Agent PR Reviews
You've built a sophisticated code review workflow that deploys multiple parallel review agents across hundreds of changed files, then consolidates findings into actionable reports posted directly to GitLab MRs. This has helped you catch critical bugs and maintain quality across a large codebase.
Structured Artifact-Driven Development
You use a disciplined task-based approach where complex changes like project renaming and bug investigation are broken into structured artifacts with numbered tasks. Your session renaming soundrise→tequila showcased this perfectly — 25 tasks implemented, verified, committed, and MR-created in one flow.
Full-Stack Monorepo Orchestration
You're effectively using Claude Code to manage sweeping changes across your monorepo — from migrating ESLint to Biome, to implementing contract management features, to restructuring docs and translations. You consistently drive multi-file changes to completion with commits and MRs; multi-file editing was the key success factor in 21 sessions.
What Helped Most (Claude's Capabilities)
Multi-file Changes
21
Good Explanations
17
Proactive Help
9
Good Debugging
6
Correct Code Edits
6
Fast/Accurate Search
3
Outcomes
Not Achieved
9
Partially Achieved
9
Mostly Achieved
20
Fully Achieved
31
Unclear
3

Where Things Go Wrong

Your sessions show a pattern of Claude taking wrong approaches that require repeated corrections, struggling to locate UI elements in your codebase, and overstepping the scope of requested changes.

Wrong Approach Requiring Multiple Corrections
Claude frequently picks the wrong strategy on the first attempt, forcing you into multiple rounds of correction. You could reduce this by providing more upfront context or constraints in your initial prompt, especially for tasks involving specific tooling or conventions.
  • Claude searched for 'remote-control' as a skill file instead of recognizing it as a CLI built-in feature, requiring you to provide documentation before it understood
  • The vitest @/ alias approach failed due to a Vite global alias limitation Claude didn't anticipate, and gitlab-push-mr failed from disallowed $() substitution — triggering strong frustration
Difficulty Locating UI Elements in Codebase
Across multiple sessions you reported visible UI issues (like exposed ID columns) that Claude repeatedly failed to find in the code, leading to unproductive back-and-forth. Consider pointing Claude to specific files or providing screenshots early to short-circuit the search process.
  • You reported an unwanted ID column in the org management page but Claude searched wrong tables (org list, members) before finally finding it in the users page
  • Claude searched the code and claimed the ID wasn't displayed at all, directly contradicting what you were seeing on screen, and had to ask you for clarification
Overstepping Scope of Requested Changes
Claude tends to expand beyond what you ask for — fixing 'dangerous behaviors,' removing features, or writing excessive code when you wanted targeted changes. You can mitigate this by explicitly stating boundaries like 'only simplify, do not remove functionality or fix unrelated issues.'
  • You ran /simplify on a staging diff but Claude went beyond simplification by fixing dangerous behaviors and removing features, prompting pushback
  • Claude wrote JSDoc with too little information and jumped straight to editing without explaining its plan first, requiring you to correct its style choices around Promise<boolean> disclosure
Primary Friction Types
Wrong Approach
37
Buggy Code
16
Misunderstood Request
14
User Rejected Action
9
Excessive Changes
6
Tool Error
3
Inferred Satisfaction (model-estimated)
Frustrated
11
Dissatisfied
21
Likely Satisfied
145
Satisfied
67
Happy
5

Existing CC Features to Try

Suggested CLAUDE.md Additions

Just copy this into Claude Code to add it to your CLAUDE.md.

  • Across 5+ sessions involving Skyline org management UI fixes, Claude repeatedly couldn't find where UI elements (like ID columns) were rendered, requiring multiple rounds of user correction.
  • Multiple sessions showed friction with Biome lint failures blocking commits and requiring several read-edit cycles to resolve formatting issues.
  • The /simplify skill and code review sessions repeatedly caused friction when Claude went beyond the requested scope, prompting user pushback.
  • Multiple sessions were blocked by pre-commit hook failures on pre-existing lint errors, causing frustration and wasted time.
These details were consistent across 70+ sessions and would prevent Claude from making wrong assumptions about tooling.

Just copy this into Claude Code and it'll set it up for you.

Custom Skills
Reusable prompts for repetitive workflows triggered by a single /command.
Why for you: You already use skills like /simplify and PR review. With 18 git_operations and 8 commit_and_push sessions, a /commit-mr skill that handles lint-check → commit → push → create MR would eliminate your most common friction points.
mkdir -p .claude/skills/commit-mr && cat > .claude/skills/commit-mr/SKILL.md << 'EOF'
# Commit and Create MR
1. Run `biome check --apply .` to fix lint issues
2. Stage all changes with `git add -A`
3. Generate a conventional commit message from the diff
4. Commit (use --no-verify only if pre-existing lint errors block)
5. Push to current branch
6. Create MR on GitLab with description summarizing changes
EOF
Hooks
Auto-run shell commands at specific lifecycle events like before committing.
Why for you: Your top friction types are 'wrong_approach' (37 occurrences) and 'buggy_code' (16). A PostToolUse hook that runs biome check after every edit would catch lint and formatting issues before they snowball into multi-iteration fixes.
# Add to .claude/settings.json (hook input arrives as JSON on stdin;
# the edited file's path is in .tool_input.file_path):
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path // empty' | xargs -r npx biome check --apply"
          }
        ]
      }
    ]
  }
}
Headless Mode
Run Claude non-interactively from scripts and CI/CD.
Why for you: You already run parallel PR review agents on 500+ file MRs. Headless mode in your GitLab CI would automate PR reviews on every MR without manual triggering, saving significant time on your 14 code_review sessions.
# In .gitlab-ci.yml:
ai-review:
  stage: review
  script:
    - claude -p "Review this MR diff for bugs, security issues, and code quality. Focus on TypeScript and Rust files. Post findings as MR comments." --allowedTools "Read,Bash,Grep,Glob,mcp__jetbrains__open_file_in_editor"

New Ways to Use Claude Code

Just copy this into Claude Code and it'll walk you through it.

Reduce git workflow friction with structured prompts
Bundle your git operations into explicit multi-step prompts instead of open-ended requests.
26 of your 72 sessions involved git_operations or commit_and_push, and these frequently hit friction from lint hooks, wrong branch names, or MR creation issues. By giving Claude a structured prompt with explicit steps, you avoid the back-and-forth. This also prevents the common pattern where Claude tries to commit before running lint checks.
Paste into Claude Code:
Run biome check and fix any issues. Then stage all changes, commit with a conventional commit message based on the diff, push to the current branch, and create a GitLab MR. If pre-commit hooks fail on pre-existing errors, use --no-verify and note it.
Front-load context for UI bug fixes
Include the file path or component name when reporting UI issues instead of just describing the visual problem.
Across 6+ Skyline UI sessions, Claude repeatedly couldn't locate where UI elements were rendered, leading to multiple failed searches and user frustration. The org management ID column issue alone spanned 5 sessions. Providing a file path or even a grep hint (e.g., 'check the users table component') would have resolved these in one round-trip instead of five.
Paste into Claude Code:
There's an unwanted ID column showing in the org management page. Search all TSX/Vue files under src/ for table column definitions that reference 'id' or 'ID'. Show me what you find before making any changes.
Use explicit scope constraints to prevent over-editing
Add 'ONLY do X, do NOT do Y' guardrails when asking for targeted changes.
Your top friction type is 'wrong_approach' (37 instances), with 'excessive_changes' (6) and scope creep being recurring themes. The /simplify session and JSDoc session both showed Claude going beyond what was asked. Explicit constraints dramatically reduce this. This is especially important for review/refactor tasks where Claude tends to 'helpfully' fix adjacent issues.
Paste into Claude Code:
Simplify ONLY the code in [file]. Rules: Do NOT remove any features or change behavior. Do NOT fix bugs unless I ask. ONLY reduce complexity, extract helpers, and improve readability. Show me the plan before editing.

On the Horizon

Your 91 sessions reveal a power user leveraging parallel agents and structured workflows — the next leap is making Claude autonomously own entire pipelines from PR review to deployment-ready code.

Autonomous Test-Driven Bug Fix Pipelines
With 12 bug fix sessions and 16 instances of buggy code friction, you can shift to a workflow where Claude autonomously reproduces bugs via failing tests, iterates fixes until green, and commits — no babysitting needed. Your slack-agent TDD session (70/71 tasks) proved this works at scale; make it the default for every bug.
Getting started: Use the Agent tool to spawn a sub-agent that runs tests in a loop, and set up a CLAUDE.md instruction that enforces write-test-first for all bug fixes.
Paste into Claude Code:
I have a bug: [describe bug]. Your workflow: 1) Search the codebase to understand the relevant code. 2) Write a failing test that reproduces the bug exactly. 3) Run the test to confirm it fails. 4) Implement the minimal fix. 5) Run the full test suite in a loop until ALL tests pass, iterating on your fix if needed. 6) Run the linter and fix any issues. 7) Commit with a conventional commit message. Do NOT ask me questions — make reasonable assumptions and proceed autonomously.
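The "run until green" core of this prompt (steps 4-5) can be sketched as ordinary code. This is a hedged illustration, not Claude Code's actual internals: `runSuite` and `applyFix` are hypothetical injection points that would, in practice, shell out to your test runner and hand failing output back to the agent.

```typescript
// Generic iterate-until-green loop, as described in the TDD workflow above.
// `runSuite` and `applyFix` are hypothetical hooks (assumptions, not real APIs).
type SuiteResult = { passed: boolean; failures: string[] };

async function iterateUntilGreen(
  runSuite: () => Promise<SuiteResult>,
  applyFix: (failures: string[]) => Promise<void>,
  maxAttempts = 5,
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await runSuite();
    if (result.passed) return true; // suite is green: stop iterating
    await applyFix(result.failures); // feed failures back for another fix
  }
  return false; // bound the loop so a stubborn bug can't run forever
}
```

The `maxAttempts` bound is the important design choice: it keeps an autonomous session from looping indefinitely and forces an escalation back to you instead.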
Parallel Multi-Agent PR Review Standard
Your parallel review sessions (519 files reviewed, critical bugs caught) are your highest-value workflow. Standardize this into a one-command pipeline that splits reviews by concern — security, performance, correctness, style — then auto-posts a consolidated report to GitLab with severity rankings. This eliminates the interrupted/restarted review sessions you experienced.
Getting started: Build a custom slash command or CLAUDE.md skill that spawns 4+ parallel Agent calls, each with a focused review persona, and consolidates results before posting to your GitLab MR API.
Paste into Claude Code:
Review MR !{number}. Spawn these parallel review agents: 1) SECURITY agent: check for exposed secrets, auth bypasses, injection risks. 2) CORRECTNESS agent: verify logic, edge cases, error handling. 3) ARCHITECTURE agent: assess coupling, abstraction quality, and consistency with existing patterns. 4) DX agent: check naming, TypeScript types, dead code, and linting. Each agent should output findings as {severity: critical|warning|nit, file, line, issue, suggestion}. Consolidate all findings into a single structured report sorted by severity. Post the report as a GitLab MR comment using the API. Skip nits if there are more than 5 critical/warning items.
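The consolidation step at the end of that prompt is mechanical enough to sketch directly. A minimal illustration, assuming the `{severity, file, line, issue, suggestion}` finding shape from the prompt; the severity ordering and the "skip nits if more than 5 critical/warning items" rule are implemented literally:

```typescript
// Hypothetical sketch of merging parallel review agents' findings into one
// severity-sorted report, per the prompt above. Not the actual skill code.
type Severity = "critical" | "warning" | "nit";

interface Finding {
  severity: Severity;
  file: string;
  line: number;
  issue: string;
  suggestion: string;
}

const severityRank: Record<Severity, number> = { critical: 0, warning: 1, nit: 2 };

function consolidate(agentReports: Finding[][]): Finding[] {
  // Merge all agents' findings and sort critical -> warning -> nit.
  const all = agentReports
    .flat()
    .sort((a, b) => severityRank[a.severity] - severityRank[b.severity]);
  const serious = all.filter((f) => f.severity !== "nit");
  // Skip nits entirely when there are more than 5 critical/warning items.
  return serious.length > 5 ? serious : all;
}
```

Having each agent emit this structured shape (rather than free-form prose) is what makes the consolidation and GitLab posting steps reliable.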
Figma-to-MR Autonomous Design Implementation
You already prototyped a designer-dev skill for Figma-to-MR workflows. The next step is a fully autonomous pipeline: Claude reads a Figma URL from an artifact, generates components matching your existing TypeScript/CSS patterns, runs visual regression or snapshot tests, and opens the MR — turning a 2-hour designer-developer handoff into a 10-minute autonomous flow. This directly addresses your 6 UI fix sessions and the friction around Claude not finding the right components.
Getting started: Combine your artifact workflow (which worked well in the tequila rename session) with a CLAUDE.md skill that includes your component library conventions, Biome config, and a structured task checklist.
Paste into Claude Code:
Implement the UI changes from this Figma design: [URL or description]. Workflow: 1) Read AGENTS.md and the relevant skill docs to understand conventions. 2) Search the codebase for existing similar components and reuse patterns/tokens. 3) Create a task checklist as an artifact. 4) Implement each component in TypeScript with proper types, using existing CSS variables and design tokens — do NOT introduce new colors or spacing values. 5) After each file, run Biome lint and fix issues immediately. 6) Run the full test suite and fix any failures. 7) Verify all checklist items are done. 8) Commit with a descriptive message and push to a new branch. 9) Open a GitLab MR with a summary of visual changes made and components affected.
"User asked Claude to find a specific meme about 'stages of AI coding' they saw on social media — Claude searched extensively but every platform blocked it"
In a delightfully human moment, a user remembered a funny meme about the stages of AI-assisted coding and asked Claude to track it down. Claude gamely tried multiple search engines and social media platforms, but bots were blocked at every turn. The meme remained at large.