

Article · Applications Layer

I Got Tired of Claude Agreeing With Everything, So I Fixed It

A practical Reddit thread on Claude sycophancy — how one user addressed it and what the responses reveal about prompting strategies for more honest AI output.

What it covers

This r/ClaudeAI thread captures a common frustration with large language models: the tendency to agree with user premises rather than push back on flawed reasoning. The original poster describes a concrete fix they implemented, and the thread became a reference point for prompting strategies that reduce sycophancy, including system prompt approaches, framing techniques, and task structuring methods that encourage genuine disagreement and critique from Claude. The community responses are the real value: a range of practitioners sharing what has worked for them across different use cases, from code review to strategic analysis to creative feedback.
Read this when you’re using Claude for tasks where honest critique matters — code review, strategic planning, argument evaluation — and you’re noticing it agrees more than it should.
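As a rough illustration of the system prompt approach mentioned above, here is a minimal sketch in Python. The prompt wording and the helper function are illustrative assumptions, not the original poster's actual fix; the payload shape follows the general Messages API style (a top-level system string plus a messages list), and the model name is a placeholder.

```python
# Illustrative anti-sycophancy system prompt. The wording here is an
# assumption for demonstration, not the prompt from the Reddit thread.
ANTI_SYCOPHANCY_SYSTEM_PROMPT = (
    "You are a critical reviewer. Do not agree by default. "
    "If the user's premise, code, or plan has flaws, say so directly, "
    "explain why, and propose a stronger alternative. "
    "Do not open with praise; lead with the most important problem."
)

def build_request(user_message: str) -> dict:
    """Assemble a Messages-API-style payload that front-loads critique.

    This is a hypothetical helper: it only builds the request dict and
    does not call any API.
    """
    return {
        "model": "claude-model-placeholder",  # placeholder, not a real model id
        "max_tokens": 1024,
        "system": ANTI_SYCOPHANCY_SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_request(
    "Review this plan: rewrite our whole backend in one weekend."
)
print(request["system"])
```

The design point the thread converges on is that the instruction lives in the system prompt rather than the user turn, so every exchange inherits the critique-first framing instead of relying on the user to re-ask for honesty each time.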

Read the thread

Read on Reddit ↗

Original thread on r/ClaudeAI.

Karpathy Skills

CLAUDE.md plugin that addresses related LLM failure modes.

Matt Pocock's Skills

Skills for grounding Claude in docs and enforcing disciplined responses.