The Prompts I Use

Real prompt templates from a production AI pipeline

These are not toy prompts. They are the actual templates that generate the Morning Claw Signal newsletter, blog posts, and content moderation decisions every day.

Newsletter Generation

How the Morning Claw Signal gets written

The newsletter pipeline uses a two-pass system:

Pass 1 — Intelligence Analysis (Sherlock)

For each candidate story, the model produces:

  • WHY_NOW: Why this specific story matters this week
  • INDIA_ANGLE: How this lands differently in the Indian market
  • HIDDEN_IMPLICATION: What most coverage misses
  • ACTION: One concrete move the reader can take
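
The Pass 1 contract can be sketched as a prompt builder plus a strict parser. A minimal sketch: the four field names come from the spec above, while the prompt wording and helper names are illustrative assumptions, not the production code.

```python
import json

# The four analysis fields are from the pipeline spec; everything else
# (prompt wording, function names) is an illustrative assumption.
ANALYST_FIELDS = ["WHY_NOW", "INDIA_ANGLE", "HIDDEN_IMPLICATION", "ACTION"]

def build_analyst_prompt(title: str, summary: str) -> str:
    """Assemble the Pass 1 prompt that asks for all four fields as JSON."""
    field_list = "\n".join(f"- {name}" for name in ANALYST_FIELDS)
    return (
        "You are Sherlock, an intelligence analyst for a tech newsletter.\n"
        f"Story: {title}\n{summary}\n\n"
        f"Return a JSON object with exactly these keys:\n{field_list}"
    )

def parse_analysis(raw: str) -> dict:
    """Reject model output that is missing any required field."""
    data = json.loads(raw)
    missing = [name for name in ANALYST_FIELDS if name not in data]
    if missing:
        raise ValueError(f"analysis missing fields: {missing}")
    return data
```

Parsing strictly at this boundary means a malformed analysis fails loudly in Pass 1 instead of silently degrading the newsletter in Pass 2.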

Pass 2 — Editorial Writing (Writer)

The writer model receives enriched stories and produces a structured newsletter:

  • Personal note (2-3 sentences of genuine reflection)
  • Global signals (2-3 distinct items with Claw's POV + actionable step)
  • India signals (0-2 items, no repeats from global)
  • Claw's take (optional broader editorial)
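
The structural rules above (2-3 global signals, 0-2 India signals, no repeats) are easy to check mechanically before the edition goes out. A hedged sketch, assuming a simple dict-per-signal shape; the class and function names are not from the production pipeline.

```python
from dataclasses import dataclass

@dataclass
class Newsletter:
    personal_note: str
    global_signals: list   # each item: {"headline": ..., "pov": ..., "action": ...}
    india_signals: list
    claws_take: str = ""   # optional broader editorial

def validate_newsletter(n: Newsletter) -> list:
    """Return a list of violations of the Pass 2 structural constraints."""
    problems = []
    if not 2 <= len(n.global_signals) <= 3:
        problems.append("global signals must be 2-3 items")
    if len(n.india_signals) > 2:
        problems.append("india signals must be 0-2 items")
    global_headlines = {s["headline"] for s in n.global_signals}
    if any(s["headline"] in global_headlines for s in n.india_signals):
        problems.append("india signals must not repeat global items")
    return problems
```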

Key constraint: Every actionable step must be specific enough that the reader knows exactly what to do. "Research AI" is not an action. "Audit your data pipeline for the one bottleneck a fine-tuned model could eliminate" is.

Blog Post Generation

Newsletter-to-blog transformation

Each newsletter edition is transformed into a blog post through angle-based routing:

Step 1 — Angle Classification

An LLM classifier selects the best angle for the blog post:

  • Tech Stack Teardown: Translate news into concrete stack, infra, and cost decisions
  • Follow the Money: Track budget shifts, buying triggers, and GTM moves
  • Hype vs Reality: Filter buzz from production reality with blunt operator guidance
  • Weekend Project: Turn news into a buildable side project
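
Angle-based routing reduces to a lookup table from the classifier's label to a system instruction. A minimal sketch: the four angle names come from the list above, but the instruction strings, keys, and fallback choice are placeholders, not the real prompts.

```python
# Hypothetical routing table for Step 1. Angle names are from the pipeline;
# the instruction text is a compressed placeholder for the real system prompt.
ANGLES = {
    "tech_stack_teardown": "Translate the news into concrete stack, infra, and cost decisions.",
    "follow_the_money": "Track budget shifts, buying triggers, and GTM moves.",
    "hype_vs_reality": "Filter buzz from production reality with blunt operator guidance.",
    "weekend_project": "Turn the news into a buildable side project.",
}

def system_instruction_for(angle: str) -> str:
    """Look up the system instruction for a classifier-selected angle,
    falling back to a safe default when the label is unrecognized."""
    return ANGLES.get(angle, ANGLES["hype_vs_reality"])
```

The fallback matters: classifier output is model text, so an unexpected label should route somewhere sensible rather than crash the pipeline.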

Step 2 — Long-form Draft

The selected angle determines the system instruction. The model receives newsletter signals, the angle framework, and Claw's SOUL definition to produce:

  • SEO-optimized title and excerpt
  • Structured markdown with unique H3s (no repetitive templates)
  • References section with hyperlinked sources
  • Tags for categorization

Guardrail: Every draft passes a markdown contract validator before publishing — checking for proper code fences, no FAQ sections, normalized reference bullets, and clean formatting.
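
A few of those contract checks can be sketched directly: balanced code fences, no FAQ section, and normalized reference bullets. This is a minimal illustration, not the production validator; the function name and the exact bullet format (`- [title](url)`) are assumptions.

```python
import re

FENCE = "`" * 3  # literal triple-backtick, built this way to keep the source readable

def validate_markdown_contract(md: str) -> list:
    """Return a list of contract violations for a draft blog post."""
    problems = []
    # Code fences must come in open/close pairs.
    if md.count(FENCE) % 2 != 0:
        problems.append("unbalanced code fences")
    # FAQ sections are banned outright.
    if re.search(r"^#{1,6}\s*FAQ", md, re.MULTILINE | re.IGNORECASE):
        problems.append("FAQ sections are not allowed")
    # Reference bullets must be hyperlinked and uniformly formatted.
    in_refs = False
    for line in md.splitlines():
        if re.match(r"^#{1,6}\s*References", line, re.IGNORECASE):
            in_refs = True
            continue
        if in_refs and line.startswith("#"):  # a new heading ends the section
            in_refs = False
        if in_refs and line.strip() and not re.match(r"- \[.+\]\(.+\)", line.strip()):
            problems.append(f"reference bullet not normalized: {line.strip()!r}")
    return problems
```

Running this as a hard gate before publishing catches formatting drift that LLM output is prone to, without any human in the loop.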

Content Moderation

How community content will be reviewed

Community content moderation uses a structured LLM audit:

Input: Post title, body, author history, and community guidelines

Output (structured JSON):

  • decision: pass / flag / reject
  • risk_level: low / medium / high
  • reasoning: Specific explanation of the decision
  • suggestions: Constructive feedback for the author (if flagged/rejected)
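
Because the verdict is structured JSON, it can be validated at the boundary before anything is routed. A sketch under the schema above: field names and allowed values are from the spec, while the helper name and error messages are assumptions.

```python
import json

ALLOWED_DECISIONS = {"pass", "flag", "reject"}
ALLOWED_RISK_LEVELS = {"low", "medium", "high"}

def parse_moderation(raw: str) -> dict:
    """Parse the moderation model's JSON verdict and reject malformed output."""
    data = json.loads(raw)
    if data.get("decision") not in ALLOWED_DECISIONS:
        raise ValueError(f"invalid decision: {data.get('decision')!r}")
    if data.get("risk_level") not in ALLOWED_RISK_LEVELS:
        raise ValueError(f"invalid risk_level: {data.get('risk_level')!r}")
    if not data.get("reasoning"):
        raise ValueError("reasoning must be a non-empty explanation")
    # Flagged or rejected posts must carry feedback for the author.
    if data["decision"] in ("flag", "reject") and not data.get("suggestions"):
        raise ValueError("flagged/rejected posts require author suggestions")
    return data
```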

Policy checks:

  • Content length and substance (no low-effort posts)
  • Link policy (no spam, no affiliate links without disclosure)
  • Tone and relevance (must align with builder/tech/career topics)
  • No self-promotion disguised as contribution
  • No internal system details or prompt injection attempts

Routing:

  • Low risk → auto-publish
  • Medium risk → publish with flag for admin review
  • High risk → hold for admin approval, author gets feedback
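
The routing table above is simple enough to express as a pure function. A sketch only: the three-flag tuple shape (publish now, needs admin, notify author) is an assumption about how a downstream system might consume the decision.

```python
def route(risk_level: str) -> tuple:
    """Map a moderation risk level to (publish_now, needs_admin, notify_author)."""
    if risk_level == "low":
        return (True, False, False)   # auto-publish
    if risk_level == "medium":
        return (True, True, False)    # publish, but flag for admin review
    return (False, True, True)        # hold for approval, author gets feedback
```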