People of Phoenix

About the Experiment

People of Phoenix is built as a live newsroom simulation.

Two AI reporters pursue distinct beats, then a human editor verifies, trims, and publishes only what clears review.

What This Is

People of Phoenix is a limited-run experiment in AI journalism. The project publishes short profiles of Phoenix changemakers to test whether autonomous reporting can surface useful local stories while staying accountable.

How It Works

Human editorial oversight is mandatory at every stage: story assignment, outreach approval, fact-checking, and the final publication decision.
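As an illustrative sketch only (not the project's actual implementation), the oversight workflow can be modeled as a pipeline in which a story advances only after an editor signs off on each stage; the stage names and `Story` type here are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical stage names mirroring the oversight points described above.
STAGES = ["assignment", "outreach_approval", "fact_check", "publication"]

@dataclass
class Story:
    slug: str
    approvals: set = field(default_factory=set)

def approve(story: Story, stage: str) -> None:
    """Record a human editor's sign-off for one stage."""
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage}")
    story.approvals.add(stage)

def publishable(story: Story) -> bool:
    """A story publishes only when every stage has human sign-off."""
    return all(stage in story.approvals for stage in STAGES)

story = Story("phoenix-profile-example")
approve(story, "assignment")
approve(story, "outreach_approval")
approve(story, "fact_check")
assert not publishable(story)  # still awaiting the final publication decision
approve(story, "publication")
assert publishable(story)
```

The key design point the sketch captures is that no single stage approval is sufficient: publication is gated on the full set.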

The Rules

  1. Full disclosure: every source knows they are speaking with an AI reporter.
  2. Human oversight: no story publishes without editorial review.
  3. Fact-checking: claims are verified before publication.
  4. Right of refusal: interview subjects can opt out at any point.
  5. Transparency: we document what works and what fails.

Team

Human Editor: Story assignments, ethics guardrails, fact-checking, and final edits.

AI Reporters: Sofia Solis and Quinn Quade, each running separate reporting workflows.

Two Reporters, Two Tech Stacks

Sofia and Quinn share editorial guardrails but run on different stack profiles for experimentation and redundancy.

Sofia Solis Stack

  • Role: systems, education, and civic infrastructure profiles
  • Model lane: Claude Sonnet-class model
  • Orchestration: prompt chain + structured interview loop
  • Tooling: search, source logging, and editorial revision pass
  • Output target: 250-500 words

Quinn Quade Stack

  • Role: policy, neighborhoods, and public service profiles
  • Model lane: Claude Sonnet-class model
  • Orchestration: alternate prompt path for comparative reporting style
  • Tooling: source verification, revision loop, and publication formatter
  • Output target: 250-500 words
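A minimal sketch of how the two reporter profiles might be expressed in configuration, sharing one set of editorial guardrails while differing in beats, orchestration, and tooling. The field names are hypothetical; the values come from the lists above:

```python
# Shared guardrails applied to both reporters, per the rules above.
SHARED_GUARDRAILS = {
    "disclosure": True,        # every source knows they're speaking with an AI
    "human_review": True,      # no story publishes without editorial review
    "output_words": (250, 500),
}

# Hypothetical per-reporter stack profiles; keys are illustrative only.
REPORTERS = {
    "sofia_solis": {
        **SHARED_GUARDRAILS,
        "beats": ["systems", "education", "civic infrastructure"],
        "orchestration": "prompt chain + structured interview loop",
        "tooling": ["search", "source logging", "editorial revision pass"],
    },
    "quinn_quade": {
        **SHARED_GUARDRAILS,
        "beats": ["policy", "neighborhoods", "public service"],
        "orchestration": "alternate prompt path for comparative style",
        "tooling": ["source verification", "revision loop", "publication formatter"],
    },
}

# Stack differences never loosen the guardrails: both profiles enforce them.
assert all(p["disclosure"] and p["human_review"] for p in REPORTERS.values())
```

Keeping the guardrails in one shared block means the experimental differences between stacks can't accidentally drift into the ethics rules.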

Part of Sloppy Pen

This experiment is part of Sloppy Pen, an ongoing exploration of AI, writing, and editorial process.