Last updated Jun 26, 2025

Building a Great Prompt

The Challenge: Weekly Team Productivity Report

Let's watch a prompt evolve from a vague idea to a precise specification. Our goal: create an agent that generates weekly team productivity reports by pulling data from Google Calendar and Notion project databases and distributing the results via Slack.

TLDR: Great prompts evolve through iteration. Start with a simple goal, then progressively add specificity: exact requirements, output format details, and edge-case handling. Each iteration reduces ambiguity and increases reliability.

Iteration 1: The Initial Experiment

Most users start with something like this:

❌ Version 1: Basic Experiment

"Generate a report on how productive my team was this week."

Problems with this prompt:

  • "Productive" is undefined - what metrics matter?

  • No data sources specified - calendar? projects? tasks?

  • No output format defined - where does the report go?

  • "Team" is ambiguous - which people or departments?

Iteration 2: Adding Specificity

Now we add specific data sources and basic metrics:

🔶 Version 2: More Specific

"Every Monday, create a productivity report for the Product Team. Check Google Calendar for meetings and our Notion workspace for completed tasks, then post the report to Slack."


Improvements: specific team, data sources, and output destination. Still problematic: "our Notion workspace" is vague, no output format is specified, and "completed tasks" is undefined.

Iteration 3: Exact Details and Structure

Now we add precise data sources, calculation methods, and output formatting:

🟡 Version 3: Detailed Specification

Create weekly Product Team report every Monday at 9 AM.

Check:
- Google Calendar: "Product Team Calendar" 
  • Count meetings with >2 attendees from Product Team
  • Exclude 1:1s and personal events
  • Calculate total hours vs focus time

- Notion Database: "Sprint Tasks" (ID: abc123)
  • Tasks with Status = "Done" 
  • Assigned to: @alice, @bob, @charlie, @diana
  • Count by priority (P0, P1, P2)

Output:
📊 Product Team Weekly Summary
Meeting Analysis: [X] hours meetings, [Y] hours focus
Tasks: P0: [#] | P1: [#] | P2: [#] | Points: [#]

Major improvements: specific names and IDs, clear criteria, exact format. Still missing: error handling, edge cases, and validation.
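The calculations V3 asks for are mechanical once the data is in hand. A minimal Python sketch, using made-up records in place of real Calendar and Notion responses (the field names here are invented for illustration):

```python
# Hypothetical records standing in for Google Calendar and Notion data.
events = [
    {"attendees": 4, "hours": 1.0, "is_one_on_one": False},
    {"attendees": 2, "hours": 0.5, "is_one_on_one": True},   # excluded: 1:1
    {"attendees": 5, "hours": 2.0, "is_one_on_one": False},
]
tasks = [
    {"status": "Done", "assignee": "alice", "priority": "P0"},
    {"status": "Done", "assignee": "bob", "priority": "P1"},
    {"status": "In Progress", "assignee": "charlie", "priority": "P1"},
]

# V3's criteria: count meetings with >2 attendees, excluding 1:1s.
team_meetings = [e for e in events if e["attendees"] > 2 and not e["is_one_on_one"]]
meeting_hours = sum(e["hours"] for e in team_meetings)

# V3's criteria: tasks with Status = "Done", counted by priority.
done = [t for t in tasks if t["status"] == "Done"]
by_priority = {p: sum(1 for t in done if t["priority"] == p) for p in ("P0", "P1", "P2")}

print(f"Meetings: {meeting_hours} hours; Tasks: {by_priority}")
```

Note how every filter in the code maps to one line of the prompt; anything the prompt leaves unsaid, the agent has to guess.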

Iteration 4: Heavy Specification

The final version includes comprehensive error handling, edge cases, and quality assurance. Here's the complete specification broken into logical sections:

✅ Part 1: Core Configuration

SCHEDULE: Every Monday at 9:00 AM
TEAM: @alice, @bob, @charlie, @diana
SOURCES: Google Calendar "Product Team Calendar"; Notion database "Sprint Tasks" (ID: abc123)
DELIVERY: Post to Slack

✅ Part 2: Data Collection Rules

CALENDAR:
• Count meetings with >2 Product Team attendees
• Exclude 1:1s and personal events
• Calculate total meeting hours vs focus time

NOTION:
• Include tasks with Status = "Done", assigned to @alice, @bob, @charlie, or @diana
• Count by priority (P0, P1, P2) and sum story points

✅ Part 3: Output Format

📊 Product Team Weekly Summary | Week of [MONDAY_DATE]

🤝 Meeting Analysis:
• Total team meeting time: [X.X] hours
• Average per person: [X.X] hours/week  
• Focus time ratio: [XX]% (target: >70%)

✅ Task Delivery:
• P0 Critical: [#] completed / [#] planned ([XX]%)
• P1 High: [#] completed / [#] planned ([XX]%)
• P2 Medium: [#] completed / [#] planned ([XX]%)
• Story points: [#] ([XX]% of sprint goal)

📈 Insights:
• Most productive day: [DAY] ([#] tasks)
• Sprint progress: [ON_TRACK/BEHIND/AHEAD]
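Exact placeholders make rendering deterministic: the agent fills a fixed template rather than improvising prose. A hypothetical sketch (the function name and inputs are invented, not part of any platform API):

```python
def render_summary(monday_date, meeting_hours, team_size, focus_ratio, points, sprint_goal):
    """Fill the fixed template; bracketed placeholders become computed values."""
    return "\n".join([
        f"📊 Product Team Weekly Summary | Week of {monday_date}",
        "",
        "🤝 Meeting Analysis:",
        f"• Total team meeting time: {meeting_hours:.1f} hours",
        f"• Average per person: {meeting_hours / team_size:.1f} hours/week",
        f"• Focus time ratio: {focus_ratio:.0%} (target: >70%)",
        "",
        f"• Story points: {points} ({points / sprint_goal:.0%} of sprint goal)",
    ])

print(render_summary("2025-06-23", 12.5, 4, 0.72, 18, 24))
```

Because the template is fixed, two runs with the same data produce byte-identical reports, which is what makes week-over-week comparison trustworthy.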

✅ Part 4: Error Handling & Validation

ERROR RESPONSES:
• Calendar fail: "⚠️ Calendar unavailable - using previous average"
• Notion empty: "⚠️ No tasks found - verify sprint setup"
• Missing member: "⚠️ Data missing for [NAME]"
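The fallback behaviour the spec describes can be made explicit in code. A sketch, assuming a hypothetical fetch callable (names here are illustrative, not a real API):

```python
def calendar_hours_with_fallback(fetch, previous_average):
    """Try the live calendar; on failure, return the previous average
    together with the warning line the spec defines."""
    try:
        return fetch(), None
    except Exception:
        return previous_average, "⚠️ Calendar unavailable - using previous average"

def failing_fetch():
    raise RuntimeError("Calendar API timeout")  # simulated outage

hours, warning = calendar_hours_with_fallback(failing_fetch, previous_average=11.0)
print(hours, warning)
```

Writing the degraded-mode value and the warning text into the prompt means the agent never has to decide, mid-failure, what "reasonable" behaviour looks like.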

The Evolution Pattern

Notice how each iteration systematically eliminated ambiguity:

  • V1 → V2: Added specific team, data sources, and destination

  • V2 → V3: Defined exact criteria, calculations, and output format

  • V3 → V4: Added comprehensive error handling and validation

The difference between V1 and V4 is the difference between an agent that might work sometimes and an agent that works reliably in production across all your integrated platforms. Every great prompt follows this evolution from experiment to production-ready specification.

© 2025. All rights reserved. Incredible.one
