Post-Launch Review
Purpose: Evaluate project outcomes against goals and extract lessons that improve future work
How to run this meeting
Start by celebrating what worked — genuinely, not as a formality. Teams that jump straight to problems miss the signal embedded in their wins. What made something go well? Was it a specific practice, a good decision, a person's contribution, or favorable conditions? Understanding why something succeeded is just as important as understanding why something failed, and it's more likely to produce reusable insight.
When you move to what didn't work, hold the focus on systems and decisions rather than individuals. The question is not "who dropped the ball" but "what conditions made that outcome likely, and how do we change those conditions next time?" Ask what we would do differently if we started this project again tomorrow. Ground every lesson in a specific, documented example — "communication broke down" is not actionable; "we didn't have an agreed escalation path for scope changes, so the design team was making product decisions they weren't equipped to make" is.
Compare actual vs. planned across three dimensions: timeline, scope, and quality. Projects often finish on time by silently cutting scope or compromising quality — surfacing this explicitly is how you understand what the timeline actually cost. Invite participation from the full team, not just leads. The people closest to the work often have the clearest view of what went wrong and why.
Before the meeting
- Pull the original project brief, timeline, and goals so you can compare directly
- Gather launch metrics — what do you know so far about actual outcomes vs. goals?
- Ask all participants to write down 2-3 things that went well and 2-3 things they'd change ahead of the meeting
- Review action items from the project retrospective (if there was one) and check status
- Prepare a timeline comparison: planned vs. actual for each milestone
- Identify patterns across prior post-launch reviews to see if the same issues are recurring
Meeting Details
- Date:
- Facilitator:
- Attendees:
- Duration: 60–90 minutes (schedule 2–4 weeks post-launch, once initial data is available)
Goals vs. Results
Compare what you set out to achieve with what actually happened. Be honest — the value of this section is in the precision of the comparison, not in making the project look good.
| Goal | Target | Actual | Status |
|---|---|---|---|
| Vendor onboarding time | 5 days avg | 6.2 days avg | Partial — improving but not at target |
| Eliminate internal handoffs (standard types) | 0 handoffs | 0.4 avg (mostly edge cases) | Mostly achieved |
| Reduce onboarding support tickets | -40% | -28% | Partial |
| Timeline: Discovery complete | May 9 | May 9 | On time |
| Timeline: Design handoff | May 23 | May 29 | 6 days late |
| Timeline: Internal beta | June 20 | June 27 | 7 days late |
| Timeline: Full release | July 11 | July 18 | 7 days late |
Metrics
What data do you have on product performance since launch? Include leading indicators (adoption, errors, support volume) and lagging indicators (business outcomes) if available.
- Adoption: 847 vendors onboarded through new portal in first 3 weeks (vs. 200 projected for first month — ahead of plan)
- Error rate: 4.2% of onboarding flows hit an error state in week 1; down to 1.8% by week 3 after hotfix
- Support tickets: 28% reduction in onboarding-related tickets vs. same period last quarter
- Time to onboard: Average 6.2 days, with 60% of vendors completing in under 5 days; a small number of complex-category outliers are pulling the average up (see the sketch after this list)
- NPS from vendors: 42 (n=180 responses) — notably higher than the previous portal NPS of 18
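To make the outlier effect above concrete, here is a minimal illustrative sketch in Python. The durations in it are made-up placeholder values, not the actual launch data; the point is only that a few long complex-category onboardings can pull the mean well above the 5-day target while the typical vendor is still under it.

```python
import statistics

# Hypothetical onboarding durations in days (illustrative only, not launch data):
# most vendors finish quickly, but a few complex-category vendors take far longer.
durations = [3, 4, 4, 4, 5, 4, 3, 5, 4, 18, 21, 16]

mean_days = statistics.mean(durations)      # pulled up by the three long outliers
median_days = statistics.median(durations)  # closer to the typical vendor's experience

print(f"mean: {mean_days:.1f} days, median: {median_days:.1f} days")
# mean: 7.6 days, median: 4.0 days -> the mean alone hides the fact that
# most vendors already finish under the 5-day target.
```

This is why the review tracks both the average and the share of vendors completing in under 5 days.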
What Worked
Specific practices, decisions, or behaviors that contributed to a good outcome. Name the pattern, not just the instance — what made it repeatable?
- Weekly design-engineering sync: The recurring sync between Suki and Elena, established at kickoff, prevented the design-handoff delay from becoming worse. Both leads cited it as the most useful process change on this project.
- Migration spike in week 1: Doing the data migration spike early completely changed the risk profile of the project. We avoided a late-stage surprise that would have pushed the timeline by 3+ weeks.
- Pre-read before status meetings: Adoption of the written pre-read meant status meetings stayed at 30 minutes. In past projects, status meetings routinely ran 60+ minutes.
- Beta vendor selection process: Choosing 5 vendors who represented different edge cases (not just easy ones) surfaced 3 bugs before full release that would have been high-severity incidents.
What Didn't Work
Be specific. What would you change if you started again tomorrow? Ground each item in a concrete example from this project.
- Legal review wasn't started early enough: The data retention decision sat open for 11 days and put the release at risk. We knew legal was on the critical path from the kickoff, but didn't create a concrete trigger for starting the review. Next time: legal review starts at spec sign-off, not at dev complete.
- Design buffer wasn't in the plan: The original timeline had no slack in the design phase. When the handoff slipped, the delay compressed everything downstream. Build explicit buffer into design-to-engineering handoff milestones.
- Edge cases in vendor categories weren't scoped: The "complex vendor" category edge cases weren't well-defined in the spec, and they're the reason we're above our onboarding time target. We shipped knowing this was undefined — we should have either scoped it or explicitly deferred it.
- Success metrics weren't instrumented at launch: We didn't have monitoring in place to track onboarding time and support ticket volume from day one. The first week of data was manual. Set up metric dashboards before, not after, launch; a rough instrumentation sketch follows this list.
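As a rough illustration of what "instrumented at launch" could mean here, the sketch below emits a timestamped event at each onboarding milestone so that time-to-onboard and error volume are queryable from day one rather than reconstructed by hand. The event names and the record_event helper are hypothetical placeholders, not the project's actual tooling.

```python
from datetime import datetime, timezone
import json

def record_event(name: str, vendor_id: str, **fields) -> None:
    """Hypothetical sink: in practice this would write to an analytics
    pipeline or dashboard backend; here it just prints structured JSON."""
    event = {
        "event": name,
        "vendor_id": vendor_id,
        "ts": datetime.now(timezone.utc).isoformat(),
        **fields,
    }
    print(json.dumps(event))

# Emit events at the milestones the review cares about, so onboarding
# duration and error rate can be charted from launch day onward.
record_event("onboarding_started", vendor_id="v-1042", category="standard")
record_event("onboarding_error", vendor_id="v-1042", step="tax_form", code="E422")
record_event("onboarding_completed", vendor_id="v-1042", duration_days=4.5)
```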
Lessons Learned
Distilled, reusable principles from this project. These should be specific enough to actually change what you do next time.
- Legal dependencies need a hard start date, not a trigger event. If a review is on the critical path, put it in the project plan with a calendar date, not "when X is ready."
- Design phases need buffer by default. Propose a 15% schedule buffer in all future design phases and protect it explicitly.
- Instrument success metrics before launch. Add "metrics dashboard live" to the launch criteria checklist in the project kickoff template.
- Scope the edge cases or defer them explicitly. Leaving category edge cases undefined creates hidden scope that affects real outcomes. Force the decision: in scope, out of scope, or future milestone.
- Vendor onboarding complexity scales with category type. Build a complexity score into the onboarding flow so complex vendors are routed to a different path — this is the fix for the 6.2-day average. A rough sketch of that routing follows below.
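A minimal sketch of what that complexity-based routing could look like. The scoring weights, vendor attributes, and route names below are hypothetical placeholders, not an agreed design; the real score would come from whatever defines the "complex vendor" category once it is scoped.

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    category: str          # e.g. "standard" or "complex"
    countries: int         # countries the vendor operates in (assumed attribute)
    custom_contract: bool  # needs non-standard contract terms (assumed attribute)

# Hypothetical weights; anything not in the review above is a placeholder.
CATEGORY_WEIGHTS = {"standard": 0, "marketplace": 2, "complex": 4}

def complexity_score(v: Vendor) -> int:
    score = CATEGORY_WEIGHTS.get(v.category, 2)
    score += 1 if v.countries > 1 else 0
    score += 2 if v.custom_contract else 0
    return score

def onboarding_route(v: Vendor) -> str:
    # Low-complexity vendors stay on the self-serve path; high-complexity
    # vendors go to an assisted path so they stop inflating the self-serve
    # time-to-onboard average.
    return "self_serve" if complexity_score(v) <= 2 else "assisted"

print(onboarding_route(Vendor("standard", countries=1, custom_contract=False)))  # self_serve
print(onboarding_route(Vendor("complex", countries=3, custom_contract=True)))    # assisted
```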
Action Items
| Owner | Action | Due Date | Status |
|---|---|---|---|
| @marcus.webb | Add "legal review start date" field to project kickoff template | 2025-08-01 | Open |
| @marcus.webb | Add "metrics dashboard live" to standard launch checklist | 2025-08-01 | Open |
| @elena.voss | Investigate vendor complexity routing to bring onboarding average to <5 days | 2025-08-15 | Open |
| @raj.patel | Share lessons learned doc with all PMs and tech leads | 2025-07-25 | Open |
| @suki.park | Document the design-engineering sync format as a reusable process doc | 2025-08-01 | Open |
Follow-up
Distribute the written post-launch review to all project participants and key stakeholders within 48 hours. Store it in a searchable location alongside the original project brief — future teams should be able to find and learn from it. Review lessons learned at the kickoff of the next similar project, not just when things go wrong. The compounding value of post-launch reviews comes from actually using them.