QA & Test Review

Purpose: Assess test results and make a clear recommendation on release readiness

How to run this meeting

Lead with risk, not with numbers. A 98% pass rate sounds impressive until you learn the 2% of failures are in the payment flow. Open the meeting by answering the question every stakeholder is actually asking: "Is it safe to ship?" Then support that answer with data. A QA review that buries the lede — walking through every pass before getting to the critical failures — wastes everyone's time and obscures the important signal.

Categorize every issue by severity before the meeting. Use a consistent rubric: P0 (data loss, security vulnerability, crash on critical path), P1 (core workflow broken, no workaround), P2 (significant friction, workaround exists), P3 (cosmetic, low-frequency edge case). Severity determines whether an issue blocks release or gets tracked as a known issue. Don't let the group relitigate severity in the meeting — that should be pre-assigned by QA and only challenged if there's new information.
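The rubric above maps cleanly to a blocking rule — a minimal sketch (the names `SEVERITY_RUBRIC` and `blocks_release` are illustrative, not part of any issue tracker's API):

```python
# Sketch of the P0-P3 rubric described above.
# Severity is pre-assigned by QA before the meeting; only P0/P1 block release.
SEVERITY_RUBRIC = {
    "P0": "data loss, security vulnerability, crash on critical path",
    "P1": "core workflow broken, no workaround",
    "P2": "significant friction, workaround exists",
    "P3": "cosmetic, low-frequency edge case",
}

def blocks_release(severity: str) -> bool:
    """P0/P1 issues block release; P2/P3 are tracked as known issues."""
    if severity not in SEVERITY_RUBRIC:
        raise ValueError(f"unknown severity: {severity}")
    return severity in ("P0", "P1")
```

Encoding the rule this way makes the point of the section concrete: the decision is a function of severity, not of pass-rate percentages.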

Make a clear ship / no-ship recommendation. "It depends" is not a QA outcome. QA owns the recommendation; the PM or engineering lead owns the final decision. If QA recommends no-ship, document what would need to be fixed to change that recommendation. This keeps the conversation productive even when the answer is "not yet."

Before the meeting

  • QA completes all test cases and logs results in the issue tracker before the meeting
  • All issues are triaged and assigned a severity (P0–P3) before the meeting
  • Prepare a written recommendation (ship / no-ship / ship with conditions) with rationale
  • Pull regression test results alongside new feature test results
  • Identify any areas where test coverage is incomplete and document the untested risk

Meeting Details

  • Date:
  • Facilitator:
  • Attendees:
  • Duration: 30–45 minutes

Test Scope

Define what was tested, on what platforms/environments, and what was explicitly out of scope. Incomplete test coverage is not a secret — state it clearly.

Feature: Scheduled Reports — admin scheduling UI, report generation, and email delivery

Environments tested: Staging (latest deploy as of 2024-12-10), Chrome 120, Firefox 121, Safari 17; mobile web (iOS Safari 17) for email output only

Test types run: Manual functional testing, regression suite (automated, 847 cases), email rendering across 8 clients (Litmus), load test (500 concurrent schedules)

Explicitly not tested:

  • Outlook 2016 desktop client (no license available — documented risk)
  • Schedules with >5,000 recipients (exceeds current customer max)
  • Report generation with workspaces > 2M rows (load test topped out at 1.5M)

Pass / Fail Summary

High-level test results. Keep this brief — the issues section is where the detail lives.

| Test area | Cases run | Passed | Failed | Skipped |
| --- | --- | --- | --- | --- |
| Scheduling UI | 42 | 40 | 2 | 0 |
| Report generation | 31 | 30 | 1 | 0 |
| Email delivery & rendering | 48 | 44 | 4 | 0 |
| Regression suite | 847 | 841 | 6 | 0 |
| Load test (500 schedules) | Pass | — | — | — |
| Total | 968 | 955 | 13 | 0 |

Overall pass rate: 98.7% — but see Major Issues for severity context.
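The headline figure is derived from the per-area counts — a quick sketch of the arithmetic, with counts taken from the summary table (the load test reports a single pass/fail and is excluded from case counts):

```python
# Per-area (cases_run, passed) counts from the summary table above.
areas = {
    "Scheduling UI": (42, 40),
    "Report generation": (31, 30),
    "Email delivery & rendering": (48, 44),
    "Regression suite": (847, 841),
}

total_run = sum(run for run, _ in areas.values())
total_passed = sum(passed for _, passed in areas.values())
pass_rate = round(100 * total_passed / total_run, 1)
print(f"{total_passed}/{total_run} passed = {pass_rate}%")  # 955/968 passed = 98.7%
```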


Major Issues

List all P0 and P1 issues. Include a brief description, reproduction steps, and current status. These are the issues that determine ship/no-ship.

P0 — None identified

P1: Report not generated when workspace timezone is stored as UTC offset (not IANA string)

  • Affected: ~15% of workspaces based on data audit
  • Symptom: Job enqueued successfully but report generation silently fails; no email sent, no error surfaced to admin
  • Status: Fix in PR #1189, in code review
  • Issue #1201

P1: "Last 30 days" date range off by 1 day for workspaces in UTC-offset timezones

  • Affected: Same ~15% of workspaces as above
  • Symptom: Report covers 29 days instead of 30
  • Status: Fix included in same PR #1189
  • Issue #1202

P2 issues (4 total): minor UI misalignments in Firefox, incorrect pluralization in confirmation copy, email subject line truncated in the Gmail mobile app at >60 characters. Full list in the QA tracker.


Risk Areas

Identify areas of elevated risk even where tests passed, particularly where coverage was incomplete.

  • Outlook 2016: Not tested. If any enterprise customers use Outlook 2016, email rendering is unknown. Recommend asking CS to check customer email client data before launch.
  • Large workspace performance: Load test validated 500 concurrent schedules and 1.5M row reports. Behavior beyond these limits is untested. Monitor p95 report generation time closely at launch.
  • Error surfacing: Silent failure mode for the timezone P1 is a pattern risk — if this can happen for timezone issues, it may happen for other report generation errors. Recommend adding a general error notification to the admin before launch.
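The error-surfacing recommendation amounts to wrapping report generation so failures notify the admin instead of vanishing — a minimal sketch, assuming hypothetical `generate_report` and `notify_admin` hooks (not names from the actual codebase):

```python
# Sketch of the recommended pattern: surface report-generation failures
# to the admin instead of failing silently (the P1 pattern risk above).
def run_scheduled_report(schedule, generate_report, notify_admin):
    try:
        report = generate_report(schedule)
    except Exception as exc:
        # Surface the failure rather than swallowing it, then re-raise
        # so the job is recorded as failed in the scheduler.
        notify_admin(schedule, f"Report generation failed: {exc}")
        raise
    return report
```

The key design choice is that notification happens in one generic wrapper, so any future failure mode (not just the timezone bug) is surfaced automatically.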

Release Recommendation

State a clear recommendation. QA makes the recommendation; PM/eng lead makes the final call.

Recommendation: No-ship until P1 issues are resolved

Both P1 issues affect the same ~15% of workspaces and share a single fix (PR #1189). Once #1189 is merged and the two P1 test cases re-run successfully, the recommendation changes to Ship with monitoring — with CS notified to watch for timezone-related reports issues in the first week post-launch.

P2 issues do not block release and should be tracked for the next patch.


Action Items

| Owner | Action | Due Date | Status |
| --- | --- | --- | --- |
| @backend | Merge and deploy PR #1189 (timezone fix) | 2024-12-11 | Open |
| @qa | Re-run P1 test cases after PR #1189 merges | 2024-12-11 | Open |
| @priya | Ask CS to check customer Outlook 2016 usage before launch | 2024-12-11 | Open |
| @backend | Add error notification to admin for failed report generation | 2024-12-13 | Open |
| @qa | Log P2 issues in backlog for next patch | 2024-12-11 | Open |

Follow-up

QA updates the release recommendation in writing once P1 fixes are validated. The PM communicates the updated status to stakeholders and confirms the launch date. If the recommendation changes to Ship, QA posts a final sign-off comment in the release tracking issue. Known P2 issues and untested risk areas should be handed off to the on-call engineer at launch.
