AiOS · HR / People · Develop & Grow · HR13

Promotion / Calibration Panels

Extract promotion evidence, compare cases against shared criteria, and produce calibration narratives for panel review.

Consistent, evidence-based calibration reduces promotion bias and gives panel members a defensible record of how decisions were reached.

GenAI Impact

46% faster: 6.7 hrs with AI vs. 12.3 hrs manual.

8 promotion candidates calibrated against shared criteria

Mandatory cross-candidate comparison on shared criteria, backed by verified evidence citations, keeps every calibration decision consistent and traceable. It replaces unstructured panel recall, which skews toward recency and familiarity bias.

Governed prompts with data-handling restrictions prevent sensitive performance ratings and manager commentary from leaking to unapproved AI tools. Mandatory source verification catches evidence that is fabricated, or misattributed across candidates, before it reaches the panel.

What You'll Produce

Sample deliverables generated by this workflow — interim and final artifacts you can review and download.

Comparative Case Analysis

Side-by-side case analysis comparing promotion candidates against shared criteria, highlighting evidence strength, gaps, and calibration themes.

AiOS · HR13 · Step 2
Download Sample PDF

Justification Strength Report

Assessment of each promotion rationale for evidence quality, specificity, and risk of inconsistency before the panel finalizes decisions.

AiOS · HR13 · Step 3
Download Sample PDF

Promotion Panel Outcomes

Final calibrated decision set with promotion outcomes, rationale, and follow-up notes aligned to panel governance and criteria.

AiOS · HR13 · Step 5
Download Sample PDF

Before You Start

Depending on your team size, one person may cover multiple functions.

HR Business Partner

Facilitates the calibration process, verifies extracted evidence, and ensures consistent criteria application across cases.

Panel Chair

Reviews comparative analysis and calibration narratives, approves final promotion panel outcomes.

Data Handling

This workflow processes sensitive performance data (review ratings, manager commentary, development notes) and promotion candidacy details. Do not paste these inputs into public or unapproved GenAI tools.

Verification

GenAI may fabricate evidence, misattribute performance achievements across candidates, or inflate justification strength. Verify every extracted citation against the original review pack before presenting to the panel.

Execution Steps

Step types: Human · GenAI · Hybrid

Before you start

Confirm all promotion candidates have completed performance review packs
Verify the promotion criteria framework is current and approved
Check that panel governance guidelines are available and distributed

Data Handling: Do not include candidate personal contact details or compensation data in the prompt — use candidate identifiers only.

Prompt

Extract criterion-linked promotion evidence per candidate

CONTEXT
You will be provided with the following source documents:
1. Performance Review Pack
2. Promotion Criteria Framework
3. Panel Governance Guidelines

TASK
For each candidate, extract specific, verbatim quotes or concrete facts from their performance review pack that relate to each promotion criterion. Produce a Promotion Evidence Matrix mapping every candidate to every criterion.

OUTPUT FORMAT
Use a markdown table with the following columns:
- **Candidate** — candidate identifier
- **Criterion** — the promotion criterion being assessed
- **Evidence** — verbatim quote or specific fact from the review pack
- **Source** — which section of the review pack the evidence comes from (self-assessment, manager narrative, peer feedback, goals summary)
- **Strength** — [Strong / Partial / No Evidence]

Include one row per candidate-criterion pair. If no evidence exists for a criterion, enter "No evidence found" in the Evidence column and "No Evidence" in the Strength column.

CONSTRAINTS
Do not infer or assume achievements not explicitly stated in the review pack. Do not paraphrase — use verbatim quotes where possible. Do not include personally identifiable contact details or compensation data in the output.
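The completeness rule above (one row per candidate-criterion pair) can be checked mechanically once the matrix is parsed. A minimal Python sketch, assuming the table has been read into a list of dicts; all field and function names are hypothetical:

```python
from itertools import product

def missing_pairs(matrix_rows, candidates, criteria):
    """Return every (candidate, criterion) pair that has no row in the matrix."""
    covered = {(row["candidate"], row["criterion"]) for row in matrix_rows}
    return [pair for pair in product(candidates, criteria) if pair not in covered]

# Example: candidate C2 has no row for the "Impact" criterion.
rows = [
    {"candidate": "C1", "criterion": "Scope", "strength": "Strong"},
    {"candidate": "C1", "criterion": "Impact", "strength": "No Evidence"},
    {"candidate": "C2", "criterion": "Scope", "strength": "Partial"},
]
print(missing_pairs(rows, ["C1", "C2"], ["Scope", "Impact"]))
# -> [('C2', 'Impact')]
```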

Outputs

Promotion Evidence Matrix
AI-drafted · you verify · passed to next step

Verification: Verify that every evidence citation maps to an actual passage in the candidate’s review pack — GenAI may fabricate quotes or conflate candidates.
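That citation check can be partly automated: a quote that does not appear verbatim in the candidate's own review pack is either fabricated, paraphrased, or misattributed. A hedged sketch, where the row fields and the `review_packs` mapping are assumptions about how you store the data:

```python
def unverified_quotes(matrix_rows, review_packs):
    """Flag rows whose evidence quote is absent from that candidate's pack text.

    review_packs maps candidate identifier -> full review pack text.
    """
    flagged = []
    for row in matrix_rows:
        if row["evidence"] == "No evidence found":
            continue  # nothing to verify for empty cells
        pack_text = review_packs.get(row["candidate"], "")
        if row["evidence"] not in pack_text:
            flagged.append((row["candidate"], row["criterion"]))
    return flagged

packs = {"C1": "Led the platform migration ahead of schedule.", "C2": "Mentored two juniors."}
rows = [
    {"candidate": "C1", "criterion": "Delivery", "evidence": "Led the platform migration"},
    {"candidate": "C2", "criterion": "Delivery", "evidence": "Led the platform migration"},  # misattributed
]
print(unverified_quotes(rows, packs))
# -> [('C2', 'Delivery')]
```

A substring match only catches exact-quote failures; paraphrased evidence still needs a human read against the pack.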

Inputs

Promotion Evidence Matrix · from prev step

Prompt

Prompt available with library access · Get Access →

Outputs

Verified Evidence Matrix
AI-drafted · you verify · passed to next step
Confirm every Strong-rated entry has a traceable verbatim quote
Verify no evidence is attributed to the wrong candidate

Before you start

Confirm the Verified Evidence Matrix reflects all corrections from the verification step

Inputs

Verified Evidence Matrix · from prev step

Prompt

Compare promotion candidates consistently across criteria

CONTEXT
You will be provided with a Verified Evidence Matrix mapping multiple promotion candidates to the same set of promotion criteria, each with strength ratings.

TASK
Produce a Comparative Case Analysis that places all candidates side by side for each promotion criterion. For each criterion, summarize the relative strength of evidence across candidates so the panel can see where cases diverge.

OUTPUT FORMAT
Structure the output as follows:

For each criterion, use a section header and a markdown table:

### [Criterion Name]
| Candidate | Strength | Key Evidence Summary |
|---|---|---|
| [Candidate A] | [Strong/Partial/No Evidence] | One-sentence summary of evidence |

After all criteria sections, include a **Summary Heat Map** — a single table with candidates as rows and criteria as columns, each cell showing [S / P / N] for Strong, Partial, or No Evidence.

CONSTRAINTS
Do not rank or recommend candidates — this step is purely comparative. Do not introduce evidence not present in the Verified Evidence Matrix. Do not editorialize about candidate potential.
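The Summary Heat Map described above is a straightforward pivot of the evidence matrix. A sketch of how it could be built, reusing the same hypothetical dict representation:

```python
STRENGTH_CODE = {"Strong": "S", "Partial": "P", "No Evidence": "N"}

def heat_map(matrix_rows, candidates, criteria):
    """Pivot the matrix: candidates as rows, criteria as columns, cells S/P/N."""
    lookup = {(r["candidate"], r["criterion"]): STRENGTH_CODE[r["strength"]]
              for r in matrix_rows}
    return {cand: [lookup.get((cand, crit), "N") for crit in criteria]
            for cand in candidates}

rows = [
    {"candidate": "C1", "criterion": "Scope", "strength": "Strong"},
    {"candidate": "C1", "criterion": "Impact", "strength": "Partial"},
    {"candidate": "C2", "criterion": "Scope", "strength": "No Evidence"},
    {"candidate": "C2", "criterion": "Impact", "strength": "Strong"},
]
print(heat_map(rows, ["C1", "C2"], ["Scope", "Impact"]))
# -> {'C1': ['S', 'P'], 'C2': ['N', 'S']}
```

Missing candidate-criterion pairs default to "N", which mirrors the "No evidence found" convention from the extraction step.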

Outputs

Comparative Case Analysis · download
AI-generated

Verification: Verify the heat map ratings match the Verified Evidence Matrix — GenAI may inadvertently upgrade Partial ratings to Strong in the summary.
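The silent-upgrade failure mode called out in this verification note can be caught by diffing the heat map against the source matrix cell by cell. A sketch, assuming both are represented as candidate-to-criterion rating dicts (a hypothetical shape):

```python
def rating_drift(source, summary):
    """List every cell where the AI-written summary disagrees with the verified matrix."""
    return [
        (cand, crit, source[cand][crit], summary.get(cand, {}).get(crit))
        for cand in source
        for crit in source[cand]
        if summary.get(cand, {}).get(crit) != source[cand][crit]
    ]

verified = {"C1": {"Scope": "Partial"}}
analysis = {"C1": {"Scope": "Strong"}}  # silently upgraded by the summary
print(rating_drift(verified, analysis))
# -> [('C1', 'Scope', 'Partial', 'Strong')]
```

An empty result means every summary rating traces back to the Verified Evidence Matrix unchanged.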

Inputs

Verified Evidence Matrix · from prev step
Comparative Case Analysis · from prev step

Prompt

Prompt available with library access · Get Access →

Inputs

Comparative Case Analysis · from prev step
Justification Strength Report · from prev step

Prompt

Prompt available with library access · Get Access →

Outputs

Draft Calibration Narratives
AI-generated · passed to next step

Verification: Verify that recommendation categories are consistent with the justification strength — GenAI may recommend Promote for candidates flagged as below-threshold.
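This consistency check is also scriptable: a Promote recommendation should never co-occur with a below-threshold justification. A sketch, assuming the strength report yields each candidate's fraction of criteria rated Strong (the names and the 0.5 threshold are illustrative, not part of the framework):

```python
def inconsistent_recommendations(recommendations, strong_fraction, threshold=0.5):
    """Flag candidates recommended for promotion despite sub-threshold evidence."""
    return [
        cand for cand, rec in recommendations.items()
        if rec == "Promote" and strong_fraction.get(cand, 0.0) < threshold
    ]

recs = {"C1": "Promote", "C2": "Promote", "C3": "Hold"}
fractions = {"C1": 0.8, "C2": 0.25, "C3": 0.1}
print(inconsistent_recommendations(recs, fractions))
# -> ['C2']
```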

Inputs

Draft Calibration Narratives · from prev step
Justification Strength Report · from prev step
Comparative Case Analysis · from prev step

Prompt

Prompt available with library access · Get Access →

Outputs

Promotion Panel Outcomes · download
AI-drafted · you verify
Confirm every candidate has exactly one final decision category
Verify each decision rationale cites specific evidence from the review
Check that all panel feedback has been incorporated and documented
Confirm the Decisions Summary Table matches the individual candidate sections
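The structural items in the checklist above lend themselves to a final automated pass before sign-off. A partial sketch covering the one-decision-per-candidate and summary-table checks (the data shapes are assumptions; the evidence-citation and panel-feedback checks remain manual):

```python
def outcome_checks(candidate_sections, summary_table):
    """Return human-readable errors for the structural panel-outcome checks.

    candidate_sections maps candidate -> list of decisions in their section;
    summary_table maps candidate -> the decision shown in the summary table.
    """
    errors = []
    for cand, decisions in candidate_sections.items():
        if len(decisions) != 1:
            errors.append(f"{cand}: expected exactly one decision, found {len(decisions)}")
        elif summary_table.get(cand) != decisions[0]:
            errors.append(f"{cand}: summary table disagrees with the candidate section")
    return errors

sections = {"C1": ["Promote"], "C2": ["Promote", "Hold"]}
summary = {"C1": "Hold", "C2": "Promote"}
print(outcome_checks(sections, summary))
# -> ['C1: summary table disagrees with the candidate section',
#     'C2: expected exactly one decision, found 2']
```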

Reference

Guardrails

  • Evidence-Linked Decisions Only: Every promotion recommendation must cite specific evidence from the Verified Evidence Matrix — no decisions based on reputation or tenure alone.
  • Consistent Criteria Application: Apply the same promotion criteria to every candidate in the cycle — do not adjust thresholds or weighting between cases.
  • Separate Extraction From Recommendation: Complete evidence extraction and verification before generating comparative analysis or narratives to prevent confirmation bias.

Pitfalls

  • Pasting unredacted performance ratings or sensitive manager commentary into a public or unapproved GenAI tool.
  • Accepting AI-extracted evidence without verifying quotes against the original performance review pack.
  • Allowing the AI to infer achievements or competencies not explicitly documented in the review materials.
  • Skipping the justification strength check and presenting under-evidenced cases as ready for promotion.

Definition of Done

  • Every candidate calibration narrative cites specific verbatim evidence from the verified review packs.
  • The Comparative Case Analysis covers every candidate against every promotion criterion in a single consistent view.
  • The Justification Strength Report flags all candidates with fewer than half their criteria rated Strong.
  • The Promotion Panel Outcomes document assigns a final decision to every candidate with a panel-approved rationale.

UNLOCK THE FULL AiOS

Get full access to all prompts, execution steps, and downloadable examples — for this playbook and the rest of our GenAI capability framework — AGASI AiOS.

We'll send a magic link — no password needed.

AGASI AiOS · HR13 v1.0 · Apr 8, 2026