Results and calibration

This guide is for owners and leaders turning review submissions into final decisions.

Your goal here is consistency and trust: equal standards, clear rationale, and outcomes people can understand without detective work.

Access and visibility

Open results from Performance > Reviews > Manage / Results using View Results on the run you want.

You can also open Run Overview and choose Open full results.

Visibility rules:

  • owners can access the full run
  • designated leaders see only their reporting-line slice
  • reviewers and review subjects do not use this operating view

This lets you broaden reporting access without giving everyone full cycle-control permissions.

Use a four-pass decision workflow

Pass 1: Check run health in Overview

Start in Overview to confirm whether the run is decision-ready.

Look at:

  • completion coverage
  • timing/progress bottlenecks
  • stage-level lag patterns
  • overall key metrics and timeline signals

If coverage is thin, finalization quality drops no matter how good your calibration is: even the cleanest calibration will produce a weak outcome when too many responses are still missing.
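The coverage check can be expressed as a simple readiness gate. The 85% threshold here is an illustrative assumption, not a product default; pick a bar that fits your cycle.

```python
def run_is_decision_ready(expected_responses, submitted_responses, min_coverage=0.85):
    """Illustrative readiness gate: coverage = submitted / expected.

    Returns (is_ready, coverage). The min_coverage default is an
    assumption for this sketch, not a ClarityLoop setting.
    """
    if expected_responses == 0:
        return False, 0.0
    coverage = submitted_responses / expected_responses
    return coverage >= min_coverage, coverage
```

Running this gate per stage (not just run-wide) also surfaces the stage-level lag patterns mentioned above.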

Pass 2: Align scoring in Calibration

Use Calibration for scoring consistency when your cycle includes calibratable numeric inputs.

Calibration in ClarityLoop lets you:

  • choose which scoring questions are included in the aggregate score
  • set a score threshold to flag lower-scoring cases
  • optionally flag based on a Yes/No decision question (for example, if answer is “No”)
  • view flagged vs missing-data populations
  • inspect score distribution and drill into individual review packets

Questions are calibratable when they support comparable numeric interpretation, for example:

  • rating questions
  • yes/no questions
  • single-choice questions with numeric option values

This is where cross-team interpretation gets aligned before you finalize. It is especially useful when several managers or panelists are applying the same scale in slightly different ways.
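The calibration mechanics above can be sketched in two steps: map calibratable answers to comparable numbers, then flag cases against the threshold and the optional Yes/No decision question. The function names, the mean-based aggregation, and the answer encodings are assumptions for the sketch, not ClarityLoop's actual scoring implementation.

```python
def numeric_value(question_type, answer, option_values=None):
    """Map a calibratable answer to a comparable number (illustrative encodings)."""
    if question_type == "rating":
        return float(answer)
    if question_type == "yes_no":
        return 1.0 if answer == "Yes" else 0.0
    if question_type == "single_choice":
        return float(option_values[answer])  # choices with numeric option values
    return None  # not calibratable

def flag_case(included_scores, threshold, decision_answer=None):
    """Classify a case as 'missing-data', 'flagged', or 'ok'.

    Flags when the aggregate (here: a simple mean, an assumption) falls
    below the threshold, or when the optional decision question is 'No'.
    """
    if not included_scores:
        return "missing-data"
    aggregate = sum(included_scores) / len(included_scores)
    if aggregate < threshold or decision_answer == "No":
        return "flagged"
    return "ok"
```

Separating "flagged" from "missing-data" matters: a low score is a calibration conversation, while missing data is a completion problem that belongs back in Pass 1.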

Pass 3: Review person-level evidence in Reviews

Use the Reviews tab for case-by-case decision confidence.

You can filter by:

  • reporting line
  • department
  • subject name
  • whether a person has submitted responses

Then inspect full stage submissions with attached context.

If response quality is weak, use quality controls directly:

  • request changes on submitted responses
  • approve reviewer reopen requests when legitimate corrections are needed

Pass 4: Finalize and communicate outcomes

Owners can finalize:

  • one person at a time with a tailored summary
  • multiple people in bulk when one rationale applies to the selected group

Finalization is a meaningful step, not just a label change.

When a decision is finalized:

  • participants are notified
  • the decision summary becomes visible
  • review answers become visible to participants in the review flow
  • questions with restricted visibility remain restricted

So finalize only when the wording is ready and the rationale is defensible.
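The visibility effects of finalization can be sketched as a filter: answers become visible to participants, except where question-level restrictions apply. The function and data shapes here are illustrative assumptions, not the product's API.

```python
def visible_answers_after_finalize(answers, restricted_question_ids):
    """After finalization, answers become participant-visible in the
    review flow, except questions with restricted visibility.

    `answers` maps question id -> submitted answer (illustrative shape).
    """
    return {
        qid: answer
        for qid, answer in answers.items()
        if qid not in restricted_question_ids
    }
```

This is why the checklist above includes verifying restricted-visibility questions before finalizing: what participants see is the unrestricted subset, and the summary has to stand on its own against that subset.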

A practical operating pattern by persona

Owners and HR/People teams

Use results to:

  • monitor run health during the live cycle
  • drive calibration discussions with comparable evidence
  • resolve weak or incomplete responses
  • finalize with a clear summary record

Leaders

Use results to:

  • inspect patterns in your reporting line
  • compare outcomes with shared standards
  • identify where follow-up coaching or talent discussion is needed

Managers

Managers are usually contributors to review content first, and consumers of finalized outcomes second. If you are also an owner for the cycle, use the results view carefully and consistently across people, not only for the loudest cases.

Export and governance workflows

Use Export All Insights or Export Filtered Insights when you need offline analysis, HR reporting, or audit-friendly record keeping.

A practical governance cadence:

  1. monitor run health while the cycle is active
  2. calibrate before final decision meetings
  3. finalize with concise, evidence-backed summaries
  4. export and compare trends across cycles

This keeps Reviews useful as an operating system for performance decisions, not just an end-of-cycle task.
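For the export step, a minimal sketch of audit-friendly record keeping is writing the filtered insight rows to CSV for offline analysis. This is not ClarityLoop's actual export format; the field names and CSV shape are assumptions for illustration.

```python
import csv
import io

def export_insights_csv(rows, fieldnames):
    """Serialize insight rows to CSV text for offline analysis or audit.

    `rows` is a list of dicts keyed by `fieldnames` (illustrative shape,
    not the product's export schema).
    """
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue()
```

Exporting after each cycle with a stable set of columns is what makes step 4 of the cadence (comparing trends across cycles) practical.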

Common mistakes at the results stage

  • calibrating before enough responses are in
  • treating low completion as a minor detail
  • finalizing without checking whether restricted-visibility questions change the picture
  • writing vague final summaries that explain nothing later
  • using bulk finalization when the underlying rationale is not actually shared

FAQs

Can leaders see the entire company in results?

No. Leaders only see the slice tied to their reporting line unless they are also owners.

Who can finalize a review outcome?

Owners finalize. Leaders can inspect results, but finalization is an owner workflow.

Do all questions contribute to calibration?

No. Calibration is designed for comparable numeric-style questions, not every question in the cycle.

What happens after finalization?

Participants are notified, the decision summary is visible, and submitted review answers become visible within the review flow except where question-level visibility rules keep them restricted.

Next steps: