Running a performance review cycle is one of the most consequential processes an HR team manages. Done well, it gives every employee a clear picture of how they are performing, where they should grow, and what advancement looks like. Done poorly, it wastes weeks of manager time, generates ratings no one trusts, and leaves employees more confused and disengaged than before.
This guide walks you through every stage — from the initial scoping decisions to sharing results and closing the loop. Whether you are running your organization's first formal review cycle or overhauling a process that has grown unwieldy, the same seven steps apply.
Step 1: Define goals and review format
Before you configure anything in a system or send a single calendar invite, you need to answer the foundational questions that determine the shape of your entire cycle:
- What is the review cycle's primary purpose? Informing compensation, driving development, building calibration data, or some combination? The answer affects everything from question design to how results are shared.
- What frequency will you run? Annual reviews are thorough but suffer from recency bias. Quarterly reviews reduce bias but require more manager time. Most mature organizations run at least two cycles per year.
- What review types will you include? A typical modern review cycle includes self-assessments, manager reviews, and optionally 360 feedback from peers and direct reports. Adding all three significantly increases data quality but also increases the time burden on employees.
- Who is in scope? Will all employees participate, or only those past their 90-day ramp period? Will managers be reviewed by their direct reports?
Write these decisions down and share them with your HR leadership team before moving forward. Ambiguity here creates confusion at every subsequent stage.
Step 2: Set up your competency framework
A competency framework is the backbone of any review cycle that produces ratings people trust. Without it, "meets expectations" means different things to different managers — which makes calibration impossible and career conversations useless.
If your organization does not yet have a competency framework, now is the time to build one. See the guide How to Build a Competency Framework for a step-by-step process. If you have one, review it before launching the cycle:
- Do the behavioral indicators still reflect how the work is actually done?
- Are there new roles or functions that are not covered?
- Have any level boundaries shifted based on last cycle's calibration discussions?
Updating the framework before launch — rather than mid-cycle — prevents the confusion that comes from changing the rules while the game is being played.
Step 3: Configure the review timeline
A clear, published timeline is one of the highest-leverage things you can do for review quality. When people know exactly when each phase opens and closes, completion rates improve and last-minute rushes decrease.
A typical timeline for a 6-week cycle might look like:
- Week 1: Kick-off communications and self-assessment launch.
- Weeks 2–3: Self-assessment window (10–14 days gives enough time without letting it drag).
- Weeks 3–4: Manager review window (overlaps slightly with self-assessment close to allow managers to start early).
- Week 5: Calibration session(s).
- Week 6: Result sharing and 1:1 conversations with employees.
Build in buffer time — not all managers will meet every deadline, and chasing stragglers is an inevitable part of cycle management. Set intermediate reminder dates and decide in advance what happens if a manager misses the window (extension, escalation, or lock-out).
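If you publish the timeline programmatically (in a wiki page, calendar invites, or reminder emails), the week offsets above can be turned into concrete dates from a single kick-off date. This is only an illustrative sketch: the phase names and durations mirror the sample 6-week timeline, not fixed values.

```python
from datetime import date, timedelta

# Week offsets mirror the sample 6-week timeline above; the overlap between
# the self-assessment and manager review windows is encoded directly.
# All phase names and offsets are illustrative, not prescribed values.
PHASES = [
    ("Kick-off & self-assessment launch", 0, 1),  # week 1
    ("Self-assessment window", 1, 3),             # weeks 2-3
    ("Manager review window", 2, 4),              # weeks 3-4 (overlaps)
    ("Calibration session(s)", 4, 5),             # week 5
    ("Result sharing & 1:1s", 5, 6),              # week 6
]

def build_timeline(kickoff: date):
    """Return (phase, start, end) dates for a cycle starting on `kickoff`."""
    return [
        (name,
         kickoff + timedelta(weeks=start),
         kickoff + timedelta(weeks=end, days=-1))
        for name, start, end in PHASES
    ]

for name, start, end in build_timeline(date(2025, 3, 3)):
    print(f"{name}: {start:%b %d} to {end:%b %d}")
```

Regenerating the same dates for reminder emails and calendar invites from one kick-off date keeps every communication consistent, which matters once buffer days or extensions shift the plan.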
Step 4: Launch self-assessments
Self-assessments serve multiple purposes at once: they give employees structured time to reflect on their own performance, they surface information managers may lack direct visibility into, and they improve the quality of the review conversation by reducing the information asymmetry between manager and employee.
When launching self-assessments, communicate clearly:
- What the self-assessment covers (which competencies, which goals, which time period).
- Who will see the completed self-assessment (typically the direct manager, and HR for record-keeping).
- The deadline and what happens if they miss it.
- How the self-assessment will be used in the overall rating process.
Send a reminder 3–4 days before the self-assessment window closes. Completion rates for self-assessments are typically lower than for manager reviews, and early reminders close the gap.
Step 5: Conduct manager reviews
Manager reviews are the core of the cycle. This is where individual performance is documented, rated, and narrated — creating the official record that informs compensation, promotion decisions, and development planning.
Brief managers on quality expectations before the window opens. Specifically:
- Require specific examples. Ratings supported only by "I think they are doing well" cannot be calibrated. Every rating should be backed by at least one concrete behavioral example tied to the competency being assessed.
- Encourage review of notes, not just memory. Managers who have kept running notes on their direct reports throughout the year will write significantly better reviews than those relying on memory alone. Promote ongoing documentation as a year-round habit.
- Remind them calibration follows. Knowing that ratings will be reviewed with peers encourages managers to be honest rather than defaulting to leniency.
Step 6: Run a calibration session
Performance calibration is the step most organizations skip or underinvest in — and the one that has the most impact on whether the review cycle produces fair, trustworthy outcomes.
Structure the calibration session as follows:
- Prepare a calibration view. Before the session, compile all ratings into a single view organized by level, team, and rating distribution. This makes outliers visible at a glance.
- Start with the extremes. Focus discussion on employees rated at the top and bottom of the distribution — these are the most consequential and the most likely to surface inconsistencies.
- Require evidence, not advocacy. Managers should present the behavioral evidence for their ratings, not campaign for outcomes. "Here is what I observed and what the framework criteria say" — not "I really believe in this person."
- Document the outcomes. Record the final calibrated ratings and any key discussion points. This creates an audit trail and gives managers context to share with employees during result conversations.
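The first two points can be sketched from a flat ratings export: group rating counts by level so distributions sit side by side, then pull out the extremes to discuss first. The employee names, teams, and 1-5 scale here are hypothetical placeholders.

```python
from collections import Counter, defaultdict

# Hypothetical flat export of pre-calibration ratings:
# (employee, team, level, overall rating on a 1-5 scale).
ratings = [
    ("alice", "platform", "L4", 5),
    ("bob",   "platform", "L4", 3),
    ("carol", "payments", "L4", 3),
    ("dan",   "payments", "L3", 1),
    ("erin",  "platform", "L3", 3),
]

def calibration_view(rows):
    """Group rating counts by level so distributions compare side by side."""
    by_level = defaultdict(Counter)
    for _, _, level, rating in rows:
        by_level[level][rating] += 1
    return dict(by_level)

def extremes(rows, low=1, high=5):
    """Surface the top- and bottom-rated employees to discuss first."""
    return [row for row in rows if row[3] <= low or row[3] >= high]
```

In practice the same view is often a spreadsheet pivot table or an HRIS report; the point is that the data is prepared before the session, so discussion time goes to evidence, not to assembling numbers.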
Step 7: Share results and set next-cycle goals
The review cycle is not complete until every employee has had a 1:1 conversation with their manager to discuss the results. This conversation is the most important moment of the entire cycle — it is where data becomes development.
Train managers to structure this conversation in three parts:
- Share and explain the ratings. Walk through each competency rating with specific supporting examples. The goal is for the employee to understand why they received the rating they did, not just what the number is.
- Identify 1–2 key development areas. Based on the ratings and any 360 feedback, agree on the most important areas to focus on in the next period. These should be specific and tied to the behavioral criteria in the framework.
- Update or create the Individual Development Plan. Close every review conversation with a documented Individual Development Plan that both parties have agreed on. This is the forward-looking commitment that makes the cycle more than a backward-looking report card.
After all result conversations are complete, conduct a brief retrospective on the cycle itself: what worked, what created the most friction, and what to change for next time. A performance review process that does not improve is one that will eventually stop being trusted.