I am biased, you are biased, all humans are biased. Not buying it yet? Consider the research of Daniel Kahneman, a psychologist who won the Nobel Prize in economics. Kahneman demonstrated one simple truth: the vast majority of human decisions are based on biases, beliefs, and intuition, not facts or logic.
This is part of why, even with the best intentions, people have the tendency to bring bias into the performance review process. Bias is an error in judgment that happens when a person allows their conscious or unconscious prejudice to affect the evaluation of another person. When it comes to performance reviews, this matters greatly.
Biases can lead to inflation or deflation of employee ratings, which can have serious implications in high-stakes situations like hiring and performance reviews.
So, what can companies do to ensure their performance review processes are as bias-free as possible? Incorporate bias blockers into each step of the process. Here we cover 10 of the most common biases that affect performance reviews, and how you can prevent them from skewing performance evaluations.
1. Recency bias
When reviewing an employee’s performance, managers tend to focus on the most recent events instead of the entire review period.
You can also call this the “What have you done for me lately?” bias. If someone recently rocked a presentation or flubbed a deal, that recent performance is going to loom larger in a manager’s mind. Why? Because it’s easier to remember things that happened recently.
So how do we help others and ourselves overcome this bias? It’s important to document performance at different points throughout the review period. Did someone just complete a 3-month project? Great, send their peers a request for feedback so you can get some data on how well they did. Did someone just complete internal training? Awesome, request feedback from the instructor about their participation. This way, at the end of the year, you have data points from across the entire review period.
2. Primacy Bias
When reviewing employee performance, managers focus on information learned early on in the relationship, like first impressions.
This is why first impressions count so much. As Dr. Heidi Grant Halvorson of Columbia Business School explains, if I’m a jerk to you when we first meet, and I buy you a coffee the next day to make up for it, you are going to see that nice gesture as some sort of manipulative tactic and think, “This jerk thinks he can buy me off with a coffee.” However, if I make a great first impression, and buy you a coffee the next day, then you’re likely to see it as an act of goodwill and think to yourself, “Wow, that Kevin really is a nice guy.”
The fix here is very similar to preventing recency bias. By putting together a dossier of performance snapshots that include feedback from multiple points in time, you can dampen managers’ tendency to weight their first impressions more heavily.
3. Halo/Horns Effect Bias
Allowing one good or bad trait to overshadow others, i.e. letting an employee’s congenial sense of humor override their poor communication skills.
We all have our own pet peeves and turn-ons. Sometimes those quirks can overshadow our ability to assess people overall. This is why attractive people are much more likely to be rated as trustworthy. However, if/when they fail to live up to those higher expectations, attractive people also suffer a penalty for not living up to the presumptions of others.
Make sure to evaluate performance on multiple dimensions instead of leaving it open to interpretation. Are you rating individual achievement, but failing to look at the way people contribute to the success of others? Does this person happen to have a particular set of highly sought after technical skills, but they don’t finish their work on time? Make sure to assess at least 2-3 different aspects of performance to get a holistic view, so that one awesome or awful trait or skill doesn’t overshadow everything else.
4. Centrality/Central Tendency Bias
The tendency to rate most items in the middle of a rating scale.
Have you ever had a manager that gave everyone a 3 out of 5, just because they were reluctant to be extreme? Sometimes people are wary to give very high or very low scores just because they see themselves as middle-of-the-road types. While moderation is great in most things, for high-stakes situations like performance reviews, we usually need people to take a stand.
It is important to take a flexible approach to the way scales are designed. Centrality bias might not be an issue for your managers, but if it is, one fix is as simple as eliminating the neutral middle option so that evaluators have to make a choice one way or the other.
5. Leniency Bias
Leniency bias occurs when managers give favorable ratings even though they have employees with notable room for improvement.
This is where everyone gets a 4 or 5 out of 5, even though not everyone deserves it. And this makes some sense because everyone thinks their team members are above average. After all, that’s why we keep them around, right? But in reality, some people outperform others.
Instead of making above average the top possible rating, try using a rating scale that reflects the way people actually talk about and think about their team members. If you want to create more spread in order to identify your top people, build that spread into the rating labels.
For example, you could have a scale where the top rating is above average.
However, if you want to give managers more opportunities to identify stellar performers, you could create a scale with above average as the middle rating and top performer as the top rating.
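The two scale designs described above can be sketched out side by side. This is a minimal illustration with made-up labels, not the actual scales used by any particular platform:

```python
# Scale A: "above average" is the ceiling. Lenient raters cluster
# everyone at the top, so top performers are indistinguishable.
scale_a = [
    "needs improvement",
    "below average",
    "average",
    "good",
    "above average",
]

# Scale B: "above average" sits in the middle, leaving room above it
# so managers can single out genuinely exceptional performers.
scale_b = [
    "needs improvement",
    "average",
    "above average",
    "excellent",
    "top performer",
]

# Both are 5-point scales; only the labels (and thus the anchoring) differ.
assert len(scale_a) == len(scale_b) == 5
```

The point of scale B is that the labels themselves build in the spread: a rater can agree that most of the team is "above average" while still reserving the top two ratings for standout performers.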
6. Similar-to-Me Bias
The inclination to give a higher rating to people with similar interests, skills and backgrounds as the person doing the rating.
Simply put, we tend to like people that are like us. In addition to making performance reviews tricky, this can make your workplace feel less inclusive.
Require specificity in managers’ assessments. In three separate studies, Yale researchers found that when you first agree to the criteria used in an assessment and then make the evaluation, you are less likely to rely on stereotypes and your assessments are less biased.
7. Idiosyncratic Rater Bias
When managers evaluate skills they’re not good at, they rate others higher. Conversely, they rate others lower in things they’re great at.
In other words, managers weight their performance evaluations toward personal eccentricities.
In fact, one of the largest studies on feedback found that more than half of the variance associated with ratings had more to do with the quirks of the person giving the rating than the person being rated. Rater bias was the biggest predictor. It held more weight than actual performance, the performance dimension being rated, the rater’s perspective, and even measurement error.
It’s not easy for people to rate others on things like “lateral and strategic thinking” (whatever that means). But, as one researcher put it: “People might not be reliable raters of others, but they are reliable raters of their own intentions.” So consider rewriting some of your performance questions to be about the actual decisions and intentions of your team.
Here are some examples from the Culture Amp platform:
- I would always want this person on my team
- I would award this person the highest possible compensation increase and bonus
- I would hire this person again
- This person is ready for promotion today
- If this person resigned, I would do anything to retain him/her
8. Confirmation Bias
The tendency to search for or interpret new information in a way that confirms a person’s preexisting beliefs.
This is a lot like primacy bias, but it tends to go much deeper. Have you ever had a question about something and gone to the internet to search for the answer? If you’re like most people, your search terms are probably phrased to find things that confirm your existing beliefs.
For instance, if you love beans and want to prevent cancer, you might Google “beans help fight cancer.” But, on the other hand, if you can’t stand beans, you might search for “beans cause cancer.” Sure enough, you will find millions of results for both searches.
Similarly, if you initially think someone might be a bad apple, you are much more likely to seek out (and find) information that confirms your initial suspicion.
Think like a scientist. When researchers ask questions, they try to form their hypothesis in ways that seeks to disconfirm rather than confirm their initial beliefs. Every time you have an impression about someone, go out and seek evidence that they are the opposite or entirely different from what you suspect. When collecting feedback from others, pay close attention to the feedback that goes against your beliefs.
9. Gender Bias
When giving feedback, individuals tend to focus more on the personality and attitudes of women, and more on the behaviors and accomplishments of men. This exacerbates gender bias, limits growth and promotion opportunities, and widens the pay gap.
Culture Amp’s own research by our Senior Data Scientist, Priya Sundararajan, reviewed 25,000 peer feedback statements across a performance cycle of nearly 1,500 employees. She found that, “Peer feedback provided by both male and female reviewers tends to focus equally on work- and personality-phrasing for male employees (for example, ‘Nick should gain more technical expertise in nonparametric ML models’) where female employees are nearly 1.4x more likely to receive personality phrases from male reviewers (such as ‘Sue is a great team player and very easy to work with’) and less likely to receive work-related phrases”.
Unstructured feedback allows bias to creep in. Without clear criteria, people will redefine the criteria for success in their own image. The big takeaway, as Stanford researchers have put it, is that open boxes on feedback forms make feedback open to bias. That’s why it helps to take a “mad libs” approach to feedback: help raters by giving them a format and then allowing them to fill in the blanks. Nudge managers into specifically talking about situations, behaviors, and impacts rather than personality or style.
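A “mad libs” feedback prompt can be as simple as a template with blanks for situation, behavior, and impact. This is a hypothetical sketch of the idea, not an actual Culture Amp feature; the template and example values are made up:

```python
# A structured feedback template: the rater fills in the blanks
# rather than writing free-form text about personality or style.
TEMPLATE = "In {situation}, {name} {behavior}, which {impact}."

feedback = TEMPLATE.format(
    name="Sue",
    situation="the Q3 launch retrospective",
    behavior="documented every incident with clear reproduction steps",
    impact="cut the team's triage time roughly in half",
)
print(feedback)
```

Because every blank asks for a concrete situation, behavior, or impact, the format itself steers raters away from personality-flavored phrases like “great team player” toward observable, work-related evidence.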
10. Law of Small Numbers Bias
The incorrect belief that a small sample closely shares the properties of the underlying population.
For instance, you might have a stellar team full of top performers, with one person that is doing the work of four others. Naturally, you rate that person as higher than the rest and the others a bit lower. Unfortunately, however, it turns out that even the lowest performer on your team is among the best in the whole company. So, when it comes time to look at performance company-wide, it appears as if your team is about average.
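The statistical effect behind this bias is easy to demonstrate: the mean of a small sample routinely drifts far from the mean of the population it came from. Here is a minimal Python sketch with simulated performance scores (the numbers are invented for illustration):

```python
import random

random.seed(42)

# Simulated company-wide performance scores: mean 3.0, spread 0.8.
population = [random.gauss(3.0, 0.8) for _ in range(10_000)]
pop_mean = sum(population) / len(population)

# One small "team" of 5 people drawn from that same population.
team = random.sample(population, 5)
team_mean = sum(team) / len(team)

# With 10,000 people, the population mean is very close to 3.0,
# but a team of 5 can easily average well above or below it -
# so ranking people only within a small team can mislead.
print(f"population mean: {pop_mean:.2f}")
print(f"team of 5 mean:  {team_mean:.2f}")
```

Run this a few times with different seeds and the five-person team mean bounces around noticeably while the population mean barely moves, which is exactly why a single team is a poor yardstick for company-wide standards.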
This is why it is important to do talent calibrations, where all of the reviews and ratings are looked at holistically to make sure that when you rate someone as above average, your definition of above average matches everyone else’s. This ensures that everyone is speaking the same language.
What can you do to recognize your own biases?
Unfortunately, we’re not that good at knowing our own biases. In fact, research has suggested that the more help you need in this area, the harder it is to recognize that you need help. People underestimate their own bias and the most biased among us underestimate it the most.
So, one step is to check yourself through some unbiased means. One method researchers at the University of Washington, University of Virginia, Harvard University, and Yale University have used is the Implicit Association Test –it’s freely available to everyone. Fair warning though: you might not be comfortable or agree with the results, but that’s probably just your bias talking.
Next, give yourself permission to be human and recognize the limits of our own understanding. Just making yourself aware of your biases will not, in and of itself, enable you to overcome your biases. This doesn’t mean that we ignore our biases or give into them. Instead, we need to set up systems, processes, procedures, and even technology, that enable us to make better decisions. Ask for help. Get feedback from others. Set firm criteria and be consistent. Most of all, keep an open mind.
Register for free to Culture Amp’s webinar series, hosted by Culture Amp’s Senior People Scientists.