Understanding participant experiences over the course of a 360 degree feedback process is key to effective 360 programs. In 20 years of honing best practices, we’ve developed a nuanced understanding of how participant experiences affect 360 data, and vice versa. In this series, you will benefit from what we’ve learned as we take you on a 360 degree tour of 360 degree feedback. We’ll run through the experience of the 360 from the perspective of each participant: the project sponsor, the HR manager, the feedback recipient, the direct report, the peer, and the boss. As we pan the view to take in each perspective, you’ll get a sense of what’s important to each player. Balancing these priorities is both the challenge and the key to success in implementing an effective, transformative 360 degree feedback program.
On the next stop on our tour, we’ll get to know the peer rater. The feedback recipient–Diego–is getting feedback from his boss, his peers, and his direct reports. In this way, the 360 degree feedback survey truly creates a full 360 degree view of Diego’s work behaviors: from above, below, and beside him in the workplace hierarchy. Even on a complicated organizational chart, it’s fairly obvious who will be boss and direct report raters. But who are the peer raters?
A peer rater is anyone whose feedback is worth including, and who isn’t a boss or direct report. A wide variety of workplace relationships fall into the peer rater category. Let’s take Diego as an example. His peer rater might be in the same workgroup or division, or not; might be at a different level of the organization, or not; might be an internal customer, or provide Diego with internal customer service. As you can see, peer raters can provide many perspectives from many different angles. It’s worth considering how peer raters are selected: check out this post on peer rater selection.
In our fictional company, Diego will get to choose his own peer raters. The HR manager in charge of this process wisely hosts a training in advance of the 360 that includes helpful information about how to choose raters (and reviews their choices, to ensure that each person’s results will provide useful data).
One of the peer raters Diego selects is his co-worker Vivian. They are in different divisions, but because of their respective roles they collaborate on a regular basis. Let’s take a closer look at Vivian’s experience, starting at the beginning of the process.
If Vivian were also participating in the 360 project as a feedback recipient, she would hear about it the same way Diego did. However, peer raters often participate only by giving feedback. In this case, Vivian might only receive an email from Diego (possibly through a third party) asking her to provide him with feedback in a survey. Without further instruction, she might do so with the same care she would take on an internet personality quiz. For best results, a company should consider rater training. This can be done in person, via webinar, or through an online training video. Answering rater questions is an important component of effective training.
One of Vivian’s primary concerns is whether her feedback will be connected with her name. The right answer is no: peer rater feedback in a 360 should absolutely be confidential. Anonymity is preserved by requiring a minimum number of raters per category. If that minimum is not met, the data is still included in the overall score, but does not appear in the “peer rater” data column. Vivian feels reassured when the methods of preserving anonymity are described repeatedly, reinforced by the interface of the survey tool and by the fact that the survey is conducted by a third party (that’s us!).
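For readers who want to see the anonymity rule made concrete, here is a minimal sketch of how a threshold check like this could work. The category names, scores, threshold value, and function are illustrative assumptions for this post, not 3D Group’s actual implementation.

```python
# Hypothetical sketch of the anonymity-threshold rule: every response
# counts toward the overall score, but a rater category's own column is
# suppressed unless it has enough respondents to protect anonymity.

MIN_RATERS = 3  # assumed minimum raters per category

def build_report(ratings_by_category):
    """ratings_by_category: dict mapping category name -> list of scores."""
    all_scores = []
    report_columns = {}
    for category, scores in ratings_by_category.items():
        all_scores.extend(scores)  # always counted in the overall score
        if len(scores) >= MIN_RATERS:
            report_columns[category] = sum(scores) / len(scores)
        # Below the threshold, the category column is hidden so that
        # no individual rater's feedback can be identified.
    overall = sum(all_scores) / len(all_scores)
    return overall, report_columns

overall, columns = build_report({
    "Peers": [4, 5, 3],        # meets the minimum: column shown
    "Direct Reports": [4, 2],  # below minimum: folded in, column hidden
})
```

Under these assumptions, the two direct report scores still move the overall average, but Diego’s report would show no separate “Direct Reports” column.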
What if Vivian refuses to participate? She might be busy and a poor time manager. She might not understand the point of the 360 or the process itself, or she might be worried about how it will affect her. She might have a reasonable conflict or fear that prevents her from participating. While it can be acceptable for Vivian to decline, it’s better for everyone if she participates. This is where her managers and HR come in: they can communicate a clear sense of purpose, give adequate instructions, send multiple invitations, and even provide appropriate incentives for filling out the survey.
No matter the incentive, it’s hard to get a 100% response rate. To compensate for the effect on the data, Diego’s company wisely provides him with a list that delineates raters who actually responded from people who were invited to do so. The more information, the better. If certain people are missing from his respondent list, he can make educated guesses about why his averages are high or low.
Vivian will appear in the respondent list, because she decides to participate. After she receives the invitation, she chooses a day and time on which she can thoughtfully consider Diego’s work. She’ll be asked to rate him on items like “leads by example”, “follows through on commitments”, and “shows respect for others”. As she does so, she will likely find that she can’t speak to some of them. Vivian can accurately rate Diego’s competencies regarding teamwork, communication, and customer service orientation–but she does not have enough information to feel confident in rating him on how well he understands the business or delegates. This is the most frequent experience for peer raters: the feeling that they don’t have enough information. As a result, Vivian might feel frustrated and resistant to the process. However, due to her rater training, Vivian knows that she can–and should–mark those survey items “not applicable.”
Taking the survey also makes Vivian reflect on her own behaviors: how she would rate herself, or how Diego might rate her. She finds herself comparing herself to Diego, which brings up feelings of inadequacy in some areas, and overblown pride in others.
The relationship the peer rater has with the feedback recipient determines the tenor of the feedback. If Vivian and Diego have a positive, collegial relationship, the ratings will be as high as Vivian can justify. If they have a negative, contentious relationship, the ratings will be as low as she can justify.
There is a huge variety of attitudes the peer rater can have, but they can be loosely classified along a spectrum from competitive to collaborative. Some people seem to be playing a constant game of king of the hill, seeing colleagues as either threats or tools. Others seem to take the attitude that all boats rise together, seeing their colleagues as team members. Of course, these are extreme views. Most people’s behavior falls somewhere in between. Regardless, the rater’s attitude makes all the difference in how they bend their ratings. Those who fall on the competitive side will be more likely to provide low ratings, and those who fall on the collaborative side will be more likely to provide high ratings.
In fact, because of this dynamic, the lowest average ratings tend to come from peer raters. This is important to keep in mind, mostly because if Diego expects his peer ratings to be low, it helps mitigate any surprise or disappointment he might feel. Rather than dwelling on the fact that his peer ratings are low overall, Diego should look at which ones are the lowest and which are the highest.
Back to Vivian: she clicks “Finish” on the survey. The 360 process might end for her at that moment. Or, she might be moved to start a discussion with Diego about insights she had into his behavior and their working relationship. Diego might also approach her as part of his follow-up, to thank her for her feedback and discuss how he can change in response to the 360. One of the greatest benefits of 360 degree feedback is that it creates so many opportunities for useful workplace communication.
In sum, any peer rater’s participation is best promoted through clear communication and adequate training. She is concerned with anonymity, and her reaction to the survey process might range from resistant to eager, as it provokes intellectual and emotional reflection. Through all of this, HR can increase the likelihood that her feedback is useful by ensuring that she understands the 360: not just the process, but the purpose.
Notes:

Of course, workplace structures are as diverse as organizations themselves. Hierarchies can look incredibly different in different contexts. Any 360 degree feedback survey design worth its salt takes organizational structure into account.

Some businesses might also want to include a rater category for customers. In fact, in the retail industry there is proven benefit to using customer raters.

3D Group provides online rater trainings to clients, as well as in-person trainings; online training is a great way to keep costs low while still getting excellent results.

At 3D Group, we use a default of 3 raters per category, a number we’ve validated as an effective minimum.

This is another thing that is best communicated in a rater training.

Yes, of course competition can be healthy and not in opposition to collaboration. Here, though, we’re talking about competition as an underlying disposition rather than a behavior.