Why the Quality Listening Program Should Not Be a Performance Review

By: Colin Taylor

Let's look at the numbers. In a customer service call center whose quality assurance program requires the evaluation of 4 calls per month, the average agent will handle approximately 1,600 calls in that month. The 4 calls evaluated therefore represent only a quarter of a single percentage point; put another way, we are evaluating and assessing only one out of every 400 calls! How representative was the second Tuesday of August? Treating that Tuesday in August of last year as representative of the past fifteen months likely doesn't make sense. Neither does basing an opinion of an agent's performance on every 400th call. No matter how we try to examine these individual call assessments, the sample size is simply too small to have meaning.

This is the fundamental problem with attempting to use quality assurance scores as mini-performance reviews. Using your quality reviews as a performance assessment tool misses the primary objective of quality management. Quality assurance is about assuring the quality of the service being delivered. To whom is this assurance being made? The answer is senior management. The practice of assessing quality allows center management to gauge the performance of the center and of the individual agents within it. Knowing the relative performance of the center, and comparing and contrasting it with previous months, is of significant value to center and senior managers. But perhaps the ability to identify how individual agents are performing is more valuable still. Knowing where agents stand helps us direct our efforts to improve the overall performance and quality of the center. The objective isn't just to identify problems and what agents are doing wrong, but also to identify what they are doing well.
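The sampling arithmetic at the top of the article can be made concrete with a short sketch. The 1,600 and 4 are the article's figures; the 50% pass rate and the normal-approximation margin of error are my own illustrative assumptions (worst-case spread), not numbers from the article:

```python
import math

# Figures from the article: 4 evaluated calls out of ~1,600 handled per month.
calls_per_month = 1600
sampled = 4

fraction = sampled / calls_per_month
print(f"Fraction evaluated: {fraction:.2%}")               # 0.25% -> a quarter of one point
print(f"One call in every {calls_per_month // sampled}")   # one in 400

# Margin of error (95% level, normal approximation) when estimating a
# pass/fail quality rate from n sampled calls. p = 0.5 is the worst
# case -- an assumption for illustration, not a figure from the article.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=4:  +/-{margin_of_error(4):.0%}")   # roughly +/-49 points
print(f"n=30: +/-{margin_of_error(30):.0%}")  # roughly +/-18 points
```

With only 4 calls, an estimated quality rate can be off by nearly 50 percentage points either way, which is the statistical version of "the sample size is just too small to have meaning."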
Both matter: coaching on areas for improvement leads to better individual performance, and sharing best practices improves the overall performance of the center. Performance reviews have a place and a time, and that is your regularly scheduled performance review. The agent's individual quality reviews may play a small role there, specifically with respect to improvement over time as their skills developed. Remember that an agent's failure to improve or to overcome performance deficiencies is as much a censure of the coaching and skills development staff, and of the recruiting and staff selection processes, as it is of the agent in question. Positioning the Quality program correctly, with its strengths and weaknesses, function, and goals understood, is key to a well-functioning center. This positioning needs to be known by senior management and by the agents alike, so that each can recognize their contribution and how all can help with the center's success.
I originally published this post and received a number of comments questioning our perspective. Recently we received a comment from Karyn Dupree (@KarynDu) which presented the Hawthorne Effect as a rationale for small-sample monitoring. This was quite thought-provoking, and I have added Karyn's comment and our reply below. Let me know your thoughts on the post and the points of view raised. Thank you.
I found your article quite interesting; however, I disagree with the idea that monitoring only a small percentage of calls is insignificant in the big picture. The big differentiator is that the agents know they are being randomly recorded and listened to. This is called the Hawthorne Effect.
The Hawthorne Effect – According to wikipedia.org, the Hawthorne Effect is defined as: “An experimental effect in the direction expected but not for the reason expected; i.e., a significant positive effect that turns out to have no causal basis in the theoretical motivation for the intervention, but is apparently due to the effect on the participants of knowing themselves to be studied in connection with the outcomes measured.”
This is why it is important to listen to and monitor calls. Unfortunately, you cannot listen to all 1,600 calls, but a percentage can statistically represent the overall behavior of the agent. As experts in Quality Monitoring, we actually recommend no fewer than 2 audits per week per agent. This really paints a good picture of the overall behavior, be it good or unacceptable. It also provides opportunities for coaching and for praising certain behaviors.
Thank you for all your great blogs and helping us all to achieve great Customer Satisfaction and Loyalty!
Thank you so much for your comment; we appreciate your input. You cite the Hawthorne Effect as lending validity to an otherwise statistically insignificant sampling process. While I respect your point of view, I would challenge the validity of that conclusion.
The Hawthorne effect does suggest that observed individuals behave or perform better than unsupervised individuals for a limited time, as long as they suspect or know they are being observed. The effect, however, diminishes over time. Moreover, to state that performance improves while being observed does not in and of itself say anything about whether the standard deviation narrowed. If the standard deviation, i.e. the range of quality from best call to worst call, remained unchanged, then even if the Hawthorne effect remained in force over time the variance is unchanged. Any improvement seen must be understood in the context of where performance stands at that time, and the Hawthorne effect is thereby negated. This would be a case of 'a high tide raising all boats' rather than evidence that statistical validity has been rendered irrelevant.
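The 'high tide raising all boats' point can be sketched numerically. The scores and the size of the lift below are invented for illustration, not data from the article; the sketch only shows that a uniform observation effect moves the mean while leaving the spread untouched:

```python
import statistics

# Hypothetical call quality scores for one agent (illustrative only).
baseline = [62, 70, 75, 80, 88, 91]

# Assume observation lifts every call's score by a constant amount --
# the simplest model of a Hawthorne-style effect.
hawthorne_lift = 5
observed = [score + hawthorne_lift for score in baseline]

# The mean rises by the lift, but the standard deviation -- the gap
# between best and worst calls -- is exactly the same.
print(statistics.mean(baseline), statistics.stdev(baseline))
print(statistics.mean(observed), statistics.stdev(observed))
```

The tide rose, but the variance did not narrow: a small sample drawn from the lifted distribution is just as unrepresentative as one drawn from the baseline.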
Other research even questions the utility and applicability of the Hawthorne Effect. Roethlisberger found that greater productivity resulted when management made workers feel valued and aware that their concerns were taken seriously. On the surface this is well aligned with the original findings of the Hawthorne (experimental) Effect. The new aspect Roethlisberger introduced is that the effect occurs as a result of 'vested self-interest'. The effect depends on the participants' interpretation of the situation; it is not awareness per se, nor special attention per se, but the participants' interpretation that must be investigated (Adair).
So even if the Hawthorne Effect is real, and even if the effect did not erode back to zero over time, if the participants, the call center agents, do not perceive and interpret the additional attention of monitoring and coaching to be in their best interests, then no effect occurs. This is another reason to discard the 'catch them doing something bad' model of Quality Assurance.
But that is not my only concern with this logic. Individual agents may make positive adjustments in response to the attention of monitoring and coaching, or so we hope. Those adjustments may be long-lived or short-lived depending upon the agent and the coaching involved. The important point, and the original point of the article, is that Quality Assurance (QA) programs are designed to identify the quality of the service provided; they are not meant to be a weekly performance review. In many cases where a QA program is put in place, performance does rise, and this rise may be due to the Hawthorne effect, as you stated, to a novelty effect, or to the agents' perception of a benefit in improving. But these do not account for the entire rise. With the staff aware that supervisors and monitors are paying attention, there is likely greater concentration on doing what has been asked, especially among those to whom positive coaching and encouragement are provided.
However, the purpose of a QA program is to report what the quality actually is; hence the 'Assurance' part of the title. Centers often use the title of Quality Assurance Program when they really mean a Quality Monitoring (QM) or Quality Listening program. A QM program is usually part of an overall QA program. A well-designed QM program can, and usually does, provide fast and (we hope) effective feedback to agents in order to improve how they perform the functions they've been asked to do. This feedback loop, and as stated above the positivity of this loop, is critical, especially in the early stages of an agent's call center career. (For more on this subject see Talent is Overrated by Geoff Colvin.) Weekly monitoring and coaching can quickly become a form of performance review.
A full Quality Assurance program should, and usually does, take a broader view of the environment. A single agent and a small sample of calls are fine for QM, performance improvement, and coaching exercises for that agent. From a senior manager's point of view, it is more important to be able to view the entire system, all or groups of agents, all calls and call types or large subcategories, and the overall trends and control levels, in order to better manage the center or its programs.
So, to reiterate our original premise: Quality Assurance should not be a performance review. We would be happy to carry on this dialogue or to speak directly to discuss this matter further. Thank you once again for your comments.
Read the entire post and all the comments here
- callcenterperspectives posted this