- July 27, 2006
- Posted by: Dave Kurlan
- Category: Understanding the Sales Force
One end user assessed only six of their people. They cherry-picked: their three best and three worst. They wanted to internally validate the results against existing people they already knew. They also assessed those six using a popular behavioral styles test, which is not sales specific. They said the behavioral styles test pegged these people perfectly. But in this case, "pegged" means the assessments described the people: their tendencies and behaviors, and how they were perceived as people, but not how they would perform in the field or whether they should have been hired.
The results from our assessments were different. The three worst were identified as people they shouldn't have hired. No question. Terribly weak and unqualified for a sales position at this company. The three stronger people all appeared much stronger on their assessments than the three weakest; however, only one would have been recommended.
The client wondered whether, since the six were pegged accurately (as people), the other assessment could be more helpful. However, they would have to find a way to translate a description of people into a prediction of sales performance AND draw a conclusion on their own. In addition, the other assessment would only help them find candidates who were similar to these people as people, NOT similar in the way they approach sales!
Our assessment would identify candidates who were all as good as the strongest of those three, AND BETTER. The client would not have to draw a conclusion because our sales specific assessment makes the recommendation for them. Ours also has a predictive validity of 95%. Statistics show that when a company hires a candidate who was not recommended, there is a 75% chance that the candidate will fail. Those are strong odds. The odds in support of following a positive recommendation are just as strong: 95%.
Which would you rather rely on?