Sales Interviews or Tests: More Than Meets the Eye
Many organizations incorrectly assess salespeople

Almost every sales applicant experiences interviews and tests. Is this good or bad? It's hard to tell. Although it would take dynamite to separate most hiring managers from their favorite test, few organizations have conducted studies showing whether its scores predict performance.

High production takes more than just "selling the pencil." It takes a combination of skills and motivation. How can an organization use this information to identify sales winners? Let's dissect the human element of the sales hiring process piece by piece.

Social Desirability Issues

"Social desirability" means that instead of being brutally honest, applicants say things that sound good. For example:

Interviewer (seeking potentially negative information): "What is the one thing you would like to change about yourself?"
Applicant (out-flanking the question): "Umm…I probably work too hard…and…I should take better care of my health."

"Working hard" and "caring about health" are examples of socially desirable answers. They sound good, but only a green interviewer would take them at face value. Social desirability also affects scores on pencil and paper tests.

Sales-ability tests claim to predict sales success by asking sales-like questions, then using the answers to predict prospecting, servicing, cross-selling, and so forth. Pencil-and-paper sales tests are trustworthy tools, right? Well, think again.

Many sales tests were developed and scored based on salespeople already in the job. That can be a big problem. Why? Employed salespeople and sales applicants tend to answer differently. When the same test is later used to hire applicants, organizations are faced with a dilemma: does a high score indicate a true winner, a socially desirable winner-wannabe, or someone totally out of touch with reality?

Selective Memory Issues

Human brains are not tape recorders. Tape recorders are simple machines: what goes in, comes out. Assuming, of course, that someone remembered to install new bunny-batteries.

Hiring managers have motives and emotions that color memory and affect judgment. For example, although most sales departments are staffed with a full range of people from superstars to super-duds, most hiring managers vividly recall the time they hired an applicant who failed a sales test and succeeded anyway, or who passed the test and then failed. Wrong-headed memory almost always outweighs rational judgment.

For example, I once worked with a C-level manager who was completely convinced the Watson-Glaser predicted job success. I showed him a study written by the W-G publisher showing that test scores did not vary appreciably between high-level and low-level jobs, and that one or two questions could shift an applicant's score by as much as 20 percentile points. I also pointed out that although we hired only applicants with high W-G scores, our salespeople still showed considerable differences in productivity.

No reaction. Mr. C-Level probably had some old wadded-up W-G tests stuffed into his ears. My powerful argument, backed with thoughtful studies, facts, and figures, received the same reaction as beans an hour after a BBQ. I gave up. Selective memory trumped rational argument.

Situational Judgment Tests

Sometimes people will do a thorough job of interviewing subject-matter experts and use that information to put together a collection of situations and possible responses. For example, say you are serving customers standing in line to order the newest fast-food sandwich, the HABBIE (Heart Attack on a Bun). An eager-to-die customer cuts to the front of the line.

Do you: a) pretend nothing happened; b) serve two HABBIEs out of spite; c) politely suggest someone else was ahead of them; d) take your break; or e) beat them senseless with a Jedi light-saber from a kiddie-meal?

Sure, you say this is an easy question. But wait. Is this really a no-brainer? Is it just semantics? Could a savvy applicant guess the right answer without the underlying skill (a false positive)? Or could an applicant who misses the question, but who could learn quickly on the job, be screened out (a false negative)?

Situational-judgment tests tend to screen out people who could have been successful if only given half a chance, or who did not know the restaurant discontinued carrying light-sabers because eating them led to an early death.

Measuring Maximum or Typical Performance?

"Maximum" performance refers to the BEST an applicant or employee can do. "Typical" performance is the average of a salesperson's day-by-day performance. What's the difference? Maximum performance is a function of skills (i.e., technical proficiency, intelligence, social skills, persuasive ability, and so forth); whereas, typical performance is largely a function of motivation.

Here is where it gets confusing. We are told that overall performance equals maximum performance plus typical performance, as if motivation were all it takes to succeed in a high-skill job. Professional athletes don't fall off the turnip truck. They have highly tuned physical skills that set them apart. Even on an off day, a professional athlete will outperform a highly motivated average citizen.

Maximum performance requires an employee to apply his or her full complement of skills to the job. Weak skills equal weak maximum performance. Strong skills equal strong maximum performance. Typical performance, on the other hand, depends on the employee's motivation to use those skills. Think of typical performance as meeting the employee's personal comfort level. Ability tests are good predictors of maximum performance.

But how do you know what you are measuring?

Situational and Behavioral Interviews

Situational interviews sound something like this: "What WOULD you do if…?" They are future-oriented and hypothetical. A behavioral interview looks backward. It sounds something like this: "What HAVE you done…?" Behavioral interviews are experience-based.

By the way, don't make the mistake of thinking that a clever interview question is all you need to hire the best people. Accuracy depends substantially on interviewer training, job analysis data, multiple interviewers, and standardized scoring. Asking a structured interview question without this information is like giving a test without knowing what you want to measure or having an answer key.

So what kind of results do situational and behavioral interviews produce? Which interview technique is better? Which technique better predicts applicants' maximum performance? Typical performance?

Ute-Christine Klehe and Gary Latham attempted to address these questions in an article published in volume 19(4) of Human Performance (2006). The study is quite long, but here are some highlights:

  • The authors developed 16 equivalent behavioral and situational questions to assess team-player behaviors and interviewed 79 subjects, with two trained interviewers scoring responses on a standardized rating sheet.

  • Maximum (skill-related) performance was measured by peer observations recorded during an intensive five-day work project, and typical (motivation-related) performance was measured by peer observations averaged over a four-month period.

Klehe and Latham found that the situational interview and the behavioral interview were both weak predictors of maximum performance. That is, neither one was very good at predicting on-the-job skills. On the other hand, both techniques were about equally good at predicting day-by-day average performance.

Whole Job, Whole Person

We come full circle every time. The most effective way to evaluate performance is to use measurement tools that evaluate each job requirement. In addition, we need to understand exactly what we are measuring.

Are we measuring the best an applicant can do? Are we measuring an applicant's average performance? Or are we simply measuring whether applicants "look like us"?

Are our tests being affected by social desirability (artificially high scores that may have nothing to do with either maximum or typical performance)? Is selective memory affecting the tests and scores used to make hiring decisions? Or are we screening out too many qualified candidates?

Hiring is pretty straightforward when we understand it. We just have to step outside ourselves, understand what is happening, and make a concerted attempt to minimize human error.

----------
This article originally appeared on ERE.net:
http://www.ere.net/articles/db/8ECB55B2CADA457489EE678EFF10F799.asp
----------