Selection and Staffing I: Personnel Selection
Learning objectives
- Be able to define selection/staffing
- Understand the purposes of recruitment
- Understand the purpose of realistic job previews
- Know the basics of selection (the more advanced stuff is next week)
- Know the 4 basic selection system validation designs
Staffing Decisions
- Decisions that are associated with recruiting, selecting, promoting, and terminating employees
- Today we're going to focus on recruiting and selection
Recruitment: The Short Course
- Recruitment: the art and science of attracting people to apply for the openings in our organization
- Recruiting, on the cheap: a cardboard sign advertising the opening in our store window
- Recruiting, NOT at all on the cheap: a 30-second ad during the Super Bowl ("Join OUR Team!")
- Outsourced recruiting: Monster.com, CareerHarmony.com, temp agencies, etc.
Four goals
- Increase the size of your applicant pool; if we have a bigger pool, we can usually make better choices
- Increase the proportion of applicants with the necessary KSAs for the job
- Increase other characteristics we might want in our applicant pool, such as diversity
- Convince desirable applicants that your organization is a great place to work
Some nice effects
- Job candidates infer a lot about organizations from recruiter behavior
- Other materials (pamphlets, web sites) influence applicant perceptions of the organization and their intent to apply
- Recruiting does more than "get the word out" about the organization/job
  - It influences how your potential applicants view the organization, and thus their willingness to apply for, and eventually to accept, the job
One nice recruiting strategy
- Realistic Job Previews (RJPs)
  - Show people what the job is really like, not an over-optimistic portrayal
  - This lets applicants make good choices about the organization
    - E.g., when they don't fit the organization, they won't take the job, saving the organization the cost of selection, training, and turnover
RJPs
- Premack & Wanous (1985) meta-analysis of 21 RJP studies found:
  - Lower initial job expectations
  - Increased self-selection
  - Increased commitment to the organization
  - Increased job satisfaction
  - Increased job performance
  - Decreased turnover
Personnel Selection Basics
- People differ in the quantity and the quality of the work they will produce
- We want to hire people who will produce a lot of nearly perfect work (ideally)
- We need to know something about people to base a decision on; otherwise, we're really just picking people at random
- Selection is gathering that information about people, and using it to make the hiring decision
- Generally, we want the information that has the most to say about how people will perform
More Basics
- Need to make a distinction between the measurement method (selection instrument or tool) and what is actually measured
  - Example: the Wonderlic is the instrument used to collect information; the construct being measured is cognitive ability
- We can measure the constructs in lots of ways (formal tests, interviews, biodata)
Selection systems
- The information we gather, the instruments we use to gather it, and the method we use to combine the information to make the hiring decision are collectively our "selection system"
- Deciding what information we want about applicants is critical
- We want:
  - Maximal predictive validity, good face validity
  - Unbiasedness
  - Job relevance
  - Inexpensive & quick to gather
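The "combine the information" step of a selection system can be sketched in code. A minimal illustration of a weighted-composite approach; all applicant names, scores, and weights below are invented for the example, not taken from any real validation study:

```python
# Hypothetical sketch: combining standardized scores on several predictors
# into a single composite used to rank applicants. In practice, the weights
# would come from a validation study, not be made up as they are here.

# Each applicant's standardized scores on three predictors (invented data)
applicants = {
    "A": {"cognitive": 1.2, "interview": 0.5, "biodata": 0.8},
    "B": {"cognitive": 0.3, "interview": 1.4, "biodata": -0.2},
    "C": {"cognitive": -0.5, "interview": 0.1, "biodata": 0.4},
}

# Assumed regression-style weights (illustrative only)
weights = {"cognitive": 0.5, "interview": 0.3, "biodata": 0.2}

def composite(scores):
    """Weighted sum of an applicant's standardized predictor scores."""
    return sum(weights[k] * v for k, v in scores.items())

# Rank applicants from highest to lowest composite score
ranking = sorted(applicants, key=lambda a: composite(applicants[a]), reverse=True)
print(ranking)
```

With these made-up numbers, applicant A tops the ranking because the highest-weighted predictor (cognitive ability, here) dominates the composite.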
Examples
- The NFL Combine
  - Wonderlic Personnel Test
  - 40-yard dash
  - Bench press
  - Etc.
- UIUC Psychology Ph.D. program
  - GRE Quantitative score
  - GRE Verbal score
  - Undergrad GPA
Where to start?
- Ideally, with a local job analysis that determines the KSAs needed for the relevant job tasks
  - Then we know what to measure
- Other places that it might be okay to start:
  - Non-local job analysis, e.g., the O*NET
  - Other previous research
  - Off-the-shelf instruments (generally cheap)
Where not to start!
- "Firm handshake" test
- "If you could be any kind of tree, what kind of tree would you be?"
- Graphology, horoscopes, and crystal balls
- Useless tests (the Non-Cognitive Questionnaire, Myers-Briggs)
- Legally indefensible measures (MMPI)
VALIDITY
- Those methods are all bad because they have little to no PREDICTIVE VALIDITY
  - We want to use instruments that have a strong relationship to future job performance
- We can quantify this in a lot of ways:
  - Number or percent of correct vs. incorrect hires
  - Expectancy tables (e.g., at this test score, we expect this level of performance)
  - Average job performance of the selected group
- Remember, though, that job performance is multidimensional. We're simplifying to the case where it is treated as unitary
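The "percent of correct hires" idea can be made concrete with a small computation. A hypothetical sketch: the test scores, performance ratings, hiring cutoff, and success threshold below are all made up for illustration:

```python
# Sketch of quantifying validity as "percent correct hires": applicants at
# or above a test cutoff are hired, and hires whose later performance
# clears a success threshold count as correct hires. All numbers invented.

scores =      [55, 62, 48, 70, 51, 66, 59, 73]          # selection test scores
performance = [3.1, 3.8, 2.5, 4.2, 2.9, 3.0, 3.0, 4.5]  # later ratings (1-5)
performance[5] = 3.6                                     # (fix one rating)

cutoff = 58    # hire if test score >= cutoff (assumed)
success = 3.5  # rating counted as a "good" hire (assumed)

hires = [(s, p) for s, p in zip(scores, performance) if s >= cutoff]
correct = sum(1 for _, p in hires if p >= success)
pct_correct = 100 * correct / len(hires)
print(f"{correct}/{len(hires)} correct hires = {pct_correct:.0f}%")
```

Tabulating the same counts across several score bands instead of a single cutoff would give the expectancy table mentioned above.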
Establishing validity
- Frequently, the simplest way of representing this kind of validity is with a correlation coefficient (e.g., the correlation between scores on the measure and scores on a job performance measure)
- When we choose a selection system, we want to be sure that it works (establish criterion-related validity)
- 4 ways of doing this, generally:
  - Concurrent
  - Retrospective (or postdictive)
  - Quasi-predictive
  - Predictive
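The validity coefficient itself is just a Pearson correlation between test scores and a performance criterion. A minimal sketch, with invented data for six hypothetical workers:

```python
# Criterion-related validity as a Pearson correlation, computed from scratch
# so the formula is visible. The two data vectors are invented.

def pearson(xs, ys):
    """Pearson correlation: covariance divided by the product of SDs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

test_scores = [55, 62, 48, 70, 51, 66]       # selection instrument scores
perf_ratings = [3.1, 3.8, 2.5, 4.2, 2.9, 3.6]  # job performance criterion

r = pearson(test_scores, perf_ratings)
print(f"validity coefficient r = {r:.2f}")
```

A coefficient this large would be unrealistic for a real selection instrument; the toy data are deliberately clean so the computation is easy to follow.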
Concurrent Designs
- Take a sample of job incumbents, and give them the new selection test
- Correlate their scores on the test with their current job performance
- A positive correlation provides some evidence that the test predicts job performance
Concurrent Designs: Strengths & Weaknesses
- Convenient: we've already got the workers
- Low risk: we're not making hiring decisions with it, so no risk of bad hires
- Unrealistic: job incumbents may behave differently than applicants since they
  - Already know the job
  - Are unmotivated (have nothing to lose)
- Not predictive: data gathered at the same time. Which thing causes the other?
Retrospective Design
- Again, give the test to a bunch of job incumbents
- We already have old info about their job performance
- Correlate the two
- A positive correlation provides some evidence for a relationship between the test and job performance
Retrospective Designs: Strengths & Weaknesses
- Convenient: we still have the sample of incumbents, AND we already have the job performance data
- Low risk: not making any potentially costly decisions
- Nearly meaningless: we have no way of knowing how people have changed since their job performance was measured; we don't know whether past job performance might be causing current scores on the selection test
- Might be a good pilot test
Quasi-Predictive Design
- We use the new selection system to make our hiring decisions
- Later, we evaluate the performance of the new hires
- If there is a positive correlation between the two, it is evidence that the selection instrument is predictive of performance
- Note: we have NO performance info about the people we did NOT hire
Quasi-predictive Designs: Strengths & Weaknesses
- Potential gains: if the system works, we start getting its benefits right away
- Practical: not wasting our job incumbents' time taking tests they don't need
- Possible criterion contamination: we know how the new hires did on the test, which might bias our performance ratings
- Risk: if the system doesn't work, we might wind up with a lot of bad employees (and maybe a lawsuit!)
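A further consequence of having no performance data on rejected applicants is restriction of range: because only high scorers get hired, the correlation observed among hires understates the instrument's true validity. A simulated sketch of this effect, with an assumed true validity of .50 and a hypothetical top-30% hiring rule (all numbers invented):

```python
# Simulation: selecting only high scorers attenuates the observed
# test-performance correlation (restriction of range). Parameters assumed.
import random

def pearson(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
TRUE_R = 0.5  # assumed true test-performance correlation

# Simulate 2000 applicants: performance = TRUE_R * test + noise
tests, perfs = [], []
for _ in range(2000):
    t = random.gauss(0, 1)
    tests.append(t)
    perfs.append(TRUE_R * t + random.gauss(0, (1 - TRUE_R ** 2) ** 0.5))

r_all = pearson(tests, perfs)  # what a full-range (predictive) design sees

# Quasi-predictive: performance is observed only for the top ~30% of scorers
cutoff = sorted(tests, reverse=True)[len(tests) * 3 // 10]
hired = [(t, p) for t, p in zip(tests, perfs) if t >= cutoff]
r_hired = pearson([t for t, _ in hired], [p for _, p in hired])

print(f"full-range r = {r_all:.2f}, hires-only r = {r_hired:.2f}")
```

The hires-only correlation comes out noticeably smaller than the full-range one, which is why quasi-predictive validity estimates tend to be conservative.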
Predictive Designs
- We administer the selection instrument to a group of applicants
- We don't use it to make our hiring decisions (ideally, we would select randomly)
- Later, we measure job performance
- A positive correlation is strong evidence that our selection system is predictive of job performance
Predictive Designs: Strengths & Weaknesses
- Powerful: we can make strong claims for the predictive power of our selection system with this method
- Low risk: if the system doesn't work, or worse, is biased, we haven't used it to make any decisions, so no harm, no foul
- Opportunity cost: if it works, we've missed out on all the good hires we could have made
- Not practical: organizations hire psychologists to help them make decisions now, not sit on data for months or years
Which design to use?
- Well, a lot will be determined by the situation
- As scientists, we'd like to use the predictive design all the time, since it provides us with the strongest evidence
- Oftentimes, we'll have to compromise and use one of the other designs
- All in all, if you've based your system on a good job analysis, the quasi-predictive design is probably the best balance of power and practicality