Personnel Psych Exam 3

Labor Market

A geographical area within which the forces of supply (people looking for a job) interact with the forces of demand (employers looking for people) and thereby determine the price of labor

Internal Labor Market

Bringing employees into entry-level positions and then promoting them up to positions with more responsibility (e.g., manager)

Trait Activation

personality traits are expressed in response to situational cues

Screening

-Hiring procedures that are used to weed out candidates that would certainly not be a good fit for a job
-This term is also used to describe any rapid or rough hiring process

selection

The hiring procedures used to select particular individuals from a pool of qualified applicants

Overt integrity tests

Assess attitudes about theft and other dishonest behavior, or ask for admissions of such behavior

personality-based integrity tests

-Assess aspects of personality (e.g., conscientiousness) or thought patterns that are likely to lead to dishonest behavior
-Can be used to predict a wide range of counterproductive or dishonest behavior

Overt integrity tests assume people with low integrity will:

-Actually report more dishonest behavior
-Attempt to justify their own dishonest behavior
-Believe others display similar amounts of dishonest behavior to their own
-Be more impulsive and less considerate

Experience-based vs. situational questions (interviews)

-Experience-based questions ask, "Can you tell me about a time when . . . ?"
-Situational questions ask, "What would you do if . . . ?"
-Interviewer ratings for situational questions tend to have higher reliability
-Experience-based interviews tend to have higher validity

Tests vs. inventories

Tests refer to procedures that have correct/incorrect answers; inventories do not

Mechanical combinations of data in predictors

Data-combination strategies are mechanical (or statistical) if:
-Individuals are assessed on some instrument
-They are assigned scores based on that assessment
-The scores subsequently are correlated with a criterion measure

Judgmental combinations of data in predictors

Predictions are judgmental (or clinical) if:
-A set of scores or impressions must be combined subjectively in order to forecast the criterion

Compensatory prediction models

-Assumes that high scores on one predictor can substitute or compensate for low scores on another predictor

noncompensatory prediction models

low scores (below cutoff) on a predictor cannot be counteracted by high scores in other predictors
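
To make the contrast concrete, here is a minimal Python sketch (not from the course material; the predictor names, weights, and cutoff values are invented for illustration):

# Sketch: compensatory vs. noncompensatory combination of two predictors.
# Predictor names, weights, and cutoffs are hypothetical.

applicant = {"cognitive": 62, "interview": 90}   # standardized 0-100 scores

# Compensatory: a weighted sum, so a high interview score can offset
# a lower cognitive score.
weights = {"cognitive": 0.6, "interview": 0.4}
composite = sum(weights[p] * applicant[p] for p in weights)
passes_compensatory = composite >= 70

# Noncompensatory (multiple cutoff): every predictor must clear its own
# cutoff; a low score on one cannot be offset by the other.
cutoffs = {"cognitive": 70, "interview": 70}
passes_noncompensatory = all(applicant[p] >= cutoffs[p] for p in cutoffs)

print(composite, passes_compensatory, passes_noncompensatory)
# composite = 0.6*62 + 0.4*90 = about 73.2 -> passes the compensatory rule,
# but fails the noncompensatory rule because cognitive (62) is below its cutoff (70).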

Relative costs of different recruiting sources

-Private employment and executive search agencies are generally the most expensive
-Advertising and Internet responses, write-ins, and internal transfers and promotions are next highest
-Employee referrals, direct applications (mail or Web based), and walk-ins are generally the least expensive

Time lapses: what are the consequences of time lapses, how can they be reduced

-Delays between the phases of recruitment are perceived very negatively by candidates, especially those of high quality
-Time lapses can be reduced by technology (posting jobs online and processing online applications is faster and saves time in screening)

Yield ratios; calculating recruitment needs based on ratios

-The ratios of leads to invites, invites to interviews, interviews (and other selection instruments) to offers, and offers to hires
-These are usually based on past data (if it's available)
-From these, you can start with the number of hires you'll need and work backward through each ratio to estimate how many offers, interviews, invites, and leads are required

Yield ratios; calculating recruitment needs based on ratios pt. 2

-If no past data on recruitment exists, use educated guesses for yield ratios
>Lean towards overestimation
-A similar process can be done with time lapse data (the time between each phase of recruitment) to estimate how long it will take from the point an opening is announced until it is filled

Recruitment of diverse applicants

-Minority applicants consistently use formal recruitment sources rather than informal ones
-Informal sources such as employee referrals can work to the employer's advantage if the workforce is composed of members from different gender, racial, and ethnic groups

The applicant perspective in the recruitment process

Most applicants:
-Have an incomplete and/or inaccurate understanding of what a job opening involves
-Are not sure what they want from a position
-Do not have much self-insight with regard to their knowledge, skills, and abilities
-Cannot accurately predict how they will react to a job once they are in it

Applicant perspective part 2.

Work environment and organizational image influence applicants' attraction to a company
-Applicants prefer decentralized organizations
-Performance-based pay is preferred over seniority-based pay
-Organizational image may be enhanced by simply providing more information to applicants

Realistic Job Previews (RJPs)

All organizations try to make themselves seem like a good place to work, which inflates expectations
>Provide a realistic view of what it is like to work for an organization
>Should be conducted to reduce pessimistic expectations and overly optimistic expectations

The RIASEC vocational interests model categories

A commonly used personality-based vocational taxonomy was developed by John Holland
-Realistic
-Investigative
-Artistic
-Social
-Enterprising
-Conventional

The basic rationale behind the majority of selection procedures

is that past behavior is the best predictor of future behavior

Recommendations/reference checks: information obtainable

Four types of information obtainable:
1. Employment and educational history (including confirmation of degree and class standing or GPA)
2. Evaluation of the applicant's character, personality, and interpersonal competence
3. Evaluation of the applicant's ability to perform the job
4. Willingness to rehire the applicant

Recommendations/reference checks; issues with them

Recommendation letters are often seen as having little value
-The average validity of letters is .14
-Recommendations rarely include unfavorable information, which does not help when trying to discriminate between applicants
-Characteristics of the writer often influence a letter's content as much as characteristics of the applicant

Weighted application blanks

-Weights can be assigned to different items in an application form based on their predictive power
-The resulting regression equation can be used to form a composite, allowing for a single cutoff score
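
A minimal Python sketch of how a weighted application blank composite might be scored; the item names, weights, and cutoff are hypothetical, and in practice the weights would come from each item's demonstrated predictive power:

# Sketch: scoring a weighted application blank.
# Item names, weights, and the cutoff are invented for illustration.

item_weights = {
    "years_related_experience": 2.0,
    "has_required_certification": 3.5,
    "tenure_at_last_job_years": 1.0,
}

def wab_score(responses):
    """Weighted composite of an applicant's item responses."""
    return sum(item_weights[item] * value for item, value in responses.items())

applicant = {
    "years_related_experience": 4,      # contributes 8.0
    "has_required_certification": 1,    # contributes 3.5
    "tenure_at_last_job_years": 3,      # contributes 3.0
}

CUTOFF = 12.0                           # single cutoff on the composite
score = wab_score(applicant)            # 14.5
print(score, score >= CUTOFF)           # 14.5 True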

Validity issues with polygraph testing

-Its accuracy for screening purposes is almost certainly lower than what can be achieved by specific-incident polygraph tests
-The physiological indicators measured by it can be altered by conscious effort, either through cognitive or physical means

Issues with integrity tests

Overt integrity tests assume people with low integrity will:
-Actually report more dishonest behavior
-Attempt to justify their own dishonest behavior
-Believe others display similar amounts of dishonest behavior to their own
-Be more impulsive and less considerate

The relative value of training (education) vs. experience

Statistical methods for evaluating work experience can have high validities (M = .45)
A survey of 3,000 employers revealed that experience is valued more than academic performance
-Most important characteristics are attitude, communication skills, and previous work experience

The functions of an interview

Two main functions for selection purposes:
1. Fill information gaps in other selection devices (e.g., regarding incomplete application blank responses)
2. Assess factors that can be measured best via face-to-face interaction (e.g., appearance, speech, poise)

Response distortion in interviews

-Interviewees tend to answer questions in a way that enhances their favorability
-Applicants may be more truthful if giving their responses to a computer than to a person
>However, applicants show very negative reactions to computerized interviews

Validity evidence for interviews

Validity for interviews ranges on average between .35 and .45

Reliability evidence for interviews

-Interrater reliability is about .70; internal consistency of interviews is .34
-This suggests that low validities in interviews most likely result from low internal consistency

Interpersonal influences on interviews (applicant-interviewer similarity)

-When an interviewer feels that an interviewee shares his or her attitudes, ratings of competence and affect are increased
-The similarity effects are not large and can be reduced/eliminated by using a structured interview and a diverse set of interviewers

Interpersonal influences on interviews (nonverbal cues)

-Positive nonverbal cues (e.g., smiling, attentive posture, smaller interpersonal distance) produced consistently favorable ratings
-Nonverbal behaviors interact with other variables, especially gender
>Example: direct eye contact may be evaluated differently depending on whether the applicant is a man or a woman

Cognitive influences on interviews (contrast effects)

-An interviewer's impression of one applicant or a series of applicants often influences how they perceive later applicants
>Example: after seeing several bad applicants, an interviewer may rate an average applicant higher
-These effects are extremely pervasive

Cognitive influences on interviews (confirmatory bias)

-Interviewers may be influenced by information provided by applicants before an interview (e.g., their application data or test scores)
-They tend to shape the interview to confirm their existing impression of an applicant

The effects of applicant characteristics on interview evaluations

-Attractiveness is only an advantage in jobs where it is relevant
-Being obese has a small negative effect on evaluations
-Conscientiousness is related to the number of interview invitations received; extraversion (positively) and neuroticism (negatively) are related to receiving job offers

The effects of coaching on interview evaluations

-Individuals who participate in coaching programs that provide information on interviews and tips on successful interviewing tend to have higher interview scores than those who do not

The effects of interviewer training and experience on interviews

Training may be beneficial, but experience in general has little effect on decision making

The advantages of using structured rather than unstructured interviews

Structured interviews have several advantages over unstructured interviews:
-Structured interviews are usually based on job analysis and measure job-relevant constructs
-Validities for structured interviews range from .35 to .62; validities for unstructured interviews are considerably lower

Using global or administrative criteria for managers

-Tell us where a manager is on the "success" continuum, but say nothing about how he or she got there
-These are fine for decision-making and have had some success in establishing validity for predictors
-They're often filled with contamination and error

The constructs measured in leadership ability tests

-Designed to measure two constructs:
1. Providing consideration ("getting along"): managerial acts oriented toward developing mutual trust, which reflect respect for subordinates' ideas and consideration of their feelings
2. Initiating structure ("getting ahead"): managerial acts oriented toward defining and organizing subordinates' work in order to attain the group's goals

Validity evidence for cognitive tests (correlations)

Managerial success has been forecast most accurately by tests of GMA and general perceptual ability (correlations range between .25 and .30)
-When corrected for criterion unreliability and range restriction, the validity of tests of GMA increased to .53

Issues with/criticisms of cognitive tests

-Although g seems to be the best single predictor of job performance, it is also most likely to lead to adverse impact (e.g., differential selection rates for various ethnic groups)
-The overall standardized difference (d) between Whites and African Americans is about one standard deviation

Issues with/criticisms of cognitive tests
What causes group differences?

There are over 100 possible reasons, including:
-Test construction and validation factors (e.g., cultural bias)
-Physiological factors (e.g., prenatal and postnatal influences such as differential exposure to pollutants and iron deficiency)
-Economic and social factors (e.g., differences in socioeconomic status and educational opportunities)

How well personality factors predict job performance (correlations); compared with cognitive predictors

Personality factors predict aspects of job performance that are unrelated to the aspects predicted by intelligence and job-relevant skills (incremental prediction/validity)
-Researchers are still exploring how personality traits influence an employee's behavior and performance on the job

The general meanings of each of the Big 5 factors:
(that serve as a basic taxonomy for classifying personal attributes)

1. Neuroticism: being anxious, depressed, angry, embarrassed, emotional, worried, and insecure
2. Agreeableness: being flexible, trusting, good-natured, cooperative, forgiving, softhearted, and tolerant
3. Extraversion: being sociable, gregarious, assertive, talkative, and active
4. Openness to experience: being imaginative, curious, original, broad-minded, and artistically sensitive
5. Conscientiousness: being careful, thorough, responsible, organized, hardworking, and achievement-oriented

Other personality models: HEXACO

Basically like the Big 5 plus one extra dimension, Honesty-Humility; 6 factors

Other personality models: Eysenck PQ

Extraversion, Neuroticism, and Psychoticism; 3 factors

Other personality constructs: self-efficacy

one's belief in one's own ability to complete tasks and reach goals

Other personality constructs: locus of control

one's tendency to attribute outcomes to internal or external causes

Other personality constructs: self-monitoring

one's ability to regulate one's behavior to accommodate social situations

Other personality constructs: alpha

A combination of one's Agreeableness, Conscientiousness, and Emotional Stability

Criticisms of using personality tests

-Response distortion (a.k.a. faking) is practically impossible to avoid
-Criterion-related validities are not very high
-Still many arguments about which personality model is most accurate

Counterarguments to these criticisms of using personality tests

-Validities are often much higher when used for specific, trait-relevant criteria
-Personality adds incremental validity above cognitive predictors
-Broader (or narrower) traits tend to predict broader (or narrower) criteria better

Examples of projective tests

-Rorschach inkblot test
-Thematic apperception test
-Sentence completion tests
-Word association tests
-Graphology

How projective tests measure personality

-Projection refers to the process by which individuals' personality structure influences the ways in which they perceive, organize, and interpret their environment and experiences.
-Projection can best be seen and measured when an individual encounters new or ambiguous stimuli

The rationale for using work samples

Work samples follow this idea:
-Current behavior is also probably a very good predictor of future behavior
-The whole point of having predictors is that we don't know applicants' future criterion scores
-We use predictors in order to give us an educated guess about those future scores

Validity of various work samples (leaderless group discussion)

Correlates moderately with job performance (.34) and training performance (.24)

Validity of various work samples (in-basket)

Correlates low to moderately with job performance (.24 to .34) and training performance (.18 to .36)

Validity of various work samples (business game)

Correlations are usually moderate (.25 to .30)

Situational Judgment Tests: validity

Average validity of .34; SJTs add incremental validity to the prediction of job performance above job knowledge, GMA, job experience, and personality

Situational Judgment Tests: fairness

SJTs show less adverse impact based on ethnicity than do cognitive ability tests, but there are still race-based differences, particularly when responses are heavily influenced by GMA. Using a video-based SJT, which is not as reliant on GMA, is a good way to reduce these differences.

Assessment centers: purposes

-By using multiple techniques, standardizing methods of making inferences from such techniques, and pooling the judgments of multiple assessors in rating each candidate's behavior, the likelihood of successfully predicting future performance is enhanced considerably

Assessment centers: duration/size of ACs

-Centers for first-level supervisory positions often last only one day, while middle- and higher-management centers may last two or three days
-When assessment is combined with training activities, the program may run five or six days
-Most ACs process 6 to 12 candidates at a time

Assessment centers: validity

-ACs have high correlations with various criteria (.35 to .55) & high face validity
-The construct mainly being assessed in ACs is problem-solving, but there are many others
-ACs tend to have poor construct validity because different traits are activated by different exercises, so ratings cluster by exercise rather than by dimension

Assessment centers: fairness

-ACs have much lower racial differences in scores compared to cognitive tests
-Women tend to receive higher AC ratings than men

Various approaches to combining predictors (multiple hurdle)

-Applicants progress through stages of the selection process only if they meet cutoffs
-If they do not meet a cutoff score, they are not given any more predictors and are no longer considered for the job
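
A small Python sketch of the multiple-hurdle logic, with hypothetical stage names and cutoffs; the key point is that an applicant who misses any cutoff receives no further predictors:

# Sketch: a multiple-hurdle sequence.  Stage names and cutoffs are invented.

hurdles = [                      # (stage, cutoff), administered in this order
    ("screening_test", 60),
    ("work_sample", 70),
    ("structured_interview", 75),
]

def still_in_running(scores):
    """True only if the applicant has cleared every hurdle reached so far."""
    for stage, cutoff in hurdles:
        if stage not in scores:          # hasn't reached this stage yet
            return True
        if scores[stage] < cutoff:       # failed a hurdle: out of the process
            return False
    return True

print(still_in_running({"screening_test": 72}))                      # True
print(still_in_running({"screening_test": 72, "work_sample": 55}))   # False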

Various approaches to combining predictors (multiple cutoff)

-Some or all predictors contain certain "cutoff" scores; if an applicant scores below the cutoff, they are excluded entirely from further consideration
-This model is noncompensatory - low scores (below cutoff) on a predictor cannot be counteracted by high scores on other predictors
-Setting a cutoff:
>No single method has been proven best
>Most popular is the Angoff method

Various approaches to combining predictors (multiple cutoff-> Angoff Method)

-Expert judges rate each item in terms of the probability that a barely or minimally competent person would answer the item correctly
-The probabilities are then averaged for each item across judges to yield item cutoff scores, which are summed to yield a cutoff score for the total test
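
A short Python sketch of the Angoff arithmetic, using invented judge probabilities for a four-item test:

# Sketch: deriving a test cutoff with the Angoff method.
# Each value is one judge's estimate that a minimally competent person
# answers that item correctly (all numbers are hypothetical).

judge_ratings = [            # rows = judges, columns = items
    [0.8, 0.6, 0.9, 0.5],
    [0.7, 0.5, 0.8, 0.6],
    [0.9, 0.7, 0.8, 0.4],
]

n_judges = len(judge_ratings)
# Average across judges for each item -> item cutoff scores
item_cutoffs = [sum(judge[i] for judge in judge_ratings) / n_judges
                for i in range(len(judge_ratings[0]))]
# Sum the item cutoffs -> cutoff score for the total test
test_cutoff = sum(item_cutoffs)

print(item_cutoffs)   # roughly [0.8, 0.6, 0.83, 0.5]
print(test_cutoff)    # about 2.73 out of 4 items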

Various approaches to combining predictors (regression)

-After obtaining both predictor AND criterion information, we can determine how strongly each predictor relates to the criterion
-From the strength of each predictor's relationship to the criterion, we can assign weights to each predictor to be used in future selection decisions
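
A brief Python sketch of this idea, assuming past predictor and criterion data are available; the numbers are invented, and ordinary least squares stands in for whatever weighting procedure an organization actually uses:

# Sketch: deriving regression weights from past predictor and criterion data,
# then applying them to a new applicant.  All values are hypothetical.
import numpy as np

# Past employees: columns = [intercept, test_score, interview_rating]
X = np.array([
    [1, 55, 3],
    [1, 70, 4],
    [1, 62, 2],
    [1, 80, 5],
    [1, 48, 3],
], dtype=float)
y = np.array([2.9, 4.1, 3.0, 4.6, 2.7])   # job performance ratings

# Ordinary least squares: weights that best relate predictors to the criterion
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predicted performance for a new applicant with test = 65, interview = 4
new_applicant = np.array([1, 65, 4])
predicted_performance = new_applicant @ weights
print(weights, predicted_performance)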

Know the practices for:
-Reducing response distortion in application forms

REALISTIC JOB PREVIEWS (RJPS)
All organizations try to make themselves seem like a good place to work, which inflates expectations
-RJPs provide a realistic view of what it is like to work for an organization
-Should be conducted to reduce pessimistic expectations and overly optimistic expectations

Know the practices for:
-Avoiding legal complications when using drug testing for screening

-Inform all employees and job applicants (e.g., in their contracts) of the company's policy regarding drug use and potential for testing
-Present the program in a medical and safety context (e.g., it will help to improve the health of employees and also help to ensure a safe workplace)

Know the practices for:
-Improving interviews

1. Link interview questions tightly to job analysis results, and ensure that behaviors and skills observed in the interview are similar to those required on the job; a variety of types of questions may be used
2. Ask the same questions of each candidate

Know the practices for:
-Improving interviews (part 2)

5. Combine ratings mechanically (e.g., by averaging or summing them) rather than subjectively
6. Provide a well-designed and properly evaluated training program to communicate this information to interviewers, along with techniques for structuring the interview

Know the practices for:
-Dealing with the issue of adverse impact in cognitive tests

-Test-score banding
-Consider atypical aspects of intelligence that may be useful as predictors
-Use cognitive ability tests as a starting point rather than an ending point, meaning that an overemphasis or sole reliance on g in selecting managers and employees should be avoided

Know the practices for:
-Reducing response distortion (faking) in personality tests

-Forced-choice (ipsative) methods
-Warnings of faking detection scales
-Actual faking detection scales (e.g., the Unlikely Virtues scale)
-Conditional reasoning measures (examine how respondents justify their behavior, which reveals their true dispositions)

Know the practices for:
-Combining data (mechanical vs judgmental)

Generally, mechanical prediction combinations are much better than judgmental
-Accuracy of prediction may depend on appropriate weighting of predictors (which is virtually impossible to judge accurately)
-Mechanical methods can continue to add predictors and weight them consistently, which human judges cannot do

YIELD RATIO EXAMPLE:
Pretend an organization is hiring 50 people to fill a new job. In the past, they hired 1 out of every 4 people they gave job offers to (1:4 yield ratio); gave offers to 1 out of every 2 people interviewed (1:2 yield ratio); and so on back through interviews, invitations, and leads.

Going through each step, the organization would need:
-50 hires needed; one applicant accepted out of every 4 given a job offer (50 x 4) = 200 people given job offers
-One person given an offer out of every 2 people interviewed (200 x 2) = 400 people interviewed; the same logic continues backward through invitations and leads
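
A short Python sketch of the same backward calculation; the 1:4 and 1:2 ratios come from the example above, while the invitation and lead ratios are assumed in order to complete the chain:

# Sketch: working backward through yield ratios to estimate recruiting needs.

hires_needed = 50
yield_ratios = [            # (stage, selected, entering that stage)
    ("offers", 1, 4),       # 1 hire per 4 offers (from the example)
    ("interviews", 1, 2),   # 1 offer per 2 interviews (from the example)
    ("invites", 1, 3),      # assumed: 1 interview per 3 invites
    ("leads", 1, 6),        # assumed: 1 invite per 6 leads
]

needed = hires_needed
for stage, num, den in yield_ratios:
    needed = needed * den // num
    print(f"{stage}: {needed}")
# offers: 200, interviews: 400, invites: 1200, leads: 7200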