IOP 301-T: Test Validity


What is validity?
Validity is the accuracy of the measure in reflecting the concept it is supposed to measure. In simple English, the validity of a test concerns:
• what the test measures, and
• how well it measures it.

Types of validity
• Content-description
• Criterion-prediction
• Construct-identification


Content validity
• Non-statistical in nature
• Involves determining whether the sample used for the measure is representative of the aspect to be measured

Content validity
It adopts a subjective approach whereby we have recourse, for example, to expert opinion for the evaluation of items during the test construction phase.

Content validity
Relevant for evaluating:
• Achievement measures
• Educational measures
• Occupational measures

Content validity
A basic requirement for:
• Criterion-referenced measures
• Job sample measures
These are essential for:
• Employee selection
• Employee classification

Content validity
Measures are interpreted in terms of mastery of:
• Knowledge
• Skills
required for a specific job.

Content validity
Not appropriate for:
• Aptitude measures
• Personality measures
since their validation has to be carried out through criterion-prediction procedures.

Face validity
It is not validity in psychometric terms! It refers only to what the test appears to measure, not to what it actually measures.

Face validity
It is not useless, however, since the appearance of validity can be achieved simply through appropriate phrasing. In a sense, it ensures relevance to the context by employing correct expressions.

Criterion-related validity
Definition: A criterion variable is one with (or against) which psychological measures are compared or evaluated. A criterion must itself be reliable!

Criterion-related validity
It is a quantitative procedure which involves calculating the correlation coefficient between one or more predictor variables and a criterion variable.
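This quantitative step can be illustrated with a short sketch (not part of the original slides). The scores, variable names and library calls below are purely illustrative, assuming test scores as the predictor and supervisor ratings as the criterion.

```python
# Minimal sketch: a validity coefficient as the Pearson correlation
# between predictor (test) scores and criterion scores.
# The data below are invented purely for illustration.
import numpy as np
from scipy.stats import pearsonr

test_scores = np.array([52, 61, 47, 70, 66, 58, 73, 44])          # predictor
job_ratings = np.array([3.1, 3.8, 2.9, 4.4, 4.0, 3.5, 4.6, 2.7])  # criterion

r, p_value = pearsonr(test_scores, job_ratings)
print(f"validity coefficient r = {r:.2f}, p = {p_value:.3f}")
```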

Criterion-related validity
The validity of the measure is also determined by its ability to predict performance on the criterion.

Criterion-related validity
• Concurrent validity: the accuracy with which the test identifies the current status of the test-taker regarding skills and characteristics.
• Predictive validity: the accuracy with which the test forecasts future behaviour. It implicitly contains the concept of decision-making.

Criterion-related validity
Warning! Sometimes a factor may affect the criterion such that it is no longer a valid measure. This is known as criterion contamination.

Criterion-related validity
Warning! For example, the rater might:
• be too lenient
• commit a halo error, i.e. rely on general impressions

Criterion-related validity
Warning! Therefore, we must make sure that the criterion is free from bias (prejudice). Bias definitely influences the correlation coefficient.

Common criterion measures
• Academic achievement
• Performance in specialised training
• Job performance
• Contrasted groups
• Psychiatric diagnoses

Common criterion measures: Academic achievement
Used for validating intelligence, multiple aptitude and personality measures. Indices include:
• School, college or university grades
• Achievement test scores
• Special awards

Common criterion measures: Performance in specialised training
Used for specific aptitude measures. Indices include training outcomes for:
• Technical courses
• Academic courses

Common criterion measures: Job performance
Used for validating intelligence, special aptitude and personality measures. Indices include jobs in industry, business, the armed services and government. Tests describe the duties performed and the ways in which they are measured.

Common criterion measures: Contrasted groups
Sometimes used for validating personality measures. Relevant when distinguishing the nature of occupations (e.g. social versus non-social, such as a Public Relations Officer versus a Clerk).

Common criterion measures: Psychiatric diagnoses
Used mainly for validating personality measures. Based on:
• Prolonged observation
• Case history

Common criterion measures: Summary
Matching criterion measures to the types of measures they typically validate:
• Academic achievement: intelligence, multiple aptitude and personality measures
• Performance in specialised training: specific aptitude measures
• Job performance: intelligence, special aptitude and personality measures
• Contrasted groups: personality measures
• Psychiatric diagnoses: personality measures

Ratings
• Suitable for almost any type of measure
• Very subjective in nature
• Often the only source available

Ratings
These are given by teachers, lecturers, instructors, supervisors, officers, etc. Raters may be trained to avoid common errors such as:
• Halo error
• Ambiguity
• Error of central tendency
• Leniency

Criterion-related validity
We can always validate a new measure by correlating it with another valid (and, obviously, reliable!) test.

Criterion-related validity
Some modern and widely used criterion-prediction procedures are:
• Validity generalisation
• Meta-analysis
• Cross-validation

Criterion-related validity: Validity generalisation
Schmidt, Hunter and colleagues showed that the validity of tests measuring verbal, numerical and reasoning aptitudes can be generalised widely across occupations, because these occupations require common cognitive skills.

Criterion-related validity: Meta-analysis
• A method of reviewing the research literature
• Statistical integration and analysis of previous and current findings on a topic
• Validation by correlation
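As an illustration of the "statistical integration" step (added here, not on the slide), the sketch below combines validity coefficients from several hypothetical studies into a sample-size-weighted mean, in the spirit of a bare-bones Hunter–Schmidt meta-analysis. All study results are invented.

```python
# Minimal sketch: pooling validity coefficients from several studies
# into a single sample-size-weighted mean validity estimate.
# Study results below are invented purely for illustration.
import numpy as np

r = np.array([0.28, 0.41, 0.19, 0.35, 0.30])   # validity coefficient per study
n = np.array([120, 85, 240, 60, 150])          # sample size per study

r_bar = np.sum(n * r) / np.sum(n)              # weighted mean validity
print(f"sample-size-weighted mean validity: {r_bar:.2f}")
```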

Criterion-related validity: Cross-validation
• Refinement of the initial measure
• Application to another representative normative sample
• Recalculation of validity coefficients
• A lowering of the coefficient is expected once chance differences and sampling errors (spuriousness) are minimised
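A minimal sketch of the cross-validation idea (added for illustration; the samples, predictors and weights are simulated): regression weights are derived on one sample, and the validity coefficient is then recalculated on a second representative sample, where it is typically lower.

```python
# Minimal sketch of cross-validation: a regression-weighted composite is
# derived on one sample, then its validity coefficient is recalculated on
# a second sample. The coefficient usually shrinks, because chance
# relationships in the first sample do not replicate.
# All data are simulated purely for illustration.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n, k = 120, 8                                   # people, predictor subtests
X = rng.normal(size=(n, k))
y = 0.4 * X[:, 0] + rng.normal(size=n)          # criterion (one real signal only)

X_deriv, y_deriv = X[:60], y[:60]               # derivation sample
X_new, y_new = X[60:], y[60:]                   # new normative sample

beta, *_ = np.linalg.lstsq(X_deriv, y_deriv, rcond=None)  # weights fitted once

r_deriv, _ = pearsonr(X_deriv @ beta, y_deriv)
r_new, _ = pearsonr(X_new @ beta, y_new)
print(f"validity in derivation sample: {r_deriv:.2f}")
print(f"cross-validated validity:      {r_new:.2f}")      # typically lower
```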

Construct-identification validity
Construct validity is the sensitivity of the instrument in picking up minor variations in the concept being measured. For example, can an instrument (questionnaire) designed to measure anxiety pick up different levels of anxiety, or only its presence or absence?

Construct-identification validity
Any data throwing light on the nature of the trait, and on the conditions affecting its development and manifestation, represent appropriate evidence for this validation.
Example: I have designed a program to lower girls' Math phobia. The girls who complete my program should score lower on the Math Phobia Measure than they did before the program, and lower than girls who have not completed the program.

Construct validity methods
• Correlational validity
• Factor analysis
• Convergent and discriminant validity

Construct validity methods: Correlational validity
This involves correlating a new measure with similar previous measures of the same name. Warning! A very high correlation may indicate duplication of measures.

Construct validity methods: Factor analysis (FA)
Factor analysis is a multivariate statistical technique used to group multiple variables into a few factors. In doing FA you hope to find clusters of variables that can be identified as new hypothetical factors.
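As an illustration (not from the slides), the sketch below runs an exploratory factor analysis with scikit-learn's FactorAnalysis on simulated scores in which six observed variables cluster into two underlying factors. Dedicated FA software would normally add rotation and fit statistics; this is only meant to show the clustering idea.

```python
# Minimal sketch of exploratory factor analysis on simulated test scores:
# six observed variables are generated from two latent factors, and the
# fitted loadings should recover that two-cluster structure.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500
verbal = rng.normal(size=n)       # latent factor 1
numeric = rng.normal(size=n)      # latent factor 2

# six observed scores: three load on each latent factor
X = np.column_stack([
    verbal + 0.5 * rng.normal(size=n),
    verbal + 0.5 * rng.normal(size=n),
    verbal + 0.5 * rng.normal(size=n),
    numeric + 0.5 * rng.normal(size=n),
    numeric + 0.5 * rng.normal(size=n),
    numeric + 0.5 * rng.normal(size=n),
])

fa = FactorAnalysis(n_components=2).fit(X)
print(np.round(fa.components_, 2))   # loadings: rows = factors, columns = variables
```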

Construct validity methods: Convergent and discriminant validity
The idea is that a test should correlate highly with other similar tests (convergent validity) and should correlate poorly with tests that are very dissimilar (discriminant validity).

Construct validity methods: Convergent and discriminant validity
Example: A newly developed test of motor coordination should correlate highly with other tests of motor coordination. It should also have a low correlation with tests that measure attitudes.
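A small sketch of this check (added for illustration; all scores are simulated): the new test's correlation with a similar motor test should be high, and its correlation with an attitude scale should be low.

```python
# Minimal sketch: checking convergent and discriminant validity by
# inspecting correlations of a new test with similar and dissimilar
# measures. Scores below are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(7)
n = 300
coordination = rng.normal(size=n)                          # latent trait

new_motor_test = coordination + 0.6 * rng.normal(size=n)
old_motor_test = coordination + 0.6 * rng.normal(size=n)   # similar measure
attitude_scale = rng.normal(size=n)                        # dissimilar measure

r_convergent = np.corrcoef(new_motor_test, old_motor_test)[0, 1]
r_discriminant = np.corrcoef(new_motor_test, attitude_scale)[0, 1]

print(f"convergent (should be high):  {r_convergent:.2f}")
print(f"discriminant (should be low): {r_discriminant:.2f}")
```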

Indices and interpretation of validity
• Validity coefficient
  - Magnitude of the coefficient
  - Factors affecting validity
• Coefficient of determination
• Standard error of estimation
• Regression analysis (prediction)

Indices and interpretation of validity: Validity coefficient
Definition: It is the correlation coefficient between the criterion variable and the predictor variable(s).

Indices and interpretation of validity: Validity coefficient
Differential validity refers to differences in the magnitude of the correlation coefficients for different groups of test-takers.

Indices and interpretation of validity: Magnitude of the validity coefficient
It is treated in the same way as the Pearson correlation coefficient!

Indices and interpretation of validity: Factors affecting validity
• Nature of the group
• Sample heterogeneity
• Criterion-predictor relationship
• Validity-reliability proportionality
• Criterion contamination
• Moderator variables

Factors affecting validity: Nature of the group
Check the consistency of the validity coefficient across subgroups which differ in any characteristic (e.g. age, gender, educational level).

Factors affecting validity: Sample heterogeneity
A wider range of scores results in a higher validity coefficient (the range restriction phenomenon).
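One standard way to quantify this effect, added here for reference rather than taken from the slide, is the usual correction for range restriction, where r is the correlation observed in the restricted sample, s the restricted and S the unrestricted standard deviation of the predictor:

```latex
% Correction for range restriction (a standard result, added for reference)
r_{\text{corrected}} \;=\;
\frac{r \,\dfrac{S}{s}}
     {\sqrt{\,1 - r^{2} + r^{2}\dfrac{S^{2}}{s^{2}}\,}}
```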

Factors affecting validity: Criterion-predictor relationship
There must be a linear relationship between the predictor and the criterion; otherwise, the Pearson correlation coefficient would be of no use!

Factors affecting validity: Validity-reliability proportionality
Reliability has a limiting influence on validity – we simply cannot validate an unreliable measure!
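Two standard psychometric results, added here for reference (they are not on the slide), make this limit explicit: the validity coefficient cannot exceed the square root of the test's reliability, and the correction for attenuation estimates the correlation that would be obtained with perfectly reliable measures (r_xx and r_yy are the reliabilities of the predictor and the criterion):

```latex
% Reliability as a ceiling on validity, and the correction for attenuation
r_{xy} \;\le\; \sqrt{r_{xx}}
\qquad\qquad
\hat{r}_{x_{\infty} y_{\infty}} \;=\; \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}
```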

Factors affecting validity: Criterion contamination
Get rid of bias by measuring the contaminating influences, and then correct for them statistically by using partial correlation.
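The first-order partial correlation referred to above has a standard form, reproduced here for reference: with predictor x, criterion y and contaminating variable z,

```latex
% First-order partial correlation of x and y, controlling for z
r_{xy \cdot z} \;=\;
\frac{r_{xy} - r_{xz}\, r_{yz}}
     {\sqrt{(1 - r_{xz}^{2})(1 - r_{yz}^{2})}}
```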

Factors affecting validity: Moderator variables
Variables such as age, gender and personality characteristics may help to predict performance for particular groups of test-takers only – keep them in mind!

Indices and interpretation of validity: Coefficient of determination
Indicates the proportion of variance in the criterion variable explained by the predictor. E.g. if r = 0.9, then r² = 0.81, i.e. 81% of the variance in the criterion is accounted for by the predictor.

Indices and interpretation of validity: Standard error of estimation (SE)
Treated just like the standard deviation. (True and predicted values of the criterion should differ by at most 1.96 SE at the 95% confidence level.)
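For reference (not shown on the slide), the standard error of estimate can be written in terms of the criterion's standard deviation s_y and the validity coefficient r_xy, and the 95% band around a predicted criterion score follows directly:

```latex
% Standard error of estimate and a 95% band around a predicted score \hat{Y}
SE_{\text{est}} \;=\; s_{y}\sqrt{1 - r_{xy}^{2}}
\qquad\qquad
\hat{Y} \pm 1.96\, SE_{\text{est}}
```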

Indices and interpretation of validity: Regression analysis
Mainly used to predict values of the criterion variable. If r is high, prediction is more accurate. Predicted values are obtained from the line of best fit.

Indices and interpretation of validity: Linear regression
It involves one criterion variable, but may involve one predictor variable (simple regression) or more than one (multiple regression).
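A minimal sketch of both cases (added for illustration; the predictor names, data and scikit-learn usage are illustrative only):

```python
# Minimal sketch: predicting a criterion from one predictor (simple
# linear regression) and from several predictors (multiple linear
# regression). Data and variable names are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 100
aptitude = rng.normal(50, 10, size=n)
interview = rng.normal(3, 1, size=n)
performance = 0.05 * aptitude + 0.4 * interview + rng.normal(0, 0.5, size=n)

# simple regression: one predictor
simple = LinearRegression().fit(aptitude.reshape(-1, 1), performance)

# multiple regression: two predictors
X = np.column_stack([aptitude, interview])
multiple = LinearRegression().fit(X, performance)

new_person = np.array([[62.0, 4.1]])           # aptitude score, interview rating
print("predicted performance:", multiple.predict(new_person)[0])
```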

Indices and interpretation of validity: Regression analysis
[Figure slides: simple linear regression and multiple linear regression.]

Reliability and Validity
• A valid test is always reliable (in order for a test to be valid, it needs to be reliable in the first place).
• A reliable test is not always valid.
• Validity is more important than reliability.
• To be useful, a measuring instrument (test, scale) must be both reasonably reliable and valid.
• Aim for validity first, and then try to make the test more reliable little by little, rather than the other way around.

Reliability and Validity
• If the test is unreliable, then test validity is undermined.
• If the test is reliable but not valid, then the test is not useful.
• If the test is unreliable and invalid, then the test is definitely NOT useful!
• If the test is reliable and valid, then the test can be used with good results.

Optimising reliability and validity
• The more questions the better (the number of test items; see the formula after this list)
• Ask questions several times in slightly different ways (homogeneity)
• Get as many people as you can into your program (sample size n)
• Get different kinds of people into your program (sample heterogeneity)
• Ensure a linear relationship between the test and the criterion (Pearson correlation coefficient)
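The first point can be made precise with the Spearman–Brown prophecy formula (a standard result, added here for reference): if a test with reliability r_11 is lengthened to k times its original number of comparable items, the expected reliability is

```latex
% Spearman–Brown prophecy formula: reliability of a test lengthened k times
r_{kk} \;=\; \frac{k\, r_{11}}{1 + (k - 1)\, r_{11}}
```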

Selecting and creating measures
• Define clearly the construct(s) that you want to measure
• Identify existing measures, particularly those with established reliability and validity
• Determine whether those measures will work for your purpose and identify any areas where you may need to create a new measure or add new questions
• Create the additional questions/measures
• Identify criteria that your measure should correlate with or predict, and develop procedures for assessing those criteria