Validity and Reliability of a Research Instrument Adv. Research Methods May 15, 2014 Department of RS and GISc. Institute of Space Technology, Karachi
Concept of Validity
Why is it necessary?
- To establish the quality of your results, by establishing the appropriateness, quality and accuracy of the procedures you adopted for finding answers to your research questions
At what stage?
- Inaccuracies may be introduced at any stage, so the concept of validity can be applied to the research process as a whole or to any of its steps
Two Perspectives
1. Is the research investigation providing answers to the research questions for which it was undertaken?
2. If so, is it providing these answers using appropriate methods and procedures?
Example: Validity Check
- A study is designed to ascertain the health needs of a community
- In the interview schedule, most of the questions relate to the attitude of the study population towards the health services being provided to them
- What is your aim? What are you actually finding through the interview schedule?
- Is this instrument measuring what it is designed to measure?
Validity of a Measurement Instrument
In terms of measurement procedures, validity is the ability of an instrument to measure what it is designed to measure. "Are we measuring what we think we are measuring?" (Kerlinger 1973)
Types of Validity in Quantitative Research
1. Face and content validity
2. Concurrent and predictive validity
3. Construct validity
Content validity
- Content validity is the extent to which a measuring instrument provides adequate coverage of the topic under study
- It rests on the logical link between the questions and the objectives of the study; establishing this link is called face validity
Advantages and disadvantages
- Simple to apply
- Subjective: a definite conclusion is hard to derive
- No numerical way to express it
- Different people may have different opinions
Concurrent and predictive validity
- Concurrent validity: judged by how well an instrument compares with a second assessment done concurrently
- Predictive validity: judged by the degree to which an instrument can forecast an outcome
Construct validity
- Based on statistical procedures
- Determined by ascertaining the contribution of each construct to the total variance observed in a phenomenon
- The most complex type of validity to establish
- Convergent validity: demonstrated by a strong relationship between the scores obtained from two different methods of measuring the same construct
- Divergent validity: demonstrated by a weak relationship between the scores obtained from two non-overlapping constructs
Concept of Reliability
- Degree of accuracy or precision in the measurements made by a research instrument
- A scale or test is reliable to the extent that repeated measurements made by it under constant conditions give the same results
Factors affecting reliability Wording of questions Physical setting Respondent’s mood Interviewer’s mood Nature of interaction Regression effect of an instrument
Reliability Tests
External consistency procedures
- Test/retest
- Parallel forms of the same test
Internal consistency procedures
- The split-half technique
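The split-half technique listed above can be sketched in code: the items are split into two halves (here odd vs. even items), each respondent's half-scores are correlated, and the Spearman-Brown formula corrects the correlation for full test length. The 6-item scale and the respondent scores are hypothetical.

```python
# Sketch of the split-half reliability technique (hypothetical data).
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """item_scores: one list of item scores per respondent."""
    odd  = [sum(row[0::2]) for row in item_scores]  # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_scores]  # items 2, 4, 6, ...
    r = pearson(odd, even)
    return 2 * r / (1 + r)  # Spearman-Brown correction for full length

# Hypothetical scores for 5 respondents on a 6-item scale
scores = [
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 3, 4, 3, 3],
    [1, 2, 1, 1, 2, 2],
]
print(round(split_half_reliability(scores), 3))
```

The same `pearson` helper would serve the test/retest procedure: correlate the scores from the two administrations of the same instrument.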
Ethics in conducting research
It is important to ensure that research is:
- not affected by the self-interest of any party
- not carried out in a way that harms any party
Ethical issues Ethical Behavior: “in accordance with principles of conduct that are considered correct, especially those of a given profession or group” (Collins Dictionary) Code of conduct: set of rules/guidelines outlining the responsibilities of or proper practices for an individual or organization.
Stakeholders in research
- Research participants: those with direct or indirect involvement, those affected by the research, those from whom information is collected, etc.
- The researcher
- The funding agency
Issues to consider concerning research participants
Collecting information
- Obtain respondents' informed consent
- Wasting respondents' time is unethical; this is the case when you cannot justify the relevance of the research you are conducting
Providing incentives
- It is ethical to provide incentives to respondents for sharing information (a small gift after data collection, not before!)
Seeking sensitive information
- Tell respondents clearly what type of information is sought and give them sufficient time to decide whether they want to share it
Possibility of causing harm to participants
- Harm: any discomfort, anxiety, harassment, invasion of privacy, or demeaning or dehumanizing procedure
- The extent of harm (if not avoidable) should not be greater than that ordinarily encountered in daily life
Maintaining confidentiality
- Sharing information about a respondent with others for purposes other than research is unethical
- Information provided by respondents should be kept anonymous
Issues to consider relating to the researcher
Avoiding bias
- Bias is a deliberate attempt either to hide what you have found in your study or to highlight something disproportionately to its true existence
Using inappropriate research methodology
- It is unethical to deliberately use a method or procedure you know to be inappropriate in order to prove or disprove something, e.g. selecting a highly biased sample or using an invalid instrument
Drawing wrong conclusions / incorrect reporting
- Reporting findings in a way that changes them to serve your own or someone else's interest is unethical
Inappropriate use of information
- Using information in a way that directly or indirectly affects respondents adversely is unethical
- Tell respondents about the potential use of the information (including the possibility of its being used against some of them) and let them decide whether they want to participate
Issues relating to the sponsoring organization
Restrictions imposed by the funding/sponsoring organization
- There may be direct or indirect controls exercised by the sponsoring agency
- It may select the methodology, prohibit the publication of 'what was found', or impose other restrictions that stand in the way of obtaining and disseminating accurate information
- Both the imposition and the acceptance of such controls/restrictions are unethical, as they may amount to tailoring research findings to the sponsoring agency's vested interest
Misuse of information
- It is unethical to use research to justify management decisions when the research findings do not support them