UNIT II Research Design
Research Design
- It is simply a plan for a study.
- It can be called the blueprint for a study.
- It is like a plan made by an architect to build a house.
- It is used as a guide in collecting and analyzing the data.
Features of a good research design
- A good design is characterized by flexibility, appropriateness, efficiency, and economy.
- Generally, the design that minimizes bias and maximizes the reliability of the data collected and analyzed is considered a good design.
- The design that gives the smallest experimental error is regarded as the best design in many investigations.
- A good design yields maximum information and provides solutions to many research problems.
Concepts of research design
- Dependent and independent variables
- Extraneous variables
- Control
- Confounded relationship
- Research hypothesis
- Experimental and non-experimental hypotheses
Types of research design
- Exploratory, descriptive, and causal research are the major types.
- Exploratory research is used to seek insights into the general nature of a problem.
- No previous knowledge is required, and this research is more flexible, qualitative, and unstructured.
- The researcher in this method does not know "what he will find".
Examples for exploratory research
- "Sales are down because our prices are too high."
- "Our dealers are not doing a good job."
- "Our advertisement is weak," and so on.
- In these examples, very little information is available to point out the actual cause of the problem.
- Thus, the major purpose of this research is to identify the problem more specifically.
- It is used in the initial stages of research.
When is exploratory research ideal?
- To gain insight into a problem
- To generate new product ideas
- To pre-test a draft questionnaire
- It is appropriate for any problem about which very little is known.
- This research is the foundation for any future study.
Characteristics of exploratory research
- It is more flexible and very versatile.
- Experimentation is not a requirement.
- The cost incurred to conduct the study is low.
- Structured forms are not used for data collection.
Exploratory research methods
- There are four methods of this research:
- Literature search: trade journals, professional journals, statistical publications, etc.
- Experience survey: it is desirable to talk to persons who are well informed in the area being investigated.
- These people may be company executives or persons outside the organization.
- No questionnaire is required.
Focus group & case studies
- In a focus group, a small number of individuals are brought together to study and talk about some topic of interest.
- The discussion is coordinated by a moderator.
- The group usually consists of 8-12 persons.
- Case studies: analyzing a selected case sometimes gives insight into the problem.
- Case studies are well suited to exploratory research.
- The results of an investigation of case histories are always considered suggestive rather than conclusive.
Descriptive research
- The name itself reveals that it is essentially research used to describe something.
- It can describe the characteristics of a group such as customers, organizations, markets, etc.
- What this research cannot do is establish a cause-and-effect relationship.
- This is the distinct disadvantage of descriptive research.
Types of descriptive studies
- Longitudinal study: a study in which an event or occurrence is measured again and again over a period of time.
- This is also known as a "time series study".
- Through this study the researcher comes to know how the market changes over time.
- Cross-sectional study: divided into two types.
- Field study: includes an in-depth study.
- Test marketing is an example of a field study.
Contd.
- Field survey: large samples are a feature of this study.
- The biggest limitations of this survey are cost and time.
- It requires good knowledge of constructing a questionnaire, sampling techniques, etc.
Causal research
- Descriptive research will suggest a relationship but will not establish a cause-and-effect relationship.
- Example: the data collected may show that the number of people who own a car and their incomes have both risen over a period of time.
- Despite this, one cannot say that the increase in the number of cars is due to the rise in people's income.
- Perhaps improved road conditions or an increase in the number of banks offering car loans has caused the increase in car ownership.
Measurement
- We use some yardstick to determine weight, height, or some other physical attribute.
- We thus measure physical objects as well as abstract concepts.
- But measurement is relatively complex when it concerns qualitative or abstract phenomena.
- Thus, by measurement we mean the process of assigning numbers to objects or observations.
Contd.
- Measuring things such as social conformity, intelligence, or marital adjustment requires much closer attention than measuring physical weight, biological age, or a person's financial assets.
- In other words, it is not easy to measure properties like motivation to succeed, ability to withstand stress, and so on.
- A researcher has to be quite alert about this aspect while measuring the properties of objects or abstract concepts.
Sources of error in measurement
- Measurement should be precise and unambiguous in an ideal research study.
- As such, the researcher must be aware of the sources of error in measurement. The following are the possible sources:
- Respondent: transient factors like fatigue, boredom, anxiety, etc. may limit the respondent's ability to respond accurately and fully.
Error in measurement
- Situation: situational factors may also come in the way. Any condition that places a strain can have serious effects.
- Measurer: errors may creep in because of incorrect coding, faulty tabulation, or statistical miscalculations, particularly in the data-analysis stage.
- Instrument: errors may arise because of a defective measuring instrument. The use of complex words, poor printing, inadequate space for replies, response-choice omissions, etc. makes a measuring instrument defective and results in errors.
How to overcome errors?
- The researcher must know that correct measurement depends on successfully meeting all of the problems listed above.
- He must, to the extent possible, try to eliminate, neutralize, or otherwise deal with all possible sources of error so that the final results are not contaminated.
Tests of sound measurement
- Sound measurement must meet the tests of validity, reliability, and practicality.
- These three considerations should be used in evaluating a measurement tool.
- Test of validity: it indicates the degree to which an instrument measures what it is supposed to measure.
- There are three types of validity:
- Content validity: the extent to which a measuring instrument provides adequate coverage of the topic under study.
Contd.
- It can also be determined by using a panel of persons who judge how well the measuring instrument meets the standard, but there is no numerical way to express it.
- Criterion-related validity: relates to the ability to predict some outcome or estimate the existence of some current condition. The criterion must possess the following qualities: relevance, freedom from bias, reliability, and availability.
Construct validity
- It is the most complex and abstract type.
- To say a measure possesses construct validity, we associate a set of other propositions with the results received from using our measurement instrument.
- If measurements on our devised scale correlate in the predicted way with these other propositions, we can conclude that there is some construct validity.
- Finally, if the above criteria and tests are met, we may say that our measuring instrument is valid and will result in correct measurement.
Tests of reliability
- Reliability is another important test of sound measurement.
- A measuring instrument is reliable if it provides consistent results.
- Reliability is not as valuable as validity, but it is easier to assess.
- Two aspects of reliability, i.e. stability and equivalence, deserve special mention.
- Stability: securing consistent results with repeated measurements using the same instrument.
- Equivalence: considers how much error may be introduced by different samples of the items being studied.
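The stability aspect described above is often checked numerically by correlating scores from two administrations of the same instrument (test-retest). A minimal sketch, with entirely hypothetical scores for five respondents; a correlation near 1 would suggest stable measurement:

```python
# Hypothetical test-retest data: the same five respondents measured twice
# with the same instrument. A high correlation between the two occasions
# suggests stability (one aspect of reliability).
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [12, 15, 11, 18, 14]   # scores at the first administration
time2 = [13, 15, 10, 17, 15]   # scores at the second administration
r = pearson(time1, time2)
print(round(r, 2))             # 0.93 — fairly stable for these made-up scores
```

The threshold for "consistent enough" is a judgment call for the researcher; the computation only quantifies agreement between the two occasions.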
Tests of practicality
- The measuring instrument ought to be practical.
- It should be economical, convenient, and interpretable.
- Economy deals with the trade-off between the ideal research project and the budget that can be afforded.
- Convenience suggests that the measuring instrument should be easy to administer.
Technique of developing measurement tools
- It involves four stages:
- Concept development
- Specification of concept dimensions
- Selection of indicators
- Formation of an index
Concept development
- The researcher should arrive at an understanding of the major concepts pertaining to his study.
- This is more apparent in theoretical studies than in pragmatic research.
Dimensions of concepts
- This stage requires the researcher to specify the dimensions of the concepts developed in the first stage.
- For instance, when thinking about the image of a certain company, one may think of several dimensions such as product reputation, customer treatment, corporate leadership, sense of social responsibility, and so forth.
Selection of indicators
- Indicators are specific questions, scales, or other devices by which a respondent's knowledge, opinion, expectation, etc. are measured.
- The use of more than one indicator gives stability to the scores and also improves their validity.
Formation of an index
- The last step is combining the various indicators into an index.
- We may need to combine them into a single index.
- This can be done by assigning scale values to the responses and then summing the corresponding scores.
- Such an overall index provides a better measurement tool than a single indicator.
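The "assign scale values, then sum" step above is simple arithmetic. A minimal sketch, where the indicator names ("pay", "work_environment", "growth_opportunities") and the 1-5 scale values are hypothetical:

```python
# Hypothetical example: three indicators of a "job satisfaction" concept,
# each already converted to a scale value from 1 (low) to 5 (high).
# The overall index is simply the sum of the indicator scores.
responses = {
    "pay": 4,
    "work_environment": 3,
    "growth_opportunities": 5,
}

index = sum(responses.values())   # 4 + 3 + 5
print(index)                      # 12
```

Using several indicators this way means a quirk in any single question has less influence on the final index, which is the stability benefit mentioned above.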
Scaling
- We should study procedures that enable us to measure abstract concepts more accurately; this brings us to the study of scaling techniques.
- Scaling describes the procedures for assigning numbers to various degrees of opinion, attitude, and other concepts.
- This can be done in two ways:
- 1) making a judgment about some characteristic of an individual and then placing him directly on a scale;
- 2) constructing questionnaires in such a way that the score of an individual's responses assigns him a place on a scale.
Contd.
- A scale is a continuum consisting of a highest point and a lowest point, along with intermediate points between these two extremes.
- The scale points are related to each other: the first point is the highest, the second indicates a higher degree than the third, the third indicates a higher degree than the fourth, and so on.
Scale construction techniques
- The following are the five main techniques by which scales can be developed.
- Arbitrary approach: an approach in which the scale is developed on an ad hoc basis.
- This is the most widely used approach; the scales are assumed to measure the concepts for which they have been designed.
- First the researcher collects a few statements which he believes are appropriate for the topic, and then people are asked to check them.
- Such scales can be developed very easily, quickly, and at relatively little expense.
- But we have no objective evidence that such scales measure the concepts for which they were designed.
- Consensus approach (differential scales or Thurstone-type scales): a panel of judges evaluates the items in terms of whether they are relevant to the topic area.
Scale construction techniques (contd.)
- Item analysis approach (summated scales or Likert-type scales): Likert scales are developed through item analysis, comparing persons whose total scores are high with those whose scores are low.
- Here the respondent is asked to respond to each statement in terms of several degrees, usually five (though 3 or 7 may also be used).
- Advantages: relatively easy to construct, as it can be done without a panel of judges.
- Disadvantages: we can only examine whether respondents are favorable or unfavorable; how much more or less favorable cannot be determined.
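Likert (summated) scoring can be sketched in a few lines. In this hypothetical example the statement names and responses are invented, the scale has five degrees (1 = strongly disagree ... 5 = strongly agree), and negatively worded statements are reverse-scored before summing, a common convention:

```python
# Sketch of Likert (summated) scoring on a five-point scale.
# Negatively worded items are reverse-scored: response r becomes (5 + 1 - r),
# so "strongly agree" with a negative statement counts as unfavorable.
POINTS = 5

def likert_score(responses, reversed_items):
    """Sum responses, reverse-scoring the negatively worded items."""
    total = 0
    for item, value in responses.items():
        total += (POINTS + 1 - value) if item in reversed_items else value
    return total

# Hypothetical responses from one respondent:
responses = {
    "service_is_prompt": 4,     # agree
    "staff_is_courteous": 5,    # strongly agree
    "prices_are_too_high": 2,   # disagree (negatively worded statement)
}
score = likert_score(responses, reversed_items={"prices_are_too_high"})
print(score)   # 4 + 5 + (6 - 2) = 13
```

The summed score only orders respondents as more or less favorable, which is exactly the limitation noted above: a score of 13 versus 10 does not say *how much* more favorable one respondent is.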
Guttman's scalogram analysis or cumulative scales
- It also contains a series of statements with which a respondent agrees or disagrees.
- The special feature is that the statements form a cumulative series, i.e. they are related to one another.
- Scalogram analysis refers to the procedure for determining whether a set of items forms a unidimensional scale.
Example
- Let's say you came up with the following statements:
- I believe that this country should allow more immigrants in.
- I would be comfortable if a new immigrant moved next door to me.
- I would be comfortable with new immigrants moving into my community.
- It would be fine with me if new immigrants moved onto my block.
Rate the items
- Next, we would want a group of judges to rate the statements or items in terms of how favorable they are to the concept of immigration.
- They would give a Yes if the item was favorable toward immigration and a No if it was not.
- Notice that we are not asking the judges whether they personally agree with the statement.
- Instead, we're asking them to make a judgment about how the statement is related to the construct of interest.
Administering the Scale
- Once you've selected the final scale items, it's relatively simple to administer the scale.
- You simply present the items and ask the respondent to check the items with which they agree.
- Each scale item has a scale value associated with it (obtained from the scalogram analysis).
- To compute a respondent's scale score, we simply sum the scale values of every item they agree with.
- In our example, the final value is an indication of their attitude toward immigration.
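The scoring rule described above (sum the scale values of the agreed items) can be sketched as follows. The item keys are shortened versions of the immigration statements from the example, and the scale values are assumptions standing in for what a real scalogram analysis would produce:

```python
# Sketch of cumulative (Guttman) scoring. In a cumulative series, agreeing
# with a "harder" item typically implies agreeing with the easier ones.
# Scale values here are hypothetical, ordered from least to most demanding.
scale_values = {
    "allow_more_in_country": 1,
    "comfortable_in_community": 2,
    "comfortable_on_block": 3,
    "comfortable_next_door": 4,
}

def guttman_score(agreed_items):
    """Sum the scale values of every item the respondent agrees with."""
    return sum(scale_values[item] for item in agreed_items)

# A respondent who agrees only with the two least demanding statements:
print(guttman_score(["allow_more_in_country", "comfortable_in_community"]))  # 3
```

If the items truly form a unidimensional cumulative scale, a respondent's total score also tells you *which* items they agreed with, which is the property the scalogram analysis checks for.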
Four popular scales
- Four popular scales in business research are:
- Nominal scales
- Ordinal scales
- Interval scales
- Ratio scales
Measurement and Scaling (4)
- A nominal scale is the simplest of the four scale types; the numbers or letters assigned to objects serve merely as labels for identification or classification.
- Examples:
- Males = 1, Females = 2
- Sales Zone A = Islamabad, Sales Zone B = Rawalpindi
- Drink A = Pepsi Cola, Drink B = 7-Up, Drink C = Mirinda
Measurement and Scaling (5)
- An ordinal scale is one that arranges objects or alternatives according to their magnitude.
- Examples:
- Career Opportunities = Moderate, Good, Excellent
- Investment Climate = Bad, Inadequate, Fair, Good, Very good
- Merit = A grade, B grade, C grade, D grade
- A problem with ordinal scales is that the difference between categories is hard to quantify; i.e., excellent is better than good, but how much better is excellent?
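The contrast between the two scale types above can be made concrete in code. A small sketch using the slides' own examples; the numeric codes are arbitrary assignments, not part of any standard:

```python
# Nominal: numbers are mere labels for classification.
# Arithmetic or ordering on these codes is meaningless (2 is not "more" than 1).
gender = {"male": 1, "female": 2}

# Ordinal: values carry rank order, so comparisons of magnitude make sense...
merit = {"D grade": 1, "C grade": 2, "B grade": 3, "A grade": 4}

print(merit["A grade"] > merit["B grade"])   # True: A ranks above B

# ...but differences between ordinal codes are NOT meaningful:
# merit["A grade"] - merit["B grade"] == 1 does not mean A is "1 unit" better,
# which is exactly the quantification problem noted above.
```

Interval and ratio scales (the other two types listed) would additionally make differences, and for ratio scales ratios, meaningful.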