Using Statistics in Research
Psych 231: Research Methods in Psychology

Announcements
• Final drafts of class project due in labs this week
  – Remember to turn in the rough draft with the final draft
  – Also turn in the checklist from the PIP packet

“Generic” statistical test
• Tests the question:
  – Are there differences between groups due to a treatment?
• Two possibilities in the “real world”:
  – H0 is true (no treatment effect)
[Diagram: one population, from which two samples with means X̄A and X̄B are drawn]

“Generic” statistical test
• Tests the question:
  – Are there differences between groups due to a treatment?
• Two possibilities in the “real world”:
  – H0 is true (no treatment effect)
  – H0 is false (there is a treatment effect)
[Diagram: two populations, one per condition, each producing a sample mean (X̄A and X̄B)]

“Generic” statistical test
• Why might the samples (X̄A and X̄B) be different? (What is the source of the variability between groups?)
  – ER: Random sampling error
  – ID: Individual differences (if a between-subjects factor)
  – TR: The effect of a treatment

“Generic” statistical test
• The generic test statistic is a ratio of sources of variability:
  – ER: Random sampling error
  – ID: Individual differences
  – TR: The effect of a treatment

  Computed test statistic = Observed difference / Difference from chance = (TR + ID + ER) / (ID + ER)
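The ratio above can be illustrated with a small simulation. This is a sketch under assumed values (population mean 50, SD 10, a 5-point treatment effect); the numbers are hypothetical, not taken from the slides:

```python
import random
import statistics

random.seed(42)

def simulate(treatment_effect, n=30, sd=10):
    """Draw two samples; group B gets the treatment effect added."""
    a = [random.gauss(50, sd) for _ in range(n)]                     # ID + ER only
    b = [random.gauss(50 + treatment_effect, sd) for _ in range(n)]  # TR + ID + ER
    return statistics.mean(b) - statistics.mean(a)

# Under H0 (no treatment), the observed difference reflects only chance (ID + ER).
null_diffs = [simulate(0) for _ in range(1000)]
# With a real treatment effect, TR adds to the observed difference.
effect_diffs = [simulate(5) for _ in range(1000)]

print(round(statistics.mean(null_diffs), 2))    # near 0
print(round(statistics.mean(effect_diffs), 2))  # near 5
```

The average difference under H0 hovers near zero, while adding a treatment effect shifts it by roughly the size of that effect.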

“Generic” statistical test
• The generic test statistic distribution
  – To reject H0, you want a computed test statistic that is large
    • A large value reflects a large treatment effect (TR)
  – What’s large enough? The alpha level gives us the decision criterion
[Diagram: distribution of the test statistic; the α-level determines where the rejection boundaries go]

“Generic” statistical test
• The generic test statistic distribution
  – To reject H0, you want a computed test statistic that is large
    • A large value reflects a large treatment effect (TR)
  – What’s large enough? The alpha level gives us the decision criterion
[Diagram: distribution of the test statistic with two regions marked: “Reject H0” (in the tails) and “Fail to reject H0”]
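A minimal sketch of the decision rule, assuming a two-tailed z-test with α = 0.05 (for a t-test the boundaries would also depend on degrees of freedom):

```python
from statistics import NormalDist

alpha = 0.05
# Two-tailed test: the rejection region is split between both tails,
# so each tail gets alpha / 2 of the distribution.
critical = NormalDist().inv_cdf(1 - alpha / 2)
print(round(critical, 2))  # 1.96

def decide(test_statistic, critical_value):
    """Reject H0 only if the statistic falls beyond the boundary."""
    return "Reject H0" if abs(test_statistic) > critical_value else "Fail to reject H0"

print(decide(2.5, critical))   # Reject H0
print(decide(1.2, critical))   # Fail to reject H0
```

Lowering α moves the boundaries further out, so a larger computed statistic is required to reject H0.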

“Generic” statistical test
• Things that affect the computed test statistic
  – Size of the treatment effect
    • The bigger the effect, the bigger the computed test statistic
  – Difference expected by chance (sampling error)
    • Sample size
    • Variability in the population

Effect of sample size on sampling error
[Diagram sequence: a population distribution with its mean marked; samples of size n = 1, n = 2, and n = 10 are drawn from it; for each sample, the sampling error is the difference between the population mean and the sample mean]
• Generally, as the sample size increases, the sampling error decreases

Effect of sample size on sampling error
• Typically, the narrower the population distribution, the narrower the range of possible samples, and the smaller the “chance” (sampling error)
[Diagram: two population distributions, one with small variability and one with large variability]
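Both points, that larger samples shrink sampling error and that narrower populations shrink it too, can be checked with a quick simulation. The population mean and SDs here are hypothetical:

```python
import random
import statistics

random.seed(0)

def mean_abs_sampling_error(n, pop_sd, pop_mean=100, reps=2000):
    """Average |population mean - sample mean| over many samples of size n."""
    errors = []
    for _ in range(reps):
        sample = [random.gauss(pop_mean, pop_sd) for _ in range(n)]
        errors.append(abs(pop_mean - statistics.mean(sample)))
    return statistics.mean(errors)

# Larger samples -> smaller sampling error (same population variability).
for n in (1, 2, 10):
    print(n, round(mean_abs_sampling_error(n, pop_sd=15), 2))

# Narrower population -> smaller sampling error (same n).
print(round(mean_abs_sampling_error(10, pop_sd=5), 2))
print(round(mean_abs_sampling_error(10, pop_sd=30), 2))
```

The error shrinks roughly in proportion to 1/√n, and scales directly with the population SD.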

Some inferential statistical tests
• 1 factor with two groups
  – t-tests
    • Between groups: 2 independent samples
    • Within groups: repeated measures samples (matched, related)
• 1 factor with more than two groups
  – Analysis of Variance (ANOVA) (either between groups or repeated measures)
• Multi-factorial
  – Factorial ANOVA

t-test
• Design
  – 2 separate experimental conditions
  – Degrees of freedom
    • Based on the size of the sample and the kind of t-test
• Formula:

  t = Observed difference / Difference expected by chance = (X̄1 − X̄2) / (difference expected by chance)

  – The denominator is based on sampling error; its computation differs for between- and within-subjects t-tests
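A hand computation of the between-subjects (independent-samples) version of the formula above, using made-up data. The pooled-variance denominator shown here is one standard way to estimate the difference expected by chance:

```python
import math
import statistics

def independent_t(sample1, sample2):
    """Independent-samples t: observed difference / difference expected by chance."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = statistics.mean(sample1), statistics.mean(sample2)
    # Pooled variance combines the variability of both samples.
    sp2 = ((n1 - 1) * statistics.variance(sample1) +
           (n2 - 1) * statistics.variance(sample2)) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))  # standard error of the difference
    df = n1 + n2 - 2                         # degrees of freedom for this kind of t-test
    return (m1 - m2) / se, df

treatment = [34, 38, 41, 35, 39, 40]  # hypothetical scores
control   = [28, 30, 27, 33, 29, 31]
t, df = independent_t(treatment, control)
print(round(t, 2), df)  # 5.67 10
```

This would be reported as t(10) = 5.67; the df comes from the two sample sizes, matching the slide's point that df depends on the sample size and the kind of t-test.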

t-test
• Reporting your results
  – The observed difference between conditions
  – Kind of t-test
  – Computed t-statistic
  – Degrees of freedom for the test
  – The “p-value” of the test
• “The mean of the treatment group was 12 points higher than that of the control group. An independent-samples t-test yielded a significant difference, t(24) = 5.67, p < 0.05.”
• “The mean score on the post-test was 12 points higher than on the pre-test. A repeated-measures t-test demonstrated that this difference was significant, t(12) = 5.67, p < 0.05.”

Analysis of Variance
• Designs
  – More than two groups (X̄A, X̄B, X̄C)
    • 1-factor ANOVA, factorial ANOVA
    • Both within- and between-groups factors
• Test statistic is an F-ratio
• Degrees of freedom
  – Several to keep track of
  – The number of them depends on the design

Analysis of Variance
• More than two groups (X̄A, X̄B, X̄C)
  – Now we can’t just compute a simple difference score, since there is more than one difference
  – So we use variance instead of simply the difference
    • Variance is essentially an average difference

  F-ratio = Observed variance / Variance from chance
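The F-ratio can be computed by hand for a small 1-factor example. The scores below are invented for illustration:

```python
import statistics

def one_way_anova_f(*groups):
    """F = between-groups variance (treatment + chance) / within-groups variance (chance)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = statistics.mean(x for g in groups for x in g)
    # Between-groups: how far each group mean falls from the grand mean.
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    # Within-groups: variability of scores around their own group mean.
    ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n_total - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

group_a = [12, 14, 11, 13]  # hypothetical scores for three conditions
group_b = [25, 27, 24, 26]
group_c = [27, 29, 26, 28]
f, df1, df2 = one_way_anova_f(group_a, group_b, group_c)
print(round(f, 1), df1, df2)  # 159.2 2 9
```

Because group means far apart inflate only the numerator, a big F says the between-groups variability is much larger than chance alone would produce.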

1-factor ANOVA
• 1 factor, with more than two levels (X̄A, X̄B, X̄C)
  – Now we can’t just compute a simple difference score, since there is more than one difference
    • A − B, B − C, and A − C

1-factor ANOVA
• Null hypothesis:
  – H0: all the groups are equal (X̄A = X̄B = X̄C)
• Alternative hypotheses:
  – HA: not all the groups are equal ← The ANOVA tests this one!
    • X̄A ≠ X̄B ≠ X̄C
    • X̄A = X̄B ≠ X̄C    ← Do further tests to pick between these
    • X̄A ≠ X̄B = X̄C
    • X̄A = X̄C ≠ X̄B

1-factor ANOVA
• Planned contrasts and post-hoc tests:
  – Further tests used to rule out the different alternative hypotheses
    • X̄A ≠ X̄B ≠ X̄C
    • X̄A = X̄B ≠ X̄C
    • X̄A ≠ X̄B = X̄C
    • X̄A = X̄C ≠ X̄B
  – Test 1: A ≠ B; Test 2: A ≠ C; Test 3: B = C

1-factor ANOVA
• Reporting your results
  – The observed differences
  – Kind of test
  – Computed F-ratio
  – Degrees of freedom for the test
  – The “p-value” of the test
  – Any post-hoc or planned comparison results
• “The mean score of Group A was 12, Group B was 25, and Group C was 27. A 1-way ANOVA was conducted, and the results yielded a significant difference, F(2, 25) = 5.67, p < 0.05. Post-hoc tests revealed that the differences between Groups A and B and between Groups A and C were statistically reliable (respectively t(1) = 5.67, p < 0.05 and t(1) = 6.02, p < 0.05). Groups B and C did not differ significantly from one another.”

Factorial ANOVAs
• We covered much of this in our experimental design lecture
• More than one factor
  – Factors may be within or between
  – Overall design may be entirely within, entirely between, or mixed
• Many F-ratios may be computed
  – An F-ratio is computed to test the main effect of each factor
  – An F-ratio is computed to test each of the potential interactions between the factors
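The set of F-ratios (one main effect per factor, plus one per interaction) can be enumerated: for k factors there are 2^k − 1 tests. A small sketch, with factor names chosen arbitrarily:

```python
from itertools import combinations

def f_ratios(factors):
    """List every F-ratio a factorial ANOVA computes: one per main effect,
    one per interaction (every subset of two or more factors)."""
    tests = []
    for size in range(1, len(factors) + 1):
        for combo in combinations(factors, size):
            tests.append(" x ".join(combo))
    return tests

# A 2 x 2 design (two factors) yields three F-ratios.
print(f_ratios(["A", "B"]))            # ['A', 'B', 'A x B']
# A three-factor design yields seven: 3 main effects + 4 interactions.
print(len(f_ratios(["A", "B", "C"])))  # 7
```

This is why the number of degrees-of-freedom pairs to keep track of grows quickly with the number of factors.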

Factorial ANOVA
• Reporting your results
  – The observed differences
    • Because there may be a lot of these, you may present them in a table instead of directly in the text
  – Kind of design
    • e.g., “2 x 2 completely between-subjects factorial design”
  – Computed F-ratios
    • You may see separate paragraphs for each factor, and for the interactions
  – Degrees of freedom for the test
    • Each F-ratio will have its own set of dfs
  – The “p-value” of the test
    • You may want to just say “all tests were tested with an alpha level of 0.05”
  – Any post-hoc or planned comparison results
    • Typically only theoretically interesting comparisons are presented