Comparing several observed distributions without having to worry about our sample sizes: Fisher’s exact test!
“Blah blah blah Flo! You talk nice and pretty, …”
STOP! Enough! I got it!
Ok, so we have low sample sizes and/or low expected counts. No worries, we can use Fisher's exact test. It serves the same purpose as the chi-square test of independence, but is not sensitive to the size constraints mentioned earlier. Here's how it works:
males <- c(2, 1, 1)
females <- c(1, 2, 2)
juveniles <- c(0, 0, 3)
As with the chi-square test, we need to put our contingency table into a matrix:
crappy_data <- rbind(males, females, juveniles)
3, 2, 1, test!
fisher.test(crappy_data)
	Fisher's Exact Test for Count Data

data:  crappy_data
p-value = 0.5909
alternative hypothesis: two.sided
And here we have a non-significant result: with p = 0.59, we cannot reject the null hypothesis that the rows and columns of our table are independent.
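Where does that p-value come from? For a table with fixed row and column totals, Fisher's test computes the exact (hypergeometric) probability of every possible table with those same margins, then sums the probabilities of all tables at least as unlikely as the one we observed. Here is a rough sketch of that enumeration for our 3x3 table, written in Python for illustration (the function name `prob` and the brute-force looping are ours, not any library's; `fisher.test` in R uses a much cleverer algorithm to the same end):

```python
from math import factorial

# Same 3x3 table as in the R example (rows: males, females, juveniles)
observed = [[2, 1, 1],
            [1, 2, 2],
            [0, 0, 3]]

row_sums = [sum(r) for r in observed]        # [4, 5, 3]
col_sums = [sum(c) for c in zip(*observed)]  # [3, 3, 6]
N = sum(row_sums)                            # 12

def prob(table):
    """Exact hypergeometric probability of a table with our fixed margins."""
    num = 1
    for s in row_sums + col_sums:
        num *= factorial(s)
    den = factorial(N)
    for row in table:
        for x in row:
            den *= factorial(x)
    return num / den

p_obs = prob(observed)
p_value = 0.0

# Enumerate every 3x3 table with the same margins: four cells are free,
# the other five are forced by the row/column totals.
for a00 in range(min(row_sums[0], col_sums[0]) + 1):
    for a01 in range(min(row_sums[0] - a00, col_sums[1]) + 1):
        a02 = row_sums[0] - a00 - a01
        for a10 in range(min(row_sums[1], col_sums[0] - a00) + 1):
            for a11 in range(min(row_sums[1] - a10, col_sums[1] - a01) + 1):
                a12 = row_sums[1] - a10 - a11
                a20 = col_sums[0] - a00 - a10
                a21 = col_sums[1] - a01 - a11
                a22 = row_sums[2] - a20 - a21
                if min(a02, a12, a20, a21, a22) < 0:
                    continue  # not a valid table
                t = [[a00, a01, a02], [a10, a11, a12], [a20, a21, a22]]
                p = prob(t)
                if p <= p_obs * (1 + 1e-7):  # at least as "extreme" as observed
                    p_value += p

print(round(p_value, 4))
```

Summing only the tables whose probability is no larger than the observed one (with a tiny tolerance for floating-point ties) is the same two-sided definition R uses, so this should reproduce the 0.5909 reported by fisher.test above.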