This succinct and jargon-free introduction to effect sizes gives students and researchers the tools they need to interpret the practical significance of their results. Using a class-tested approach that includes numerous examples and step-by-step exercises, it introduces and explains three of the most important issues relating to the practical significance of research results: the reporting and interpretation of effect sizes (Part I), the analysis of statistical power (Part II), and the meta-analytic pooling of effect size estimates drawn from different studies (Part III). The book concludes with a handy list of recommendations for those actively engaged in or currently preparing research projects.
Paul Ellis provides a very accessible introduction to several basic topics that are vital to good research practice. To encourage readers who enjoyed this book to dive deeper into the literature, I feel the need to point out that the content is slightly out of date on several counts; two points that immediately come to mind are (1) the uncritical presentation (or even outright recommendation?) of the flawed "fail-safe N" method (see Becker, 2005; Ioannidis, 2008; Sutton, 2009; Ferguson & Heene, 2012 for criticisms, or the following Cochrane handbook link for a brief summary: https://handbook-5-1.cochrane.org/cha...) and (2) the lack of any mention of viable methods for providing support for null hypotheses (e.g., equivalence testing or Bayesian methods).
That should, however, not take away from the fact that the text offers an easily comprehensible treatment of the subject matter, and I would recommend it to anyone who happens to be searching for a non-technical introduction to the topics that it covers.
Ferguson, C. J., and Heene, M. (2012). A vast graveyard of undead theories: publication bias and psychological science's aversion to the null. Perspect. Psychol. Sci. 7, 555–561. doi: 10.1177/1745691612459059

Ioannidis, J. P. A. (2008). Interpretation of tests of heterogeneity and bias in meta-analysis. J. Eval. Clin. Pract. 14, 951–957. doi: 10.1111/j.1365-2753.2008.00986.x

Sutton, A. J. (2009). "Publication bias," in The Handbook of Research Synthesis and Meta-Analysis, eds H. Cooper, L. Hedges, and J. Valentine (New York, NY: Russell Sage Foundation), 435–452.
As someone who isn't great when it comes to statistics, I found this book remarkably easy to understand, really useful and often actually interesting. I don't think I've ever said that about a stats book before! I'm planning on doing a meta-analysis, and this provided some good background and a lot of things to think about when considering experimental papers. It kind of throws the p values and statistical significance I've always thought were the be-all and end-all out the window, but by the sound of it that's no bad thing...
An excellent primer or review of statistics for studies. The writer gets to the point and avoids the basics of stats, instead homing in on the problem of low-power studies causing type II errors and guiding the reader on how to balance expected effect size, power, alpha, measurement error and the availability bias of published results.
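The balancing act this reviewer describes can be made concrete. Here is a minimal sketch of my own (not code from the book) of the standard normal-approximation formula for the sample size a two-group comparison needs, given an expected effect size (Cohen's d), alpha and desired power, using only the Python standard library:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample test of Cohen's d,
    via the normal-approximation formula n = 2 * (z_a + z_b)^2 / d^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = z.inv_cdf(power)           # power requirement
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(sample_size_per_group(0.5))  # "medium" effect -> 63 per group
print(sample_size_per_group(0.2))  # "small" effect  -> 393 per group
```

With the conventional alpha = .05 and power = .80, halving the expected effect size roughly quadruples the required sample, which is exactly why so many of the published studies the book discusses end up underpowered.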
A must-read for anyone who took half a dozen probability/stats courses in college and is interested in statistics but never learned about statistical power, using effect sizes instead of significance tests, or meta-analysis.
The Essential Guide to Effect Sizes is a highly approachable text covering three advanced areas in experimental statistics. The first two, "effect size" and "statistical power," are going to be important in every paper you might plan to write. Meta-analysis is by all accounts esoteric, but again the text is able to dispel the mystery of how statistics can be used in meta-analysis.
Here are some issues covered. Some spoilers :-)
What's more important than statistical significance? Practical significance!
What's common to the best of journals and the worst of journals? The editors are biased toward publishing reports of inflated effect sizes based on underpowered experiments.
How to lie with statistics (for smarties)? Meta-analysis can be used to manipulate the results of other research to reach a conclusion that fits the author's bias!
To wrap up, this text is aimed at readers of many levels, including social scientists!
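To put a number on the "practical significance" teaser above: the book's central quantity is the standardized effect size, of which Cohen's d is the most common example. A minimal sketch (my own illustration, not code from the book) for two group means:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: the mean difference expressed in units of the
    pooled standard deviation, so results are comparable across studies."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Two groups with identical spread, means one unit apart:
d = cohens_d([2, 3, 4, 5, 6], [1, 2, 3, 4, 5])
print(round(d, 2))  # 0.63, a "medium" effect by Cohen's benchmarks
```

Unlike a p value, d does not shrink or grow just because the sample got larger, which is what makes it a measure of practical rather than statistical significance.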
This is a great read for anyone into statistical analysis. If you're well versed in effect sizes, some of it will be repetitive, but keep reading because the author makes some wonderful points that are important at all levels. If you're not well versed, you will benefit from the entire book.
I didn't read the parts about meta-analysis at the end but, along with a few things I was quite aware of, I did get a shocking lesson about how very badly underpowered the great majority of statistical research is. It's dismaying but important to hear.
Very readable. If your research is critical, effect sizes and statistical power bridge the gap between merely trying to avoid sampling error and a real study of an effect. Contains thirty recommendations for researchers.