# Statistics: Unlocking the Power of Data

## Book Preface

Helping students make sense of data will serve them well in life and in any field they might choose. Our goal in writing this book is to help students understand, appreciate, and use the power of statistics and to help instructors teach an outstanding course in statistics.

The text is designed for use in an introductory statistics course. The focus throughout is on data analysis and the primary goal is to enable students to effectively collect data, analyze data, and interpret conclusions drawn from data. The text is driven by real data and real applications. Although the only prerequisite is a minimal working knowledge of algebra, students completing the course should be able to accurately interpret statistical results and to analyze straightforward datasets.

The text is designed to give students a sense of the power of data analysis; our hope is that many students learning from this book will want to continue developing their statistical knowledge. Students who learn from this text should finish with

• A solid conceptual understanding of the key concepts of statistical inference: estimation with intervals and testing for significance.
• The ability to do straightforward data analysis, using either traditional methods or modern resampling methods.
• Experience using technology to perform a variety of different statistical procedures.
• An understanding of the importance of data collection, the ability to recognize limitations in data collection methods, and an awareness of the role that data collection plays in determining the scope of inference.
• The knowledge of which statistical methods to use in which situations and the ability to interpret the results effectively and in context.
• An awareness of the power of data analysis.

## Building Conceptual Understanding with Simulation Methods

This book takes a unique approach of utilizing computer simulation methods to introduce students to the key ideas of statistical inference. Methods such as bootstrap intervals and randomization tests are very intuitive to novice students and capitalize on visual learning skills students bring to the classroom. With proper use of computer support, they are accessible at very early stages of a course with little formal background. Our text introduces statistical inference through these resampling and randomization methods, not only because these methods are becoming increasingly important for statisticians in their own right but also because they are outstanding in building students' conceptual understanding of the key ideas.
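The preface leaves the mechanics to the course technology; purely as an illustration of the bootstrap idea it describes, here is a minimal Python sketch of a percentile bootstrap interval for a mean. The sample data and function name are made up for the example, not taken from the book.

```python
import random
import statistics

def bootstrap_interval(data, n_resamples=10000, confidence=0.95, seed=42):
    """Percentile bootstrap confidence interval for the mean.

    Resample the data with replacement many times, compute the mean of
    each resample, and take the middle `confidence` fraction of the
    resulting bootstrap distribution as the interval.
    """
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(data, k=len(data)))
        for _ in range(n_resamples)
    )
    alpha = (1 - confidence) / 2
    lo = means[int(alpha * n_resamples)]
    hi = means[int((1 - alpha) * n_resamples) - 1]
    return lo, hi

# Hypothetical sample for the demonstration.
sample = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]
low, high = bootstrap_interval(sample)
print(f"95% bootstrap interval for the mean: ({low:.2f}, {high:.2f})")
```

The appeal for novices is visible in the code itself: the interval comes straight from a simulated distribution of sample means, with no normal-theory formula in sight.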

Our text includes the more traditional methods such as t-tests, chi-square tests, etc., but only after students have developed a strong intuitive understanding of inference through randomization methods. At this point students have a conceptual understanding and appreciation for the results they can then compute using the more traditional methods. We believe that this approach helps students realize that although the formulas may take different forms for different types of data, the conceptual framework underlying most statistical methods remains the same.

Our experience has been that after using the intuitive simulation-based methods to introduce the core ideas, students understand and can move quickly through most of the traditional techniques. Sir R.A. Fisher, widely considered the father of modern statistics, said of simulation and permutation methods in 1936:

"Actually, the statistician does not carry out this very simple and very tedious process, but his conclusions have no justification beyond the fact that they agree with those which could have been arrived at by this elementary method."
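The "very simple and very tedious process" Fisher describes is exactly what a computer can now do in a fraction of a second. As an illustration only (the data and function name are invented for this sketch, not drawn from the text), here is a two-sample randomization test in Python:

```python
import random
import statistics

def randomization_test(group_a, group_b, n_shuffles=10000, seed=0):
    """Two-sided randomization test for a difference in means.

    Under the null hypothesis the group labels are arbitrary, so we
    repeatedly shuffle the pooled values, split them back into two
    groups of the original sizes, and count how often the shuffled
    difference in means is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    count = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)
        diff = (statistics.mean(pooled[:len(group_a)])
                - statistics.mean(pooled[len(group_a):]))
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_shuffles  # estimated p-value

# Hypothetical treatment and control measurements.
treatment = [8.1, 7.9, 9.2, 8.6, 9.0, 8.8]
control = [7.0, 7.4, 6.8, 7.7, 7.1, 7.5]
p = randomization_test(treatment, control)
print(f"two-sided randomization p-value: {p:.4f}")
```

Carrying this out by hand in 1936 would have meant tabulating hundreds of rearrangements; the loop above does the same "elementary method" instantly.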

Modern technology has made these methods, too "tedious" to apply in 1936, readily accessible today. As George Cobb wrote in 2007:

"… despite broad acceptance and rapid growth in enrollments, the consensus curriculum is still an unwitting prisoner of history. What we teach is largely the technical machinery of numerical approximations based on the normal distribution and its many subsidiary cogs. This machinery was once necessary, because the conceptually simpler alternative based on permutations was computationally beyond our reach. Before computers statisticians had no choice. These days we have no excuse. Randomization-based inference makes a direct connection between data production and the logic of inference that deserves to be at the core of every introductory course."
