Sample size

Date of publication: July 1, 1995

Author: Ken Purdye

Abstract:

Every researcher is asked the question: how big should the sample be? And every researcher has the same standard buck-passing answer: it depends on how accurately you want to measure what you measure. But given the amount of money that is traded on ratings numbers, it is important for the user to know what's real and what's statistical bounce in the surveys, and therefore how large the sample should be to reduce this bounce to acceptable levels. The trouble is that the simple textbook formula we all know, √(pq/n), doesn't apply to the complex sample designs and estimation procedures generally used in radio rating surveys. But techniques are available to estimate sampling errors empirically. BBM studies using such techniques show that there are two main influences on the size of the sampling error, and that they pull in different directions. The use of more than one respondent per household, common in diary surveys, tends to increase sampling error, and to increase it more, the wider the demographic. The use of average quarter-hour estimates tends to decrease sampling error, and to decrease it more, the longer the time block being averaged. Generally, the latter effect dominates the former, meaning that the rating estimates are more reliable than the user might think. The details of sample design matter too, i.e. things like stratification, estimation procedures and, in particular, the weighting scheme. We provide a case study of how attention to small technical details can pay off in increased precision just as much as an explicit increase in sample size: technique is as important as size.
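To make the two opposing effects concrete, here is a minimal sketch in Python. The √(pq/n) line gives the textbook standard error for a simple random sample; the second figure applies an assumed household design effect (inflation) and an assumed amount of independent information across the quarter-hours of a time block (deflation). All numbers, names and the simple adjustment formula are illustrative assumptions for exposition, not BBM's actual figures or the empirical replication techniques discussed in the paper.

```python
import math

def naive_se(p: float, n: int) -> float:
    """Textbook standard error of a proportion under simple random sampling: sqrt(p*(1-p)/n)."""
    return math.sqrt(p * (1 - p) / n)

# Illustrative inputs only (assumed, not BBM data): a 10% rating from 2,000 diaries.
p, n = 0.10, 2000
se_srs = naive_se(p, n)

# Clustering of several respondents within one household inflates the variance
# by a design effect (deff > 1); averaging across the quarter-hours of a time
# block deflates it, roughly in proportion to the effective number of
# independent quarter-hours contributing to the average. Both values below are
# hypothetical.
deff_household = 1.4          # assumed inflation from multiple diaries per household
effective_quarter_hours = 6   # assumed independent information in the averaged block

se_design = se_srs * math.sqrt(deff_household / effective_quarter_hours)

print(f"Naive SE, simple random sampling: {se_srs:.4f}")
print(f"Approximate SE under the complex design: {se_design:.4f}")
```

With these assumed values the averaging effect outweighs the household clustering, so the adjusted standard error comes out smaller than the textbook one, which is the direction of the result the abstract describes.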

