This chapter sets out to explain the concepts behind the use of sampling in market research. It outlines some of the main options available to the survey designer and discusses the way sampling is carried out in practice. More complex topics are only introduced, with recommendations for more detailed reading.
The paper reviews the concept of sampling error as applied to various methods of sampling, and to television panels in particular. Approaches are described for calculating sampling errors in relation to: Programme/Commercial Break Ratings; Commercial Impacts/Average Hours of Viewing; Channel Reach; Channel Share; Schedule Reach and Frequency; and Changes Over Time. An approach to the use of sampling error is suggested, given that audience measurement figures have to be used in making decisions even when the results lie within statistical confidence limits. Panel analyses which can throw light on the contribution of sampling error are also discussed. An awareness of broad levels of sampling error is essential to users and should play a part in designing research and fixing sample sizes.
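To make the idea of statistical confidence limits concrete, here is a minimal Python sketch that computes an approximate 95% confidence interval for a programme rating, treating the rating as a simple random-sample proportion. This is an illustration only: real panel designs generally need a design-effect adjustment, and the function name and panel size used here are hypothetical.

    import math

    def rating_confidence_interval(rating_pct, sample_size, z=1.96):
        """Approximate 95% confidence interval for a rating, treating it
        as a proportion estimated from a simple random sample."""
        p = rating_pct / 100.0
        se = math.sqrt(p * (1 - p) / sample_size)   # standard error of a proportion
        margin = z * se * 100.0                     # convert back to rating points
        return rating_pct - margin, rating_pct + margin

    # A rating of 12 measured on a hypothetical 1,000-home panel:
    low, high = rating_confidence_interval(12.0, 1000)
    print(f"95% CI: {low:.1f} to {high:.1f} rating points")   # roughly 10.0 to 14.0

On these assumptions a 12 rating is only pinned down to within about two rating points either way, which is why users must still make decisions when observed differences lie within the confidence limits.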
Every researcher is asked the question: how big should the sample be? And every researcher has the same standard buck-passing answer: it depends on how accurately you want to measure what you measure. But given the amount of money that is traded on ratings numbers, it is important for the user to know what's real and what's statistical bounce in the surveys, and therefore how big the sample should be to reduce this bounce to acceptable levels. The trouble is that the simple textbook formula we all know, √(pq/n), doesn't apply to the complex sample designs and estimation procedures generally used in radio rating surveys. But techniques are available to estimate sampling errors empirically. BBM studies using such techniques show that there are two main influences on the size of sampling error, and that they pull in different directions. The use of more than one respondent per household, common in diary surveys, tends to increase sampling error, and to increase it more the wider the demographic. The use of average quarter-hour estimates tends to decrease sampling error, and to decrease it more the longer the time block being averaged. Generally, the latter effect dominates the former, meaning that the rating estimates are more reliable than the user might perhaps think. The details of sample design matter too, i.e. things like stratification, estimation procedures and, particularly, the weighting scheme. We provide a case study of how attention to small technical details can pay off in increased precision just as much as an explicit increase in sample size: technique is as important as size.
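A common way to express the two opposing influences described above is a design effect (deff) that inflates or deflates the textbook variance pq/n: clustering respondents within households pushes deff above 1, while averaging across quarter hours pulls it below 1. The Python sketch below uses assumed deff values of 1.5 and 0.7 purely for illustration; they are not BBM's published figures.

    import math

    def standard_error(p, n, deff=1.0):
        """Standard error of a proportion p on sample size n, scaled by a
        design effect deff (deff > 1: clustering hurts precision;
        deff < 1: averaging helps)."""
        return math.sqrt(deff * p * (1 - p) / n)

    p, n = 0.10, 2000   # a 10 rating from a hypothetical 2,000-diary sample
    print(standard_error(p, n))             # textbook sqrt(pq/n)
    print(standard_error(p, n, deff=1.5))   # household clustering inflates the error
    print(standard_error(p, n, deff=0.7))   # quarter-hour averaging deflates it

When the averaging effect dominates, the effective sample size n/deff exceeds the nominal one, which is the sense in which rating estimates can be more reliable than the raw sample size suggests.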
This paper looks at the various sources of bias and discusses how to reduce them or, at least, how to measure them.
In this and the following paper, the nature of "survey error" and the phenomena associated with it are explored.
In survey research it is very rare for all respondents in a given population to be interviewed. We usually take a sample of that population. We can do this because a sample can give us, if not the accuracy of a census (or full count), then sufficient accuracy for prediction purposes. This holds provided the sample is representative of the population from which it is drawn. Various sampling methods can be used to obtain a representative sample. Such samples can give results to given levels of precision, depending mainly on the size of the sample.
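As an illustration of how a required level of precision fixes the sample size, the standard textbook calculation n = z²pq/e² can be sketched in Python as follows; the 50% incidence and the ±3-point margin are assumed purely for the example.

    import math

    def sample_size(p, margin, z=1.96):
        """Sample size needed to estimate a proportion p to within
        +/- margin at 95% confidence (z = 1.96), assuming simple
        random sampling."""
        return math.ceil(z**2 * p * (1 - p) / margin**2)

    # Estimating a 50% incidence to within +/- 3 percentage points:
    print(sample_size(0.50, 0.03))   # 1068 respondents

Halving the margin of error to ±1.5 points would roughly quadruple the required sample, which is why precision targets should be settled before sample sizes are fixed.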
This paper discusses a problem frequently encountered in consumer market research: the requirement to obtain 'usable' samples of small minority market sectors under cost and time constraints. The author has not found the answer to this problem, and certainly does not claim that the approach he illustrates by way of a case history constitutes the answer.
The purpose of this paper is to discuss, within the current context of non-probability sampling, the derivation of criteria for determining efficient survey design and, within that, the effects of sampling error and bias in relation to other sources of bias and error.
Because of the qualitative nature of group discussions, they are often the subject of debate centering on two major issues: first, to what extent the results are generalizable to the real world (external validity); and second, what effect certain internal phenomena, such as moderator influence, treatment effect, sampling error and group interaction, have on the output (internal validity). This presentation draws together from the published literature a list of potential sources of bias which may interfere with the internal and external validity of group discussions. These potential sources of bias were evaluated and rated (on a 5-point scale, from slightly harmful to very harmful) by a panel of well-known and respected group discussion moderators in Canada and the U.S.A. A consensus was reached concerning a measure of the "harmful effects" of forty-four sources of possible bias.
There has been ample documentation over the years of the various sources of error in surveys. This paper reviews some of the more important work presented by academic, government and commercial organisations. These sources of error in commercial work carried out in the U.S. continue to be of sufficient magnitude to seriously mar the value of these studies. A generalised model of the survey process is presented which is helpful in relating the real world to the survey mechanism, the reported values and their relevance to marketing decisions.
This paper has sought to show that the sampling problems could be overcome efficiently and economically. The identification problems could also be solved if the naive idea that one is simply after 'the decision makers' were abandoned: a definable group of decision makers who can first be identified and then sampled is a fantasy which ignores the reality of how commercial concerns operate. With a more realistic approach to classifying the influences which executives exercise on purchasing decisions, there are no good reasons why valid media data relating to commercial purchasing should not become available.
This paper is concerned with the quality of the data derived from the personal interview. The possible sources of error at each stage of the interview are examined. Error arising from the selection of respondents by the interviewer and from faulty questionnaire construction is also included. Relevant research findings are discussed. Suggestions are made as to how error in the personal interview can be minimised. Finally, a plea is made for more attention to be paid to the raising of fieldwork standards.