Our contribution on the subject of interviewer variability is made in the spirit of the opening quotation, endorsed as it is by two leading authorities in the field: conscious that we can add only a small part to the fabric of knowledge, but conscious, too, that any contribution is of value which throws light on the crucial interface between interviewer and respondent. In fact, a review of the literature reveals interviewer variability to be a more shadowy, more elusive subject than one would expect, with the very existence of the phenomenon open to doubt.
The quantitative and qualitative productivity of interviews depends on many factors. Among these are the interviewers themselves, who under certain circumstances can exert considerable influence on the interview situation and on the outcome of the enquiry. In standardised interviews the chance, or the danger, of such interviewer influence tends to be minimised, because there is no scope for variation in the wording of the questions or their order. The many problems that nevertheless remain, and the additional difficulties that arise as a result of standardisation, are thoroughly discussed on a theoretical basis in the literature. Unfortunately, however, practical investigations of this group of questions (or published results thereof!) are lacking in both number and scope, investigations that would make it possible to verify, refine and evaluate this theoretical work. In this connection it was instructive to attach to two larger surveys, carried out under the author's leadership within the framework of the seminar for market and consumer research at the Friedrich-Alexander University at Erlangen-Nuremberg, a detailed questioning of the interviewers themselves (about their experience when interviewing people in Nuremberg: an enquiry into enquiries, in fact), and to evaluate the results.
Our own experiment found no greater tendency for self-completion to yield critical answers: in this respect the two methods of data collection produced results which were very similar indeed. A likely explanation, which reconciles this finding with Scott's, is that our use of interviewers to recruit informants had the same inhibiting effect as using them to conduct an interview. Informants filling in the questionnaire consciously or unconsciously envisaged their replies being read by the interviewer who had given it to them. Indeed, the similarity between the responses to the two methods of questioning was more remarkable than any differences, and is likely to be of more practical significance. The rest of this section explores the meaning and implications of the divergences that did occur, and suggests ways of narrowing or widening the gap as appropriate.
In this article we investigate the interviewer's influence on the quality of research results a little further, ignoring many interesting aspects of the interview as a research technique and intentionally restricting ourselves to the interviewer. We attempt to show, mainly on the basis of the American literature, to what extent interview errors are due to factors other than insufficient understanding of the questions and carelessness in writing down the answers.
There has been some misplaced mistrust of surveys of attitudes, on the grounds that responses to attitude questions show more bias than responses on matters of fact (such as product usage and demographics). It has been suggested that one source of such bias could be interviewers consciously or unconsciously projecting their own attitudes to the topics being discussed onto their respondents. This paper examines this hypothesis within the total context of interviewer bias. It examines a specific survey carried out in Britain in which 100 interviewers conducted 1,730 interviews using a lengthy questionnaire covering a variety of usage, attitude and demographic data relating to the confectionery market. As a matter of routine quality-control checking, selected results are analysed by interviewer; in addition, in this case, each interviewer completed the full questionnaire at the briefing session, ostensibly as a training exercise. Half of the questionnaires were self-completed and the rest were administered face-to-face. These interviewer questionnaires have now been analysed and compared with the results obtained from the actual survey sample. Each interviewer's results have also been related to those of the people they interviewed, and the variations and relationships studied against a number of interviewer variables, such as age, experience, personality and geographic area, and against the interviewer's own attitudes to the subject of the survey.
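A minimal sketch, in Python with pandas, of the kind of interviewer-level comparison described above: each interviewer's own answer to an attitude question is set against the mean answer of the respondents she interviewed, and the two are correlated across interviewers. The column names ("interviewer_id", "attitude_score") and the toy data are assumptions for illustration, not the survey's actual coding.

```python
import pandas as pd

# Hypothetical layout: one row per respondent interview.
respondents = pd.DataFrame({
    "interviewer_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "attitude_score": [4, 5, 3, 2, 1, 5, 4, 4],  # e.g. a 1-5 agreement scale
})

# One row per interviewer: her own answer to the same attitude question,
# collected at the briefing session as described in the text.
interviewers = pd.DataFrame({
    "interviewer_id": [1, 2, 3],
    "own_attitude_score": [5, 1, 4],
})

# Mean respondent score within each interviewer's assignment.
per_interviewer = (
    respondents.groupby("interviewer_id")["attitude_score"]
    .mean()
    .rename("respondents_mean")
    .reset_index()
)

merged = per_interviewer.merge(interviewers, on="interviewer_id")

# If interviewers project their own attitudes onto respondents, the two
# columns should correlate positively across interviewers.
print(merged)
print("Correlation:", merged["respondents_mean"].corr(merged["own_attitude_score"]))
```

On real data one would, of course, also control for area and sample composition before attributing any such correlation to interviewer projection.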
The paper deals with two broad areas of bias: the preconceptions of the respondent and the preconceptions of the researcher. In relation to the former, the specific areas of bias dealt with are: A. the motivation of the respondent; B. the presentation of product samples; C. the words used to name product qualities. In relation to the latter, the specific areas are: A. the relationships selected for interpretation by the researcher; B. the questionnaire design. The basic principles in each of these specific areas are discussed, and an attempt is made to illustrate how the principles may be translated into operational terms to eliminate these sources of bias, for example: A. the three-call method of comparative testing, i.e. placing only one product at a time for a consumer's evaluation; B. prescriptive scales, i.e. the respondent indicates how he would like the product to be rather than describing the way he thinks it is; C. hierarchical statistical analyses, i.e. analyses which are structured in a manner corresponding to the hypotheses and the questionnaire.
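A minimal sketch, again in Python with pandas, of what a hierarchical analysis structured to match the questionnaire might look like: items are grouped into blocks corresponding to the hypotheses, block-level summary scores are examined first, and item-level detail is then read within each block rather than as one undifferentiated pool of cross-tabulations. The block and item names are invented for illustration.

```python
import pandas as pd

# Hypothetical coded responses: one row per respondent, one column per item.
df = pd.DataFrame({
    "taste_sweetness": [4, 5, 3, 4],
    "taste_texture":   [3, 4, 2, 4],
    "pack_colour":     [2, 2, 1, 3],
    "pack_shape":      [3, 2, 2, 3],
})

# The hierarchy: each hypothesis (block) owns a set of questionnaire items.
blocks = {
    "taste": ["taste_sweetness", "taste_texture"],
    "packaging": ["pack_colour", "pack_shape"],
}

# First level: one summary score per block, mirroring the hypothesis structure.
block_scores = pd.DataFrame(
    {name: df[items].mean(axis=1) for name, items in blocks.items()}
)
print(block_scores.describe())

# Second level: item detail, examined block by block.
for name, items in blocks.items():
    print(name, df[items].mean().to_dict())
```

The point of the structure is that the analysis plan, like the questionnaire, is fixed by the hypotheses in advance, so the researcher's preconceptions cannot steer which relationships get interpreted after the fact.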