The purpose of this paper is to show how the Internet can be, and has been, used commercially to conduct research. It also aims to provide guidelines for future studies.
We start our review with a taxonomy of different types of computer-assisted interviewing and a discussion of data quality. Next, we present a model of the factors that may lead to differences in data quality between computer-assisted and traditional interview procedures. Subsequently, we give an overview of the results of empirical research on data quality differences. Finally, we discuss the consequences of our findings for social and market research.
This paper, however, focuses on ways that CAPI (Computer Assisted Personal Interviewing), when linked with other technological developments, can bring the same benefits to our operational departments. It describes how we can gain increased efficiency and process control, and proposes a means of encapsulating interviewer quality data and using it in a practical way. Whilst the examples given are based upon CAPI, with a few changes to the specifics described below they can also be applied to CATI (Computer Assisted Telephone Interviewing).
We are increasingly aware of the need for total quality management approaches in the more process-oriented aspects of market research, such as data collection and tabulation. However, the way in which we communicate study findings to clients and the general public is perhaps the most critical part of what we do. Over the last few years there have been significant advances in computer graphics, which have given the researcher enormous powers of data presentation, and audiences greater expectations, in a world that is moving away from the written word and numbers to graphic and symbolic representations of information. This paper maintains that researchers may not yet be exploiting the new media to their fullest because they are not observing some of the fundamentals of design that are most important to conveying findings clearly. Inspired by the great variance in presentation styles and quality, a market researcher and a graphics designer examine the environment for graphics communication in market research. The paper covers the basic theories and practice of visual graphics; the tools available for the production of quality communications; the uses and abuses of graphics as they apply to the researcher; and the future role that graphics can play in the research industry. It concludes that while there are no rules for good or bad graphics, both can be recognized, but only the good is ultimately remembered. Researchers must appreciate the growing need to develop their ultimate end product for marketers and not for other researchers. By integrating quality graphic communications into the TQM process, the industry will improve its ability to communicate and therefore the perceived value of research.
Introduction to the ESOMAR monograph vol.6: "Market Research And Information Technology: Application And Innovation".
This paper will discuss the technological evolution undergone by the ACNielsen household panel, based on the integration of different tracking tools, with the final objective of gaining a new insight into consumers' behaviour: an insight that may combine information based on the observation of consumer behaviour with information based on the evolution of attitudes and opinions. First, we will describe the evolution of the household panel, its structure, and the main implementation problems encountered in evolving from a traditional panel to a technology-intensive one, in terms of recruitment, panel self-selection, and acceptance of the collection mode within recruited households. We will then focus on the advantages of using computer-aided self-interviewing (CASI) in the panel, briefly comparing this method to other existing survey methods.
This paper is mostly about the future of face-to-face interviewing. My thesis is that we will see two revolutions in the methodology of data collection in this area before the end of the century, which I shall call the 'CAPI' and the 'HAPPI' revolutions (the latter would cover telephone interviewing as well). I would like to describe how each of these is likely to come about and then discuss some of the implications for the structure and conduct of research. Whilst I will be basing most of my argument on the situation in the UK, there is no reason why most of what I have to say should not apply in much of the rest of Europe, and possibly beyond.
This paper describes progress to date on a Government-supported programme to evaluate the potential applications of Automatic Speech Recognition (ASR) to market and survey research. The paper reviews the growth of telephone research in Europe in recent years and goes on to discuss how technology might be used to reduce costs without reducing quality. ASR is briefly described and its potential for self-completion discussed. The paper progresses to describing the manner in which an original, large-scale, and representative English speech database has been built up for use in sub-word modelling. The body of the paper discusses the new and challenging issues of conducting "interviewerless" telephone interviews. The ways in which software is required to replace the implicit skills of interviewers are outlined, and examples are given of how these have been tested in the field by replicating an existing continuous customer quality service.
The differences between the requirements for CATI, CAPI, CASI, and now WWW interviewing are much smaller than the shared requirements. We infer that the principal investment for both software developers and users is the mechanism for presenting an interview (the "CAI engine"), and the challenge for developers is to reuse this engine to deliver each variant of CAI. We will now consider the various applications of Internet technology to CAI, keeping in mind that we wish to maintain as much compatibility as possible across the various CAI modes.
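The architecture implied here can be sketched in code: a single engine owns the questionnaire logic (routing, answer validation), while each mode supplies only a thin front-end. This is an illustrative sketch, not the paper's implementation; all class and function names are hypothetical.

```python
# Hypothetical sketch of a shared "CAI engine" reused across interviewing modes.
# The engine holds question routing and answer validation; mode-specific
# front-ends (CATI, CAPI, CASI, WWW) differ only in how answers are gathered.

class CAIEngine:
    """Presents an interview: tracks the current question and validates answers."""

    def __init__(self, questions):
        # questions: list of (question_id, question_text, set_of_valid_answers)
        self.questions = questions
        self.answers = {}
        self.index = 0

    def done(self):
        return self.index >= len(self.questions)

    def current_question(self):
        return None if self.done() else self.questions[self.index]

    def record_answer(self, answer):
        qid, _text, valid = self.questions[self.index]
        if answer not in valid:
            raise ValueError(f"invalid answer for {qid}: {answer!r}")
        self.answers[qid] = answer
        self.index += 1


def run_casi(engine, responses):
    """A self-administered (CASI) front-end: feeds respondent-entered answers.
    A CATI or WWW front-end would reuse the same engine, changing only the I/O."""
    for answer in responses:
        engine.record_answer(answer)
    return engine.answers


questions = [
    ("q1", "Do you own a PC?", {"yes", "no"}),
    ("q2", "Have you used the WWW?", {"yes", "no"}),
]
result = run_casi(CAIEngine(questions), ["yes", "no"])
print(result)  # {'q1': 'yes', 'q2': 'no'}
```

The design choice this illustrates is the one the passage argues for: the investment sits in the engine, so delivering a new CAI variant means writing only a new front-end loop, not a new questionnaire interpreter.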
We hypothesized that differences between input techniques depend on whether an interviewer asks the questions and handles the computer, or the interview is conducted as a self-administered interview. We therefore used this interview mode as a second factor and tested whether there was a difference in the number of answers between self-administered interviews and interviews conducted by interviewers. We used interviewers who had high typing skills because we thought that the difference between paper-and-pencil interviews and keyboard interviews would disappear when the people keying in the data have high typing skills.