This paper considers the implications of new analytic advances for Internet-based surveys of non-random samples. The empirical evidence presented indicates that Internet-based forecasts in the 2000 U.S. elections were twice as accurate as telephone forecasts in common races, and that propensity score adjustment, a technique designed to minimize the biases associated with non-random samples, was a main reason for the difference. As the credibility of any predictive research approach may hinge on its ability to pass this acid test, the evidence is important, with implications for research methods other than Internet-based surveys. For instance, propensity score adjustment could also be exploited by research organizations with panels measuring Internet activity, off-line purchase behavior and TV viewing behavior. Such organizations could rely on propensity score adjustment to link attitudinal, opinion and other information to the behavioral information collected through these panels without compromising data quality or substantially increasing the cost of data collection.
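The abstract does not spell out the mechanics of the adjustment. As an illustration only, the sketch below shows one common form of propensity score weighting, with synthetic data, hypothetical covariates and a logistic-regression propensity model standing in for whatever specification the forecasters actually used; a reference sample such as a telephone survey is assumed.

```python
# Minimal sketch of propensity score weighting for a non-random (online) sample.
# Hypothetical covariates and synthetic data; not the procedure used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Reference sample (e.g., a telephone survey) and a volunteer online sample,
# described by the same covariates.
n_ref, n_web = 1000, 1000
X_ref = rng.normal(0.0, 1.0, size=(n_ref, 3))
X_web = rng.normal(0.5, 1.0, size=(n_web, 3))   # online panel skews on the covariates

# Model the propensity of belonging to the online sample given the covariates.
X = np.vstack([X_ref, X_web])
z = np.concatenate([np.zeros(n_ref), np.ones(n_web)])  # 1 = online respondent
model = LogisticRegression().fit(X, z)
p = model.predict_proba(X_web)[:, 1]                   # propensity scores for online cases

# Weight online respondents by the inverse odds of selection so their
# covariate profile matches the reference sample.
weights = (1.0 - p) / p
weights *= n_web / weights.sum()                       # normalise to the sample size

# Weighted estimate of a survey outcome observed only in the online sample.
y_web = rng.binomial(1, 0.5, size=n_web)
adjusted_share = np.average(y_web, weights=weights)
print(round(adjusted_share, 3))
```

Under this weighting, online respondents who look least like the reference sample receive the largest weights, which is what allows attitudinal data from a volunteer panel to be projected onto a population benchmark.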
The "audiometer", a generic term perhaps preferable to "radiometer" given that it applies equally to TV and to radio audience measurement, offers the marketplace a major technological advance in the measurement of broadcast audiences. The Radiocontrol Watch is used in Switzerland for national radio audience measurement. In the United States, Arbitron has mounted its Philadelphia 300 Panel. This paper reviews the critical issue of the validity of these new systems. Do they actually measure what they purport to measure? Do people actually wear the devices from first thing in the morning to last thing at night? How exposed are they to false negatives and false positives: failing to pick up media exposure that has taken place, or crediting TV/radio viewing/listening when there was none? These are early days, but the early evidence from Arbitron and Radiocontrol looks most promising.
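Validity here is framed in terms of false negatives and false positives, which can be scored directly once meter credits are lined up against a reference record of actual exposure. The snippet below is a minimal, purely illustrative sketch of that scoring, using made-up minute-by-minute data rather than anything from the Arbitron or Radiocontrol trials.

```python
# Minimal sketch: false-negative and false-positive rates for a portable meter,
# scored minute by minute against a hypothetical reference record of actual exposure.
# The data and the scoring rule are illustrative assumptions only.
reference = [1, 1, 0, 0, 1, 1, 1, 0, 0, 0]  # 1 = respondent actually exposed in that minute
meter     = [1, 0, 0, 1, 1, 1, 0, 0, 0, 0]  # 1 = device credited exposure in that minute

false_negatives = sum(r == 1 and m == 0 for r, m in zip(reference, meter))
false_positives = sum(r == 0 and m == 1 for r, m in zip(reference, meter))
exposed_minutes = sum(reference)
unexposed_minutes = len(reference) - exposed_minutes

print(f"false-negative rate: {false_negatives / exposed_minutes:.0%}")   # missed exposure
print(f"false-positive rate: {false_positives / unexposed_minutes:.0%}") # phantom exposure
```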
Using a quasi-experimental design in which data collection methods and recruitment techniques were varied as influencing factors, Ipsos Germany conducted a comparative investigation for Langnese/Unilever to assess the validity of online-panel surveys. Against the background of both the debate on methodological standards in online research and the popularity of online panels, empirical findings are required for an objective and thorough assessment of the possibilities of this instrument. The results of this survey are therefore relevant for the further development of online research, and not only of research based on the panel approach.
In business and marketing, qualitative research is needed but rarely accepted. Data from focus group discussions, case histories or interviews require qualitative analysis if the researcher is to discover new meanings in accounts or interpret discussions of issues or topics. Qualitative methods are, however, widely seen as entailing inefficient and time-consuming data management, analysis processes that lack rigor, and reports that are unpersuasive. Early qualitative computing programs failed to solve these problems of time, rigor and credibility. New computing tools, however, deliver speed, efficiency, integration of qualitative and quantitative data, rigorous and sensitive analysis, and rich presentation of results.
In order to relate theoretical considerations on the integration of multiple research tools to their application in reality, the criteria of validity should be mentioned, as well as their mutual compatibility. This is a long and complicated matter, and my intention is to draw only a few guidelines. I would like to start from some real ideas arising at critical moments of research work, more precisely from uncertain and contradictory results.
The study's purpose: to pinpoint problems with the reliability and validity of particular survey questions. Subjectivity is thus associated with a lack of reliability and validity. Now obviously, objective scientific research has to concern itself very much with reliability and validity; but the pertinent question here is: reliability and validity with respect to which criteria? If we register a respondent as saying "no, for heaven's sake" to a particular question, we may want to know his or her answers to related questions and the strength of his or her feelings with respect to those; in short, the universe or concourse for that particular respondent. Reliability and validity may then be established within the subjective experience, as communicated to the researcher, of this person; in short: a single-case analysis.
The paper is divided into four parts illustrating the problems of verification a Company can face, within market realities, when it decides to launch, relaunch or modify a product. This verification takes place after the research preliminary to the launch has already been carried out and the Company decides to put the product on the market. The Authors of this paper, who work respectively in a Confectionery Company and in an Institute specialized in company problem-solving research, have perfected this research methodology, which in the past 7 years has proven its validity in almost 50 cases across different Companies and markets. The main subjects presented in the paper concern: 1) the problem that originated the research, with the repercussions brought about by the necessity to make rapid decisions concerning the strategies to be undertaken for a new product; 2) the methodology, illustrating the two major research phases, both with the Trade and with the purchasers/consumers, and the setting up of standard indicators allowing the creation of a database both at the Company and at the Institute; 3) a case history of the Company showing in detail the various research phases and the results obtained in each of them; 4) a reading of the results, comparing the findings of the research on the new product with the Company's database and the Institute's standards.
Beginning in 1981 a new facility became available in France: the household audience meter coupled with the Audimat panel system. Each day the viewing figures for the previous 24 hours were made available, precise to the second. And because these measurements were taken automatically, they were bias-free. These measurements, however, only recorded the operation of television sets, without showing who was watching or listening. A system already in use in a number of European countries attracted the attention of the various parties concerned: the push-button system of audience measurement, which complements the automatic household viewing measurements with personal declarations of viewing presence in the households concerned. Declarations, made instantaneously by remote control, identify the individual concerned. However, the professionals were not convinced that this system would be successful in France, believing it to be too constricting and fearing that it would introduce a sampling bias or encourage irregular declarations of audience behaviour. To answer these doubts in greater depth than by simply examining the experiences of different European countries - which had in fact proved positive - two tests were carried out. Participation in push-button panels for TV audience measurement is today of a very high standard and, together with the extremely representative character of the panels themselves, provides audience measurement figures that are very reliable indeed. We shall now describe the requisite methodological steps and their results, essential in France as in all other countries for a fully reliable system.
This paper describes the results of a methodological test conducted by NOW Research, on behalf of British Telecom - PRESTEL. The purpose of the research was to assess the validity and efficiency of collecting market research data through the medium of a television screen using PRESTEL, British Telecom's public viewdata service, compared with central location telephone interviewing.
Ever longer time-series trends in survey research are increasingly opening up opportunities to determine covariance and formulate hypotheses about causality through analysis of the connections with other data: from company statistics, from media content analyses of the print and electronic media (both advertising and editorial pages), or from official statistics. The validity of such hypotheses increases with the size of the sample and the length of the time period within which they are confirmed, thus providing increasingly firm ground for economic and political decisions. Research is faced with new challenges to develop theories which can clarify unanticipated connections.
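As an illustration of the kind of analysis implied here, the sketch below correlates a hypothetical survey trend with an external indicator at a one-period lag; the series, the lag and the effect size are assumptions, but the general point carries over: the longer the series, the smaller the p-value attached to the same correlation, and so the firmer the ground for the hypothesis.

```python
# Minimal sketch: relating a survey time series to an external indicator at a lag.
# The series and the one-quarter lag are illustrative assumptions only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

quarters = 40                                   # length of the time series
indicator = rng.normal(size=quarters)           # e.g., an official statistic
# Hypothetical survey trend that follows the indicator one quarter later, plus noise.
survey = np.roll(indicator, 1) * 0.6 + rng.normal(scale=0.8, size=quarters)

# Test the lagged association; a longer series gives the same r a smaller p-value.
r, p = pearsonr(indicator[:-1], survey[1:])
print(f"lag-1 correlation r = {r:.2f}, p = {p:.3f}")
```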