Peoplemeter panels have been the principal tool for measuring TV audience behaviour for about ten years now, yet there is no universal agreement on the right way of running such a panel. Although bodies such as the EAAA and EBU have started to explore the issue of harmonisation, many key areas have not yet been fully addressed, while practice on other continents continues to differ in many ways. Metering systems differ electronically, but most meters now provide the same basic research functions: monitoring the station tuned and registering viewers via a remote handset. There is, however, much disparity, some of it long-standing, in the area of panel management. Three examples are given: the use of a claimed weight of viewing control; measures of and remedies for unpressed buttons; and the use of enforced rotation. Finally, many innovative and valuable methodological studies have been carried out, yet their lessons have not always been adopted elsewhere. This paper reviews where we are and how we got here, and asks where in heaven's name we are going.
The BARB audience measurement system has the reputation of being the most sophisticated in the world. But because of its methodologies and its quantitative nature, the advertising industry has, to a greater or lesser extent, been forced into making a number of assumptions:
- the peoplemeter button is pressed, so the viewer must be paying attention
- if someone is watching, they must want to do so
- impacts have the same value irrespective of where they are placed
While we may deny that we make these assumptions, in practice we all do. If we don't, then why do we continue to put average OTS figures at the bottom of media plans? If we suspect that some programmes or some spots have a greater or lesser value, then surely the effective number of impacts should be adjusted to reflect those variations. On the other hand, if these assumptions are accepted as broadly true, so that it doesn't matter what is bought and a rating is a rating, think just for a second about the enormous blind faith being placed in BARB: a system which was recently reporting that 16% of all viewing was accounted for by uninterrupted viewing sessions of eight hours or more! The point of this paper is not to knock BARB, nor has it been written to stimulate a technical debate, but to ask why our normally high levels of inquisitiveness are suspended when common sense and common practice don't gel.
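To make the arithmetic concrete, here is a minimal sketch in Python, with entirely hypothetical spot data and attention weights of our own invention (BARB reports no such weights), showing how an average OTS figure can conceal large differences in the effective impacts a schedule delivers.

```python
# Hypothetical schedule: raw impacts (000s) per spot and an assumed
# attention weight per spot (1.0 = full value, 0.0 = no value).
# Both the spots and the weights are illustrative only.
spots = [
    {"slot": "peak drama centre-break", "impacts": 800, "attention": 0.9},
    {"slot": "daytime end-break",       "impacts": 300, "attention": 0.5},
    {"slot": "late-night movie",        "impacts": 500, "attention": 0.7},
    {"slot": "breakfast news",          "impacts": 400, "attention": 0.6},
]

raw_impacts = sum(s["impacts"] for s in spots)
effective_impacts = sum(s["impacts"] * s["attention"] for s in spots)

coverage = 1200  # hypothetical net coverage (000s) of the schedule
print(f"Raw impacts:       {raw_impacts}")                 # 2000
print(f"Average OTS:       {raw_impacts / coverage:.2f}")  # 1.67
print(f"Effective impacts: {effective_impacts:.0f}")       # 1460
```

On these made-up numbers the plan delivers 2,000 impacts and an average OTS of 1.67, but only about 1,460 attention-adjusted impacts: a rating is not automatically a rating.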
In February 1992 Nielsen Media Research announced a change in the existing People Meter methodology: specialists (called People Meter Representatives, or PMRs) were to be added to the field force to provide intensified training in the use of the people meter in households with children and teenagers. Research and pragmatic considerations were successfully integrated during the development, gradual implementation and ongoing evaluation of this methodological enhancement. Our goal in making this change was to improve the accuracy of audience estimates produced by the people meter system. Through the use of intermethod comparisons (including set meter, diary and coincidental data), we developed clear criteria of accuracy. This empirical grounding was enriched with customer input, experience-based knowledge and consultant perspectives. We then formulated hypotheses to define objective goals and to guide the development of specific procedures designed to improve accuracy. A preliminary field implementation stage was intentionally limited in its scope and included qualitative evaluation. Since this early implementation was conducted in the live panel, we monitored the process continuously and were prepared to halt the roll-out or even retract the change. As we moved to the full-scale roll-out, we maintained continuous evaluation of effects. Although this process change did not result in measurable differences in the overall parameters of television viewing, many components of the viewing measure that were predicted to change in the direction of greater accuracy showed statistically significant improvement. By its nature, a change in the initial treatment of panel members will be implemented gradually as the panel turns over. This provides a transition paradigm. Although this gradual transition cushions any potential discontinuity in the media marketplace, there are still key issues and sensitivities around the definition of "truth" and the valuation of audience during a long transition cycle. The paper concludes with a discussion of theoretical limits to the measurement of process change effects and the practical issues of providing a stable platform for the television industry while continuously improving the measurement standard.
There are currently 21 peoplemeter panels operating in 18 countries in Europe. At a European industry level, a great deal of work has been done on the subject of harmonization of television research. The new comprehensive survey from the EAAA was conducted over a period of 18 months by Dr. Toby Syfret, media research advisor to the EAAA. The report is invaluable in identifying opportunities to reduce variability and increase the unity between systems. From an advertiser's point of view, some of the variations shown in the report appear significant. At a very basic level, a multi-national advertiser needs to know whether a GRP in one country is the same as a GRP in another. Concentrating on the methodological variations in the calculation and reporting of GRPs, Carat identified three areas for investigation: operational definitions, the components of a commercial rating (e.g. guest viewing) and the definitions of reporting categories. In order to evaluate their effects, Carat commissioned RSMB to re-calculate the ratings for a selection of UK schedules using the respondent-level data from the BARB panel. This simulation demonstrates that the conventions adopted in some countries can lead to large variations in reported commercial impacts.
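As an illustration of the kind of re-calculation described, the sketch below (Python, with invented respondent-level records rather than real BARB data) shows how a single convention choice, namely whether guest viewers count towards a commercial rating, changes the reported GRPs for the same commercial minute.

```python
# Invented respondent-level records: each entry is one person viewing
# during the commercial minute, with a projection weight.
# "guest" marks viewers who are not panel members of the household.
viewers = [
    {"person": "P1", "weight": 25_000, "guest": False},
    {"person": "P2", "weight": 25_000, "guest": False},
    {"person": "P3", "weight": 30_000, "guest": False},
    {"person": "G1", "weight": 25_000, "guest": True},
]
universe = 1_000_000  # hypothetical adult universe

def grp(records, include_guests):
    """Gross rating points for the minute under a chosen convention."""
    audience = sum(r["weight"] for r in records
                   if include_guests or not r["guest"])
    return 100 * audience / universe

print(f"GRP including guests: {grp(viewers, True):.1f}")   # 10.5
print(f"GRP excluding guests: {grp(viewers, False):.1f}")  # 8.0
```

The same logic extends to the other conventions examined, such as minute-attribution rules and the definition of reporting categories.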
With active meter panels such as Arbitron's ScanAmerica® service, what contact with panelists is appropriate? Phenomenological psychology, which emphasizes understanding experience as well as behavior, suggests principles of panelist contact. Several Arbitron studies demonstrated the benefits of ongoing, personalized contact and helped elaborate a program of panel relations. A 1984 pilot study showed that contact with a panel relations representative helped the panelists remain motivated, even when there were problems with the early prototype equipment. These results were confirmed by a larger study, a full-panel debriefing of Denver panelists in 1986. In-home debriefing interviews confirmed that panelists experienced the panel relations contact as an important benefit. These contacts received the highest attitudinal ratings of any dimension of panel experience. In 1988 an analysis of panelist-initiated backouts showed that performance-feedback contacts were associated with significantly increased panel tenure. This panel-treatment variable explained more tenure variation than household demographics among the selected group of backouts. These results, along with the phenomenological framework, contribute to current ScanAmerica contact procedures. Examples of these include a toll-free, 24-hour panelist hotline, regular performance feedback (both positive and negative), panelist suggestion programs, and panel newsletters which include an activity-oriented children's corner. This paper also reviews procedures that help minimize bias in panel relations contacts (scripting telephone contacts, data and field audits, and organizational checks and balances). We conclude by mentioning remaining methodological issues such as the need to determine optimal contact schedules, and to assess the impact of contact by demographic.
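The tenure finding can be pictured with a small sketch (Python, using simulated data; the variables and effect sizes are ours, not Arbitron's): fit one model of panel tenure on household demographics alone and one that adds a performance-feedback indicator, then compare the variance each explains.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500

# Simulated households: demographics plus a panel-treatment indicator.
hh_size  = rng.integers(1, 6, n)
income   = rng.normal(35, 10, n)   # hypothetical household income ($000s)
feedback = rng.integers(0, 2, n)   # 1 = received performance-feedback contacts

# Simulated tenure in months, with the treatment effect deliberately made
# larger than the demographic effects to mirror the direction of the finding.
tenure = 12 + 0.5 * hh_size + 0.05 * income + 6 * feedback + rng.normal(0, 4, n)

demo = np.column_stack([hh_size, income])
full = np.column_stack([hh_size, income, feedback])

r2_demo = LinearRegression().fit(demo, tenure).score(demo, tenure)
r2_full = LinearRegression().fit(full, tenure).score(full, tenure)

print(f"R-squared, demographics only:       {r2_demo:.2f}")
print(f"R-squared, plus feedback indicator: {r2_full:.2f}")
```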
The map of Europe is deceptively simple as far as TV measurement is concerned. The same technique (a panel equipped with electronic set and people meters) is used in every country, or will be by the end of the year, and only two companies, AGB and GfK, control these panels in every country except France, Finland and Sweden. In practice, the meaning of a TV rating is different in every single country (and sometimes within countries), and key measures, such as viewing minutes, channel share and even audience demographics, cannot be compared between countries because of the different ways in which they are calculated. This paper describes how this situation has developed, and why it is now likely to change. It details the main data needs of advertising agencies and advertisers from TV audience research, and then specifies those elements which currently have most effect on the differences between these measures in different European countries.
Changes within the UK television market, both recent and imminent, will move the buying and selling of TV airtime much closer to the negotiation of press space. The separate selling of Channel 4 airtime, the continuing growth of satellite penetration, the fragmentation of viewing and increased emphasis on coverage as well as cost will all put greater emphasis on research. Against this background, in 1990 BMRB (the owners of TGI) embarked upon the significant commercial undertaking of fusing TGI onto BARB to create Target Group Ratings (TGR). The objective was to create an extended and enhanced TV measurement currency capable of enabling TV sales contractors and airtime buyers to assess audience delivery by brand and product users rather than just standard demographics. The paper summarises the methodology, and demonstrates just some of the strengths and value that TGR data can provide to its users. The paper also highlights some of the BARB changes, the implications those changes may have on the fusion process, and assesses the early comparisons of the fused data with BARB single-source data.
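The abstract does not spell out the TGR fusion algorithm; the sketch below is only a generic illustration of the statistical-matching idea behind any such fusion, implemented here as nearest-neighbour matching on common demographics, with all data and variable names invented.

```python
import numpy as np

# Donor survey (stand-in for TGI): common demographics plus a product-usage flag.
tgi_demo  = np.array([[25, 1], [34, 2], [52, 3], [61, 1]])  # [age, social grade code]
tgi_usage = np.array([1, 0, 1, 0])                          # 1 = uses the product

# Recipient panel (stand-in for BARB): common demographics only.
barb_demo = np.array([[28, 1], [58, 3], [36, 2]])

def fuse(recipients, donors, donor_attr):
    """Attach each recipient to its nearest donor on the common variables
    and copy across the donor's attribute."""
    fused = []
    for r in recipients:
        distances = np.linalg.norm(donors - r, axis=1)
        fused.append(donor_attr[np.argmin(distances)])
    return np.array(fused)

print(fuse(barb_demo, tgi_demo, tgi_usage))  # imputed usage flags: [1 0 0]
```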
Audience estimates derived from data collected from meter households are provided daily and also cumulated over successive days to provide weekly, monthly, quarterly and annual audience estimates. The same estimates can be used to estimate change between various periods. Change can be expressed either as differences between the estimates for the periods or as ratios of those estimates. Some year-to-year changes may be viewed as unreasonable, resulting in uneasiness over whether the change is due to changes in the measurement process rather than changes in the population being measured.
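A minimal formulation of the two ways of expressing change, written in Python and assuming, for simplicity, independent estimates with known standard errors; in practice the overlap of panel households between years would complicate the variance terms.

```python
from math import sqrt

def change(y1, se1, y2, se2):
    """Year-to-year change as a difference and as a ratio, with approximate
    standard errors (independence assumed; delta method for the ratio)."""
    diff = y2 - y1
    se_diff = sqrt(se1**2 + se2**2)
    ratio = y2 / y1
    se_ratio = ratio * sqrt((se1 / y1)**2 + (se2 / y2)**2)
    return diff, se_diff, ratio, se_ratio

# Hypothetical average-audience estimates (000s) for two successive years.
d, sd, r, sr = change(y1=4200, se1=80, y2=3950, se2=85)
print(f"Difference: {d} +/- {sd:.0f}")
print(f"Ratio:      {r:.3f} +/- {sr:.3f}")
```

Judging whether an observed change is "unreasonable" then amounts to asking whether the difference or the ratio lies outside the range these error terms would allow.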
Eleven separate research studies, covering as many marketplaces, were conducted to determine which marketing communications tools are most useful in providing information about products and services specified by managers who have purchasing responsibility for their companies. Marketing communications evaluated were specialized business publications, trade shows, salespeople, conventions/seminars, direct mail, directories, daily newspapers, general business magazines, network television, radio, newsmagazines, consumer magazines and cable television. The eleven marketplaces included architecture, packaging, networking, fleet/trucking, grocery, restaurant, office supply, advertising, chemical, home improvement, and medical. Cumulatively, across the 11 segments, specialized business publications were rated first as the most useful source of information about business products, with salespeople second. Trade shows were listed third; conventions/seminars, fourth; and direct mail, fifth. At the bottom were cable television, general business magazines and radio. Of the 11 marketplaces, specialized business publications were significantly ahead in seven, and salespeople in two. In brief, these studies established the value both of targeted editorial and of the print advertising represented by the business publications covering the 11 different markets.
Up to now, the evaluation of media plans as part of an advertising strategy has only been possible at the level of the communication medium. For the medium of television, supplementary data concerning the advertising medium, i.e. the spot ratings, are available. A new model of media planning has been developed for the evaluation of media plans, a model which links, in a special dataset, media exposure, average reach and advertising exposure. This model has been developed in cooperation between advertisers, agencies and clients of TV audience research.
Advanced production technologies have contributed to a high quality standard for most branded consumer goods, e.g. detergents. This generally high standard inevitably leads to a high degree of perceived similarity in blind product tests: purely physical differences between the products and their effects on relative performance are below the discriminative threshold of most consumers. In order to provide differentiating cues for the customer, modern marketing employs a number of features such as brand personality and product aesthetics (e.g. color, form, fragrance). The less differentiated the physical properties within a product group, the more important aesthetic differences seem to be, above all on the fragrance dimension. We will defend this central thesis by referring to empirical data from home use tests (HUT) in various product categories. The main methodological tool here is multiple regression analysis with total preference as the criterion and ratings on multiple performance scales as predictors. In selected cases we compare preference shares and subjectively perceived product features as correlates and/or predictors. The findings corroborate the importance of product aesthetics as determinants of individual preference, with fragrance being one of the most influential factors.
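The main analytical tool can be sketched as follows (Python, with simulated ratings; the scale names and coefficients are invented, not taken from the HUT data): regress total preference on the performance-scale ratings and inspect the standardised weights, with the fragrance scale expected to dominate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Simulated home-use-test ratings on 1-7 scales (scale names invented).
cleaning  = rng.uniform(1, 7, n)
fragrance = rng.uniform(1, 7, n)
mildness  = rng.uniform(1, 7, n)

# Simulated total preference, constructed so that fragrance carries the
# largest weight, mirroring the paper's thesis.
preference = 0.2 * cleaning + 0.6 * fragrance + 0.1 * mildness + rng.normal(0, 0.5, n)

# Standardise criterion and predictors, then fit by least squares.
X  = np.column_stack([cleaning, fragrance, mildness])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (preference - preference.mean()) / preference.std()

beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
for name, b in zip(["cleaning", "fragrance", "mildness"], beta):
    print(f"{name:10s} standardised beta = {b:.2f}")
```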
This lecture is about successful retailers; equally, we could speak of less successful retailers. GfK chose to research diverse structures in the electrical retail trade in order to draw a valid distinction between successful and less successful retailers. The best criterion is of course "profit". Because this aspect is difficult to define objectively, GfK used sales development as the key criterion: successful retailers showed a growth rate in sales, 1987 vs. 1986, above the average; less successful retailers showed a growth rate below the average. We managed to interview over 1,000 electrical retailers with an overall average growth rate of 8%. This criterion (growth rate) allows us to present the first figures: 39% successful retailers and 61% less successful retailers. You probably expected a 50/50 split. These first figures show that the successful retailers exceeded the average growth rate by more than the less successful retailers fell below it. We now come to the presentation of the diverse aspects that can be seen as the key factors in becoming a successful retailer.
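The 39/61 split is what one would expect from a skewed growth distribution, as the following sketch illustrates (Python, with invented growth rates; the real GfK data are not reproduced here): when the retailers above the mean sit further above it than the others sit below it, fewer than half of all retailers can be above the average.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented growth rates for ~1,000 retailers: a right-skewed distribution
# shifted so that the overall mean is roughly 8% (the figure quoted above).
growth = 8 + rng.lognormal(mean=1.6, sigma=0.8, size=1000) - np.exp(1.92)

mean_growth = growth.mean()
above = growth > mean_growth

print(f"Mean growth rate:          {mean_growth:.1f}%")
print(f"Share above the mean:      {above.mean():.0%}")   # well under 50%
print(f"Share below the mean:      {(~above).mean():.0%}")
print(f"Avg. excess above mean:    {growth[above].mean() - mean_growth:.1f} points")
print(f"Avg. shortfall below mean: {mean_growth - growth[~above].mean():.1f} points")
```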