The 'long tail' of digital media creates new challenges, not only for audience measurement but also for the design of media research databases, analytical systems and reach/frequency models. Current reach/frequency and optimization models generally work from the 'bottom up', as individual media units are selected and combined one at a time. This paper describes a 'top down' solution for out-of-home and its broader implications for increasingly fragmented media, notably digital and online, whose currencies are produced by the integration of diverse data sets.
This paper describes the use of GPS/cellular technology to record the routes and distances traveled by respondents, and therefore their opportunities to be exposed to outdoor advertising, making it possible to build reach estimates for outdoor advertising against various demographic target groups. In Canada, the Canadian Outdoor Measurement Bureau (COMB) is currently carrying out surveys using this technology. Planned as a multi-phase program, the first phase was intended to calibrate existing reach/frequency models. Future phases will be used to build a national database of driving patterns, and hence exposures to outdoor advertising, by geographic and demographic targets.
To date, most research metrics have been geared towards direct marketers. Recently, more tools have become available that allow organizations to measure the brand effectiveness of their online campaigns using relevant metrics such as brand awareness, advertising awareness, element recall and purchase intent. Armed with this knowledge, marketing organizations can better understand the ROI of their advertising initiatives. This paper examines one example of online brand advertising measurement; in this case, the direct correlation between frequency and relevant brand metrics is studied.
The author recently analyzed Nielsen//NetRatings reach, frequency and click-through data for several popular Internet domains. The purpose of the analysis was to determine how excessive frequency of exposure to banner ads affects click-through response on popular, consumer-targeted websites. This paper briefly outlines the impact that TV advertising and promotional clutter is having on the American television marketplace as a prelude to what might happen in the Internet arena. It then highlights banner wearout findings and provides strategic recommendations to help marketers improve both banner click-through rates and the potential effectiveness of their online advertising campaigns.
This paper develops NBD models and new evaluation methods for estimating the reach and frequency distribution. The models were developed to address the radio industry's requirements of 'flighting' and schedule variation from week to week. The models are demonstrated to be valid and reliable: they are empirically evaluated using a new four-week audience survey and are shown to be effective over 52 weeks. The evaluation is based on (1) demonstration schedules developed by the radio industry and by the analyst; (2) station reach; and (3) an examination of the statistical distributions. The delivery of the new models and their acceptance and impact on the radio industry are discussed.
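By way of illustration only (this is not the authors' actual implementation), the following minimal sketch shows how an NBD exposure model of the general kind described yields a reach figure and a frequency distribution, assuming the shape parameter k has already been fitted to survey data; the parameterization and the fitting step are assumptions made for the example.

    from scipy.stats import nbinom

    def nbd_reach_frequency(mean_exposures, k, max_exposures=20):
        """Reach and frequency distribution under an NBD exposure model.

        mean_exposures: average exposures per person (e.g. GRPs / 100).
        k: NBD shape (dispersion) parameter, assumed fitted to survey data.
        """
        # scipy's nbinom uses (n, p); convert from (mean, shape k)
        p = k / (k + mean_exposures)
        dist = nbinom(k, p)
        reach = 1.0 - dist.pmf(0)  # anyone with at least one exposure
        freq = [dist.pmf(i) for i in range(max_exposures + 1)]
        return reach, freq

    # e.g. a schedule averaging 3 exposures per person with k = 0.8
    reach, freq = nbd_reach_frequency(3.0, 0.8)

The single shape parameter controls how concentrated listening is: small k produces the heavy-tailed distributions typical of flighted radio schedules, while large k approaches a homogeneous (Poisson-like) audience.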
Complex theories of how advertising works in terms of response, repetition and decay have been merged and confused with the audience-delivery-based media planning concepts of reach and frequency. This paper discusses distinctions that should be made in media scheduling between the strategic theories of how advertising works (response functions, decay rates) to deliver aggregate sales effects and the tactics of media buying to determine repetition and cover criteria. While the paper explodes the myths of both 'effective frequency' and 'once is enough', it also suggests a practical, multi-dimensional framework to deal with both ideas, better linking overall brand communication strategies to media buying tactics.
The paper describes a project that has been launched in Switzerland (MUST) and is proposed for international extension. A common currency for intermedia planning across national borders has yet to be established, and since reach figures require a high degree of industry consensus (especially from media owners) before they can be compared, a new attempt is made to use frequency information instead, forming groups of heavy users within each medium. That information is already available in most syndicated media research and could be extracted at low cost. To avoid conflict with existing currencies (reach figures and ratings), an index value is proposed, presented in graphical form (the Mediagram).
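The abstract does not specify the exact definitions, but a heavy-user grouping and index of the general kind described might be computed along the following lines; the 80th-percentile cut-off, the index formula and all names here are illustrative assumptions, not the MUST project's actual method.

    import pandas as pd

    def heavy_user_index(df, freq_col, target_mask, cutoff_quantile=0.8):
        """Illustrative heavy-user index for one medium.

        df: respondent-level survey data; freq_col: usage-frequency
        column for the medium; target_mask: boolean Series marking
        the target group. Cut-off and formula are assumptions.
        """
        cutoff = df[freq_col].quantile(cutoff_quantile)
        heavy = df[freq_col] >= cutoff
        # index = 100 * (heavy-user share in target) / (share overall)
        return 100.0 * heavy[target_mask].mean() / heavy.mean()

Because such an index relies only on within-medium frequency rankings, it can be derived from existing syndicated surveys without requiring the cross-media consensus that comparable reach figures would demand.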
In the last two years the amount of published diagnostic information on effective frequency has increased considerably. Most of this work, which challenges much current established thinking, has been concerned solely with television, but in itself this concentration of effort on a competitive medium is significant for publishers. A number of key studies of this type are described briefly, together with their conclusions. The recent work by Millward Brown for IPC Magazines Ltd is also concerned with frequency, but it focuses attention on the extent to which the impact of magazine campaigns can be increased by ensuring that individual creative treatments do not become over-exposed. This work is described, along with the conclusions drawn so far. The Millward Brown work represents breakthrough and very positive research for publishers, but it is argued that it should be seen as only the start of a programme of sales-related experimental studies designed to explore the mechanics of print advertising campaigns. It is suggested that print has much to gain from such greater knowledge.
RAJAR, the new joint industry measurement system for UK radio, was launched in 1993. As in the past, the RAJAR survey uses a one-week self-completion diary, and a model is required to estimate station or schedule reach beyond seven days. Changes in the radio market have led to changes in listening habits and a consequent need to update the extended reach model. The new model is probability based and has been validated using a one-off four-week diary study. The published station reach build curves are used by several bureaux as a basis for providing the advertising industry with a practical system for schedule reach and frequency analysis. Two bureaux, IMS and Telmar, collaborated to ensure consistency in their approach to the estimation of schedule reach. However, each system offers several ways in which the base data can be used, depending essentially upon whether a schedule is being planned in broad time segments or using exact spots. These permutations are seen to generate greater inconsistencies in estimates of frequency than of reach. Without an understanding of how each bureau's system will interpret an analysis specification, buyers and sellers of airtime could well be negotiating with inconsistent and misleading reach and frequency results.
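RAJAR's actual model is not reproduced in the abstract. As a hedged illustration of the general class of probability model involved, the sketch below extends one-week diary reach to n weeks by assuming individual weekly listening probabilities follow a beta distribution, fitted here to the one-week and four-week reach figures of the kind the validation diary study would supply; the specific functional form and fitting procedure are assumptions.

    import numpy as np
    from scipy.special import betaln
    from scipy.optimize import fsolve

    def reach_n_weeks(a, b, n):
        """n-week reach if weekly listening probabilities are Beta(a, b)."""
        # P(never reached in n weeks) = E[(1-p)^n] = B(a, b+n) / B(a, b)
        return 1.0 - np.exp(betaln(a, b + n) - betaln(a, b))

    def fit_reach_build(r1, r4):
        """Fit (a, b) to reproduce observed one- and four-week reach."""
        def residuals(log_params):
            a, b = np.exp(log_params)  # keep both parameters positive
            return [reach_n_weeks(a, b, 1) - r1,
                    reach_n_weeks(a, b, 4) - r4]
        return tuple(np.exp(fsolve(residuals, [0.0, 0.0])))

    # e.g. a station with 30% one-week and 48% four-week reach
    a, b = fit_reach_build(0.30, 0.48)
    curve = [reach_n_weeks(a, b, n) for n in range(1, 9)]

With only two fitted parameters, the model pins the curve to the two observed points and interpolates/extrapolates the build in between and beyond, which is exactly where bureau-to-bureau differences in applying the base data can emerge.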
One of the most important uses of statistical analysis is to investigate the associations or relationships between different variables. Understanding these relationships is important to an investigator for several reasons: it helps in understanding the phenomenon or phenomena under investigation; it gives insights into possible causal mechanisms between the variables; and it is an important step in the construction of statistical models which relate the variables to each other, models that may in turn be used to improve the quality of predictions. The study of association in statistics falls into two broad areas: (1) correlation between two or more variables, and (2) association between two or more categories in a frequency table.
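The two broad areas map directly onto standard tests. A small self-contained illustration follows, with made-up data that carry no substantive meaning:

    import numpy as np
    from scipy import stats

    # Area 1: correlation between two continuous variables
    x = np.array([1.2, 2.3, 3.1, 4.8, 5.0, 6.7])
    y = np.array([2.1, 2.9, 3.8, 5.2, 5.1, 7.0])
    r, p_corr = stats.pearsonr(x, y)
    print(f"Pearson r = {r:.3f} (p = {p_corr:.3f})")

    # Area 2: association between categories in a frequency table
    table = np.array([[30, 10],   # e.g. exposed: bought / did not buy
                      [20, 40]])  #      unexposed
    chi2, p_assoc, dof, expected = stats.chi2_contingency(table)
    print(f"chi-square = {chi2:.2f} (p = {p_assoc:.4f})")

The correlation coefficient quantifies the strength of a linear relationship, while the chi-square statistic tests whether the cell counts depart from what independence between the row and column categories would predict.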
The purpose of this paper is to provide a practitioner's view of how TV reach and frequency estimates are developed in the United States of America. The paper is in four parts. The first part discusses the importance of reach and frequency as tools in the media planning process. One method of evaluating TV reach and frequency is the direct inquiry mode, where the planner inputs a specific buy into the national ratings panel. This approach will not work in the United States, given the bulk buying process in which most TV programs are not known in advance. The second section of the paper discusses the specific nature of a transformed linear regression approach used in the U.S.: TV schedules are produced from the ratings service's panel, and regression analyses are performed against the schedules to produce reach estimates by demographic group. This approach has the virtue of being actionable and accurate; its drawback is that reach levels can be skewed by outliers. The paper then discusses how daypart reach levels can be combined on a random duplication basis, using an approach developed by Mark Maiville that adjusts the random duplication factor according to the average of the combined reach of the two schedules. This produces highly accurate estimates, although ad hoc corrections may need to be applied in any computer-based application. The final section outlines how frequency distributions can be developed using Metheringham's variation on the Beta-Binomial. The method is accurate; however, the resulting smooth curve does not mirror the peaks and troughs of actual frequency distributions. This three-tiered approach to reach and frequency, while accurate, is inelegant and rests on several assumptions. New research services suggest the development of more unified and sophisticated approaches.
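As a hedged sketch of the second and third tiers (the Maiville adjustment itself is not documented in the abstract and is therefore omitted), random duplication combines two daypart reaches as R = R1 + R2 - R1*R2, and a Metheringham-style Beta-Binomial can be fitted in closed form from one- and two-insertion reach; the function names and the fit below are illustrative, not the production systems' code.

    import numpy as np
    from scipy.special import comb, betaln

    def combine_random_duplication(r1, r2):
        """Combine two daypart reaches (proportions 0-1) assuming the
        two audiences duplicate at random."""
        return r1 + r2 - r1 * r2

    def fit_bbd(reach1, reach2):
        """Fit Beta-Binomial parameters (a, b) from one- and
        two-insertion reach; a Metheringham-style closed-form fit."""
        q1, q2 = 1.0 - reach1, 1.0 - reach2
        rho = q2 / q1
        s = (1.0 - rho) / (rho - q1)  # s = a + b
        return reach1 * s, q1 * s

    def bbd_frequency(a, b, n):
        """P(exactly k exposures | n insertions), k = 0..n: the smooth
        curve the paper notes cannot mirror real peaks and troughs."""
        k = np.arange(n + 1)
        return comb(n, k) * np.exp(betaln(k + a, n - k + b) - betaln(a, b))

    # e.g. one-insertion reach 25%, two-insertion reach 40%
    a, b = fit_bbd(0.25, 0.40)          # gives a = 1.0, b = 3.0
    freq = bbd_frequency(a, b, 10)      # 10-insertion distribution
    reach_10 = 1.0 - freq[0]

The smoothness criticism in the abstract follows directly from this construction: a two-parameter mixing distribution can match average reach levels well while flattening the irregular exposure patterns of real schedules.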