With audience behaviour changing, and with commercial television in Finland moving to a channel of its own, it became necessary to create a system, better than the existing mathematical models, that could forecast future audience behaviour patterns. Accuracy was particularly important because, in Finland, media sales are based on a contact guarantee offered to each client for each campaign. The growing business use of target-group approaches also placed special requirements on the redesign of the forecasting system. For television companies, forecasting coming campaigns as accurately as possible is economically central. The idea was therefore to create a continually updated database of real observed data, together with a forecasting system that draws on past audience behaviour, offered to advertising agencies for planning both nationwide and regional television campaigns. The information system is operated centrally at the television company and can be used in the agencies over data communication networks. It guarantees a consistent quality of planning service for the advertiser across all offices, with campaign planning always carried out on the latest observed data. Compared with traditional formula-based models for estimating reach and frequency, the database enables calculation of cumulative reach and the frequency distribution without any mathematical formulas or modelling. Because it uses the raw data of the individual viewing file, it automatically incorporates individuals' duplication of viewing and their loyalty to the different programmes in the campaign schedule. All standard and user-defined breakdowns are available, and the individual-level database also enables automatic schedule optimisation procedures.
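A minimal sketch of the counting approach the abstract describes, with hypothetical data and names: cumulative reach and the frequency distribution are obtained directly from individual-level viewing records, with no reach formula or model involved.

```python
# Illustrative sketch (data and names hypothetical): reach and frequency
# computed by simple counting over individual viewing records, as the
# database approach described above allows.
from collections import Counter

# viewing[person_id] = set of spots in the schedule this panelist saw
viewing = {
    "p1": {"spot1", "spot2", "spot3"},
    "p2": {"spot1"},
    "p3": set(),                      # in the panel, saw nothing
    "p4": {"spot2", "spot3"},
}

def reach_and_frequency(viewing):
    """Return (reach %, {number of exposures: % of panel}) for the schedule."""
    n = len(viewing)
    exposures = Counter(len(spots) for spots in viewing.values())
    reach = 100.0 * sum(1 for spots in viewing.values() if spots) / n
    freq_dist = {k: 100.0 * c / n for k, c in sorted(exposures.items())}
    return reach, freq_dist

reach, freq = reach_and_frequency(viewing)
print(reach)   # 75.0: three of four panelists saw at least one spot
print(freq)    # {0: 25.0, 1: 25.0, 2: 25.0, 3: 25.0}
```

Because each person's actual spot set is counted, duplication of viewing between spots and programme loyalty are incorporated automatically rather than modelled.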
About a year ago, we released the Family Circle Study of Print Advertising Effectiveness. This was the first fruit of a joint effort begun two years earlier with Citicorp's POS Information Services Division, which invested approximately $200,000,000 in the development of a unique scanner-generated database, and Simmons Market Research Bureau, a principal source of magazine audience measurement data in the United States. The study used Citicorp's large household-specific database to measure differences in actual purchase behavior between two groups of demographically matched households with differing levels of exposure to magazine advertising. We have since undertaken several other projects using this database to better understand the way magazine advertising affects product sales. We are also using the database to help us refine the consumer marketing programs for our own magazines. Today, I am going to review the results of that initial study of advertising effectiveness, and to share with you the first results of a study of the effects of frequency on advertising sales.
Advertisers want accountability for their media expenditures. Findings from Information Resources Inc. (IRI) and other studies highlight that a critical variable in explaining sales is prior brand purchase. Secondly, reaching more of the brand target tends to generate sales increases. This paper reviews an analytic approach that attempts to integrate these two hypotheses by analyzing the potential effect of incremental brand reach and its contribution to sales. The method is to calculate the reach and frequency of specific brands' network TV schedules in terms of how well they reach brand users versus non-users; the reach difference between the two groups is incremental reach. Incremental reach is used to analyze how well a schedule is targeted and to estimate its contribution to sales. Targeting users directly, rather than through surrogate age and sex demographics, can lead to significant increases in incremental reach and incremental sales.
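The incremental-reach calculation described above can be sketched as follows; the panel data and names here are hypothetical, and the actual study's procedure may differ in detail.

```python
# Hedged sketch: incremental reach = schedule reach among brand users
# minus schedule reach among non-users. Data is invented for illustration.
def reach(panel, schedule):
    """% of panel members who saw at least one spot in the schedule."""
    seen = sum(1 for p in panel if p["spots_seen"] & schedule)
    return 100.0 * seen / len(panel)

panel = [
    {"user": True,  "spots_seen": {"a", "b"}},
    {"user": True,  "spots_seen": {"a"}},
    {"user": True,  "spots_seen": set()},
    {"user": False, "spots_seen": {"b"}},
    {"user": False, "spots_seen": set()},
]
schedule = {"a", "b"}

users     = [p for p in panel if p["user"]]
non_users = [p for p in panel if not p["user"]]
incremental_reach = reach(users, schedule) - reach(non_users, schedule)
print(round(incremental_reach, 1))  # 16.7: user reach 66.7% minus non-user reach 50.0%
```

A positive value indicates the schedule reaches brand users disproportionately, which is the sense in which the paper treats targeting quality.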
The basis of a reach and frequency analysis is a count of the number of commercial spots seen by each individual in an advertising schedule. When this analysis is constructed from a people-meter panel that reports on a daily basis, the obvious starting point is to extract the sample of continuous reporters: the individuals from whom a valid record of viewing was received for every day on which a spot in the schedule was transmitted. It is only for those individuals that we can construct a complete account of which spots they did or did not view. This was the standard approach within the BARB Television Audience Measurement System in the UK prior to the launch of the new service in August 1991. However, it forced users to live with potential shortfalls in data quality and certain practical difficulties:
- The continuous sample base decreases as the length of the schedule increases. A loss of 1% or 2% of the panel each day can easily compound to a loss of 10% to 20% over a four-week campaign. Sampling errors increase, and there is potential for bias in the continuous sample.
- Demographic weighting of every continuous panel to target population profiles is not a practical option, given the large number of reach and frequency analyses required on a very fast turn-around.
- Guest viewing could not be incorporated into the reach and frequency analysis because there was no such thing as a continuous panel of guest viewers. In fact, this would probably not be meaningful: in the context of reach and frequency, guest viewing is really a surrogate measurement of panel members' viewing in other (un-metered) households.
- Given the problems above, reach and frequency analyses could never be consistent with the published currency, which estimated individual spot audiences from the full daily reporting samples, using a more sophisticated calculation procedure and incorporating guest viewing.
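The compounding of daily panel losses into schedule-level attrition is simple to check; the daily loss rates below are the ones quoted in the text, and the assumption that spots air on every one of the 28 days is mine (with spots on fewer days, the loss is correspondingly smaller).

```python
# Back-of-envelope check of the attrition claim: a continuous-reporter base
# shrinks multiplicatively with each reporting day (daily rates as assumed above).
def continuous_base(daily_loss, days):
    """Fraction of the panel still reporting continuously after `days` days."""
    return (1.0 - daily_loss) ** days

for loss in (0.01, 0.02):
    remaining = continuous_base(loss, 28)   # four-week campaign, spots daily
    print(f"{loss:.0%}/day -> {1 - remaining:.1%} of the panel lost")
```

Even a 1% daily loss, compounded over 28 reporting days, removes roughly a quarter of the panel, which is why the continuous-sample approach degrades so quickly for long schedules.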
The introduction of the new BARB Television Audience Measurement Panel last year created a requirement for change and an opportunity to re-visit the issues listed above.
Reach and frequency models have been actively used in the United States for the planning and buying of radio announcements since the 1968 introduction of "Radio's New Math." Curves built in the late 1960s allow users of the medium to predict the estimated reach of a specific schedule as well as the schedule's average frequency. Radio as a medium has undergone significant changes since then. The curves that drive the models of radio reach were generated from a mid-1960s Politz study of New York radio listening. At the time of the original study, AM radio dominated, with over 90% of total radio listening. Today, FM achieves almost 70% of total listening, with AM serving primarily an informational role. The radio audience has fragmented: the average market in the United States now supports about twice the number of radio stations that existed at the time of the Politz study, and that fragmentation has brought micro-market segmentation, with many stations serving very narrow niches. Has the way people listen to radio in the United States changed since the days of the Politz study, or are they using the medium in the same way, just focusing their choice on a narrower selection of stations? Changes in the way people use radio could affect the drivers that allow accurate estimates of schedule reach. This paper revisits the work originally done by Group W and suggests minor modifications to the two algorithms which predict schedule reach. Data from a comprehensive Birch Research study of seven-day listening has been reviewed, along with an Arbitron analysis of listening in 10 Arbitron markets. This review suggests that the more widely used of the two models, the daypart combination model, changes little when compared with the original work. The individual daypart model changes significantly in only one of four dayparts.
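For readers unfamiliar with daypart-combination models: one simple way per-daypart reaches can be combined is random duplication, i.e. treating exposure across dayparts as independent. This is an illustrative assumption only; the actual Group W curves and algorithms are not reproduced in this abstract.

```python
# Illustrative sketch (not the Group W algorithm): combining per-daypart reach
# fractions under a random-duplication (independence) assumption.
def combined_reach(daypart_reaches):
    """Combined reach of several dayparts, assuming independent duplication."""
    not_reached = 1.0
    for r in daypart_reaches:
        not_reached *= (1.0 - r)
    return 1.0 - not_reached

# Three dayparts reaching 30%, 20% and 10% of the target individually:
print(round(combined_reach([0.30, 0.20, 0.10]), 3))  # 0.496
```

Real daypart-combination models replace the independence assumption with empirically fitted duplication curves, which is precisely what the Politz-era data supplied and what the paper re-examines.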
It is recommended that individuals utilizing reach models for radio modify their algorithms to reflect the updates presented in this paper.
The Belgian multi-media, multi-product survey CIM (Centre d'Information sur les Medias) interviews 10,000 people yearly on their reading, listening and viewing habits. In 1986 the CIM survey also asked questions on travel habits in urban areas. Inhabitants of these areas, as well as people living in non-urban areas, had to reconstruct on a map their journeys of the day before. On this basis, coverage and frequency figures were published for a dozen urban outdoor networks. The paper considers two points about the survey. First, it criticizes the fact that no question was asked on the extent to which journeys are repeated day by day; the lack of this information makes the audience accumulation figures dubious. Secondly, it gives the results for a network which does not actually exist, i.e. the network that could be constituted from sites on the urban stretches of motorways.
A committee was set up in Italy in 1981 - its members being the association of companies which run outdoor advertising sites, the main association of advertising agencies and the association of advertising users - with the purpose of promoting continuous research initiatives in the field of poster advertising. The outcome has been a survey called ICSA (continuous study on outdoor advertising), which is not a single specific survey but a system of research. The ICSA study has the following objectives: estimation of the poster audience, by calculating coverage and frequency of "passages" past sites in Italian towns and cities, and assessment of the effectiveness of posters as an advertising medium. To date, the ICSA study has involved surveys in 5 Italian towns and cities (with different geographical and demographic characteristics) to estimate coverage and frequency of passages past sites, and also surveys on the advertising impact of poster campaigns, conducted in 19 towns and cities across Italy.
The paper describes how the two research companies involved set about the project, the kind of results that were obtained, the working system that has been established, and suggests probable implications for the future of outdoor advertising. Integration of OSCAR (Outdoor Site Classification and Audience Research) with the Copland formula for coverage and frequency is also discussed.
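For context on the Copland formula mentioned above: it belongs to the family of saturating coverage models for poster campaigns, in which net coverage rises with gross weight but flattens as duplication grows. The functional form and constant below are illustrative assumptions; the actual constants used alongside OSCAR are not given here.

```python
# Hedged illustration of a Copland-style saturating coverage model, of the
# general form coverage = I / (I + k), where I is impacts (opportunities to
# see) per head and k a town/campaign constant. k here is hypothetical.
def copland_coverage(impacts_per_head, k):
    """Net coverage (0..1) as a saturating function of gross weight."""
    return impacts_per_head / (impacts_per_head + k)

# Coverage saturates: doubling campaign weight adds ever less extra cover.
for i in (1.0, 2.0, 4.0, 8.0):
    print(i, round(copland_coverage(i, k=1.0), 3))
```

Average frequency then follows as impacts per head divided by coverage, which is why coverage and frequency can be quoted jointly for a given campaign weight.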
This paper describes a new technology that will potentially improve specificity in targeting advertising. After a long period of little industry attention, the effective targeting of advertising has recently been given greater priority by the research industry in the U.S.A. This renewed interest is due in part to the assumption that the use of single source data (TV viewing data, purchase data, etc., obtained from a single household over time) allows the advertiser to define media target markets directly by brand/category usage instead of demographics.
One of the advantages of a panel system of media measurement is that we can examine the exposure of a whole schedule and not just individual spots or programmes. Advantage has been taken of this by several so-called 'cover and frequency guides'. These go a step further: in addition to each schedule being analysed individually, it is possible to generalise across schedules. Regularity in the results is in fact high.
This paper will demonstrate that some people do not need as heavy a frequency of exposure to television commercials as others. Viewers exposed once or twice can recognise a commercial extract - and name the brand advertised - as well as people who saw the advertisement many more times. The level of exposure needed relates to the total amount of commercial television to which the viewer is exposed.