How can the effectiveness of mixed-media campaigns be measured and, more specifically, how can the contributions of the individual media be identified? This paper suggests solutions to these problems through Strategic Communication Management. This new paradigm for advertising research changes our view of what we want to know, and of when and how to use that knowledge for running campaigns.
The paper concentrates on a new method for measuring specific-issue readership. Using an electronic version of the Through the Book method applied to various issues of some twenty magazines, specific-issue reach and cumulative reach have been investigated in the Dutch market. The paper treats two methodological issues: how the new method can be used in print audience research, and how it can be combined with the measurement of the effects of print and television.
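As an aside on what such data look like, here is a minimal sketch, not the paper's results: the respondent counts, the 15 per cent reading probability and the independence assumption are all hypothetical. It shows how specific-issue reach and cumulative net reach fall out of respondent-level Through the Book screenings:

    # Hypothetical sketch: an electronic Through the Book interview yields,
    # per respondent, a flag for each issue screened as "read". Reach
    # figures are then simple proportions. The data below are made up and
    # assume independent reading per issue, which real panels do not show.
    import numpy as np

    rng = np.random.default_rng(1)
    n_respondents, n_issues = 2_000, 4
    read = rng.random((n_respondents, n_issues)) < 0.15  # fake screenings

    issue_reach = read.mean(axis=0)                      # reach per issue
    cumulative = (read.cumsum(axis=1) > 0).mean(axis=0)  # net reach, issues 1..k
    print("specific-issue reach:", np.round(issue_reach, 3))
    print("cumulative net reach:", np.round(cumulative, 3))

Because readers overlap across issues, cumulative reach grows more slowly than the sum of the single-issue figures; that duplication is exactly what the method is designed to measure.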
Communication Management is the planning, implementation, control, evaluation and optimisation of marketing communication. Advertisers often view it as vital for getting optimal results from their marketing communication efforts. This paper describes the kind of research and the tools that are needed for communication management, and shows how the effects of a multimedia campaign are measured. A case study makes clear in what ways print media add to the effects of a multimedia campaign.
The paper describes the people meter systems used for measuring television ratings and looks at the alternative measurement systems offered by new media. The Internet is very much in the picture, but does not seem able to threaten television for many years to come. A more promising system is interactive television: television mediated by decoders or set-top boxes (STBs). For the general public this new medium will be much more important than the Internet, and for some advertisers interactive television will also be more important than 'passive' television. Results are given of a pilot study in the Netherlands. Although no random sample was available, the pilot seems to indicate that people like interactive programmes and commercials and that these generate very high response and conversion rates. But a warning is given as well: set-top boxes that deliver only one kind of service can attract only small target groups, whereas a multiple-service set-top box will have a much better chance of succeeding.
Television audience ratings are extremely important for the evaluation of television programmes, stations and commercials. It is argued that, as a result of various sample and other characteristics of the Dutch people meter panel, daily programme ratings may be highly unstable. The main characteristics affecting sample reliability are sample size, the rating level of the programme, the variables used for weighting the sample, the size of the weight factors, behavioural characteristics of the public (i.e. correlated viewing patterns of members within households) and the way ratings are used (daily ratings, aggregated ratings or differences of ratings). Several sources of empirical data have been used: Dutch people meter ratings of the programmes of one day, gross rating points aggregated per hour and per quarter, and the gross rating points of two advertising campaigns. Examples are given of the separate and simultaneous effects of these factors on the reliability of ratings for several target groups. Sample stratification is found to improve reliability only slightly, because the stratification variables correlate weakly with individual viewing patterns, whereas correlated viewing patterns within households decrease reliability dramatically. The combined results show that the daily ratings of most programmes lack the reliability needed for evaluation, and even more so when small target groups are considered. The summed or aggregated GRPs of large advertising campaigns comprise many measurements and, even though they are measured in a panel, are much more reliable. Specific examples of the reliability of programme ratings and advertising campaigns are given: one with highly stable ratings, the other with less stable ratings. Differences of ratings within a panel are much more reliable than sum scores, though difference scores are not used extensively.
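Two of these claims lend themselves to a quick numerical check. The following is a minimal simulation sketch, not the paper's method or data: the panel size, viewing probability, household-following model and habit distribution are all hypothetical. It illustrates why correlated viewing within households inflates the sampling variance of a daily rating, and why a difference of two ratings from the same panel is more reliable than ratings from fresh samples, since Var(r1 - r2) = Var(r1) + Var(r2) - 2 Cov(r1, r2) and within a panel the covariance is positive.

    # Hypothetical simulation, not the paper's data. Requires numpy.
    import numpy as np

    rng = np.random.default_rng(0)
    RUNS = 5_000   # simulated broadcast days
    N_HH = 500     # hypothetical panel of 500 metered households
    MEMBERS = 2    # two panel members per household
    P_VIEW = 0.20  # true individual rating: 20 per cent
    RHO = 0.6      # chance a member simply follows the household set

    def daily_ratings(correlated):
        # Simulate RUNS daily panel ratings of one programme.
        if correlated:
            hh_on = rng.random((RUNS, N_HH)) < P_VIEW          # household decision
            follow = rng.random((RUNS, N_HH, MEMBERS)) < RHO   # member follows it?
            solo = rng.random((RUNS, N_HH, MEMBERS)) < P_VIEW  # else acts alone
            views = np.where(follow, hh_on[..., None], solo)
        else:
            views = rng.random((RUNS, N_HH, MEMBERS)) < P_VIEW
        return views.mean(axis=(1, 2))

    for label, corr in [("independent viewers  ", False),
                        ("correlated households", True)]:
        r = daily_ratings(corr)
        print(f"{label}: mean {r.mean():.3f}, sd {r.std():.4f}")
    # The correlated panel shows a clearly larger sd: the effective sample
    # is closer to 500 households than to 1,000 individuals.

    # Difference scores: strongly habitual viewing (most people nearly
    # always or nearly never watch) makes two ratings from the SAME panel
    # positively correlated, so panel-composition error cancels in r1 - r2.
    N = 1_000
    habit = rng.beta(0.1, 0.4, size=(RUNS, N))  # mean propensity 0.20
    day1 = (rng.random((RUNS, N)) < habit).mean(axis=1)
    day2 = (rng.random((RUNS, N)) < habit).mean(axis=1)
    print(f"sd of r1 - r2, same panel  : {(day1 - day2).std():.4f}")
    print(f"sd with two fresh samples  : {np.sqrt(day1.var() + day2.var()):.4f}")

Under these toy assumptions the household correlation inflates the standard deviation of a daily rating by roughly fifteen to twenty per cent, while the within-panel difference is markedly more precise than two independent measurements, which is the argument for using difference scores more than is currently done.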