Data fusion is today becoming a more and more widely used technique: when confronted, for instance, with media/product data, it is clear that "marrying together information from different surveys" [1] is both cheaper and less demanding for the respondents to each of the two surveys. Statisticians have therefore spent considerable effort developing various fusion techniques that "transfer information from a donor to a recipient" in some way, these procedures being mainly adapted from "the well established set of statistical procedures for dealing with missing data" [2]. However, almost all authors begin by noting that "the most desirable way (...) would be to analyse a single source data base" [1]. This is why it is first necessary to assess the real value of the statistical fusion process itself: the criteria presently used by practitioners may be seen more as empirical evaluation rules than as true statistical tests. A complete evaluation of the fusion process has to be based on a clear understanding of what exactly is expected from the "synthetic file" obtained after applying a matching procedure between two initially different files.
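As a minimal illustration of the donor-to-recipient idea, the sketch below implements unconstrained nearest-neighbour hot-deck matching, one standard fusion procedure adapted from missing-data methods. The variable names, the Euclidean distance and the record layout are illustrative assumptions, not the specific procedures evaluated in the paper.

```python
import numpy as np

def fuse(recipients, donors, common_vars, donor_vars):
    """For each recipient record, find the nearest donor on the
    variables common to both surveys and copy the donor-only
    variables across, producing one "synthetic" record."""
    fused = []
    for r in recipients:
        dists = [
            np.linalg.norm(
                np.array([r[v] for v in common_vars])
                - np.array([d[v] for v in common_vars])
            )
            for d in donors
        ]
        best = donors[int(np.argmin(dists))]
        fused.append({**r, **{v: best[v] for v in donor_vars}})
    return fused

# e.g. a product survey (recipients) enriched with media variables
# from an audience survey (donors), matched on shared demographics
recipients = [{"age": 34, "income": 3}]
donors = [{"age": 30, "income": 3, "tv_hours": 12},
          {"age": 60, "income": 1, "tv_hours": 25}]
print(fuse(recipients, donors, ["age", "income"], ["tv_hours"]))
```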
When trying to set up a marketing plan for the coming twelve months, the Brand Manager usually faces two types of problem: a) how to forecast the global market evolution in the local area under consideration, and b) within this total market, what the market shares could be for all the brands, according to the marketing efforts made by each of them. The first question may be correctly answered using traditional econometric models: if the market is large enough, the sum of all the brand marketing activities may be considered as reflecting the overall economic interest in this sector of business, in comparison with other economic sectors. Moreover, the larger the market, the better the econometric forecast: these tools are designed mostly for quite large aggregates, for which the traditional econometric assumptions hold. The second question is also usually answered by econometric models; however, the assumptions underlying these models very often do not hold so well where market share forecasting is concerned. The most commonly adopted solution is to forecast the brand volume instead of the brand market share, considering the product in a purely monadic way; unfortunately, this leads most of the time to quite inappropriate results, bearing little relation to the real figures observed on the market some months later. The main reason is that the total market is driven by variables essentially different from those driving brand market shares: briefly stated, the total market is driven by macro-economic variables, while the market shares depend almost entirely on marketing activities. Being mainly a marketing tool, the NIELSEN Single-Source approach has to deal with brand market share forecasting.
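To make the two-stage logic concrete, here is a minimal sketch assuming a hypothetical attraction (share) model layered on an externally supplied macro-level total; the effort figures and the elasticity beta are invented for illustration and do not represent the NIELSEN models.

```python
import numpy as np

# Hypothetical marketing-effort index per brand (advertising,
# promotion and distribution pressure combined) and assumed elasticity
effort = np.array([1.8, 1.2, 0.7, 0.3])
beta = 0.6

# Marketing activities drive the shares...
attraction = effort ** beta
shares = attraction / attraction.sum()   # shares sum to 1 by construction

# ...while the total market comes from a macro-economic forecast
total_market = 120_000                   # units, from an econometric model
brand_volumes = shares * total_market
print(shares.round(3), brand_volumes.round(0))
```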
The measurement of "Brand Equity" is more and more requested by Manufacturers: how can the "value" of a Brand be evaluated at a fair price? The present paper focuses on the consumer side only, the main idea being that the "value" of a Brand must be related to consumer behaviour, its financial aspect being only the result of such behaviour. Mr R.A. Schmitz (Unilever USA) mentioned in a previous paper that the "brand loyal" category of users represents the major part of the profit generated by a Brand: this means that the "value" of a Brand is in some way related to the old concept of "brand loyalty". However, brand loyalty as obtained from questionnaires administered to consumers is mostly considered an "attitude" rather than a "behaviour": it is not yet an operational concept capable of providing a numerical "value" for a given Brand. The most important objective of this paper is to suggest a new way of measuring "brand loyalty" based on behaviour, thereby providing Manufacturers with a long-term operational concept derived from scanning records; the main goal of the attitudinal part is then to tell Manufacturers how this brand loyalty may be influenced by marketing actions.
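As an indication of how a behavioural loyalty figure can be derived from scanning records, the sketch below computes each household's "share of requirements" for a brand, a common behavioural loyalty measure; the record layout is an assumption, and the paper's actual measure may differ.

```python
from collections import defaultdict

def share_of_requirements(purchases, brand):
    """For each household, the brand's share of that household's
    total category purchases over the observation period."""
    brand_qty, total_qty = defaultdict(float), defaultdict(float)
    for household, b, units in purchases:
        total_qty[household] += units
        if b == brand:
            brand_qty[household] += units
    return {hh: brand_qty[hh] / total_qty[hh] for hh in total_qty}

# (household, brand, units) records from scanning data
purchases = [("hh1", "A", 3), ("hh1", "B", 1), ("hh2", "A", 4)]
print(share_of_requirements(purchases, "A"))  # {'hh1': 0.75, 'hh2': 1.0}
```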
Many people talk about Single Source, most of the time without giving a precise definition of what is to be understood under this concept. This paper attempts to clarify the question by distinguishing between all the kinds of "Single Source" mentioned here and there, and by giving a more complete description of what NIELSEN is presently thinking about.
Scan 5000 is the first service permanently tracking consumer purchases together with causal data related to the stores where consumers usually buy. In fact, only 6% of households visit large-surface stores just occasionally. On average, a French household buys its grocery products in three stores. The concept of the service is to record permanently each individual purchase made by 5,000 households in ten hypermarkets and supermarkets, for all food and drug items. This information is associated with permanent in-store observation of the offer, such as prices, brand range, shelf-space measurement, special displays and promotions. The system provides the facility to link the offer situation in the store with the consumer reaction. The prerequisites for running regular operations are: daily detailed information (consumer and in-store), minimum constraints for our household panelists, a representative assortment, and a scanning system with EAN codes. In Europe, only testing tools using nearly similar concepts have been launched (Erim and Scannel in France, IRI-GfK in Germany). The ERIM experience, combined with the Scantrack experience of Nielsen US, helped Nielsen France launch Scan 5000 in February 1986. This was the first step towards a more powerful service combining scanning data, media research data and consumer data.
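A minimal sketch of the linkage idea, assuming hypothetical record layouts (the field names and keys are invented; the actual Scan 5000 files are not described at this level of detail): each household purchase is joined to the in-store causal observation for the same store, item and week.

```python
# One purchase record per basket line, one causal observation
# per (store, EAN code, week) -- both layouts are assumptions
purchases = [
    {"household": 101, "store": "S01", "ean": "3017620422003",
     "week": 8612, "units": 2},
]
causal = {
    ("S01", "3017620422003", 8612):
        {"price": 9.90, "display": True, "facings": 4},
}

# Link the offer situation in the store to the consumer reaction
linked = [{**p, **causal[(p["store"], p["ean"], p["week"])]}
          for p in purchases]
print(linked[0]["units"], linked[0]["price"], linked[0]["display"])
```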
In France, new product sales forecasting has progressively left the area of measurements made in a real environment in favour of simulated tests: those tests are usually cheaper, shorter and much more confidential. The main purpose of this paper is to present empirical evidence that: - some of the underlying assumptions made a few years ago by the model-builders need to be reviewed today; - measurements made in real conditions cannot be definitively ignored.
Estimating the sales volume potential of new, innovative products is generally quite difficult. Traditional comparative measurement methods, which present potential consumers with some competitive products alongside the new, unique product, are obviously not appropriate. This means that share measurement for new innovative products is either inadequate or often impossible, since one cannot properly answer the "share of what?" question. Even for products not totally innovative but which have the potential of expanding category volume significantly, the usefulness of a share method is still questionable. Thus, prior to test market, the only reasonable way to evaluate a new innovation is to test it monadically with all potential consumers and obtain their purchase interest. In the following sections, the test methodology and the estimating procedure used to gauge the sales volume potential of new innovative products are described, with case examples.
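As a rough illustration of how monadic purchase-interest answers can be turned into a trial estimate, the sketch below applies discount weights to a five-point intent scale; the counts and weights are invented for illustration, the actual calibration being an empirical matter not given in the paper.

```python
# Answers of 1,000 respondents on a 5-point purchase-interest scale
intent_counts = {"definitely": 120, "probably": 260, "might": 310,
                 "probably_not": 190, "definitely_not": 120}

# Assumed discount weights: stated intent overclaims actual trial
weights = {"definitely": 0.75, "probably": 0.25, "might": 0.05,
           "probably_not": 0.0, "definitely_not": 0.0}

n = sum(intent_counts.values())
trial_rate = sum(weights[k] * c for k, c in intent_counts.items()) / n

target_households = 5_000_000
estimated_triers = trial_rate * target_households
# trial_rate is about 0.17, i.e. roughly 850,000 trier households
print(round(trial_rate, 3), int(estimated_triers))
```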
In this paper, the authors describe a method which has been tested over many years. It offers a means of testing the sales effect of price decisions before the new price has been released on the market; marketing companies can thus evaluate pricing decisions without revealing their intent to the competition. To date, most "laboratory" test methods rely on respondents who are only shown the brand as a concept; this paper stresses the importance of collecting product-in-use data related to different price levels. Often it is the in-use evaluation of the brand which yields the most reliable estimate of price elasticity.
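For readers unfamiliar with the output of such a test, here is a minimal sketch of an arc (midpoint) elasticity computed between two tested price levels; the figures are invented, and the paper's actual estimation procedure is not reproduced here.

```python
def arc_elasticity(q1, q2, p1, p2):
    """Arc (midpoint) price elasticity between two price cells:
    percentage change in volume over percentage change in price."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# e.g. simulated purchase volume falls from 420 to 350 units when
# the tested price rises from 8.50 to 9.50 (hypothetical figures)
print(round(arc_elasticity(420, 350, 8.50, 9.50), 2))  # -1.64
```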
This paper shows that a standardised concept/in-home use test, if properly designed, executed and interpreted, can provide a very accurate data base for forecasting new product sales volume and its sales components. To reduce the risk of introducing a product failure into the marketplace, the application of an early and accurate New Product Sales Forecasting Model is essential for many major manufacturing companies in the U.S. and Western Europe. This paper describes, in detail, the test procedure and estimating process for a "BASES II" test (a standardised concept/in-home use test) and presents some recent actual case validation results on Year One Trial Rate, Repeat Rate and Consumer Sales Volume Estimates. These are compared with Actual Test Market or National Launch Sales Volume Results. This recent data comes from France, Italy and the United States. Validation results from other European countries (the United Kingdom, Germany and Holland) are also included.
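As background, here is a minimal sketch of the standard trial/repeat decomposition that underlies Year One consumer volume estimates of this kind; all figures are invented and are not BASES calibration values.

```python
# Hypothetical inputs for a Year One volume decomposition
households           = 20_000_000  # target-market households
trial_rate           = 0.12        # fraction of households trying
units_per_trial      = 1.0
repeat_rate          = 0.40        # fraction of triers who repeat
repeats_per_repeater = 2.5         # average repeat purchase occasions
units_per_repeat     = 1.2

trial_volume  = households * trial_rate * units_per_trial
repeat_volume = (households * trial_rate * repeat_rate
                 * repeats_per_repeater * units_per_repeat)
year_one_volume = trial_volume + repeat_volume
print(round(year_one_volume))  # about 5,280,000 units
```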