This paper discusses the relevance of brand image measurement to the current marketing role, and the importance of perfecting suitable measures and analysis procedures for different markets. It is argued that many of the conventional approaches can be improved; in particular, by concentrating more on analysis of individual data sets, together with linking imagery to preference information. Following a review of mainstream current approaches, an account is given of relatively new analytic procedures - such as the PILOT and LOCATOR models - which adopt micro-modelling principles in the treatment of classical brand image data to generate a linkage between imagery and preference. A case example is given to support this. Finally, the paper discusses some of the experience gained over the last decade, and makes some suggestions for the future.
The need exists to bring the individual back into the forefront of research, whether qualitative or quantitative. It is predicted that in the '90s, the mean score and the group discussion will decline in prevalence, and disaggregated data will rule. In the case of qualitative research, therefore, the '90s should see a greater emphasis on individual depth interviews, reflecting a closer rapprochement with psychotherapy than in recent years. When groups do come into consideration, the first step will be to consider the possible relevance of family groups, which, after all, are the primary group in society, rather than peer groups. Turning to quantitative research, the '90s will find researchers showing increased interest in techniques that expose differences between individuals rather than those which summarise. This will manifest itself in the avoidance of summary statistics, the 'personalising' of the interview, the application of vertical analysis, and the micro-modelling of responses. Overall, whether through qualitative depth interviews or quantitative micro-modelling, in the age of the individual it is this individual who should shine through in the development of market research in the '90s.
This paper has attempted to demonstrate the benefits of integrating two quite distinct modelling processes, each based on a micro-modelling philosophy. In particular, it has shown how early quasi test-market volume predictions (such as those provided by a MicroTest type analysis) can be modelled at a variety of prices in addition to the core price contained within the concept proposition mix, and how valuable this can be in identifying optimal pricing points. At the same time, the use of MicroTest type trial and volume data can make traditional Brand/Price Trade-Off modelling considerably more sensitive.
This paper concentrates on the Brand/Price Trade-Off (BPTO) modelling technique: how, in the context of growing interest in price testing methods, it evolved from earlier ad-hoc pricing work, and some of the problems that it helped to overcome. Since it was first used in the early 1970s, the method has become increasingly sophisticated, and has been adapted (largely by the author and his colleagues) to an expanding repertoire of market types, representing differing purchasing decisions by consumers. In addition to an account of the historical development of the approach, the paper also provides a description of the 'state of the art' in terms of data collection and analysis. The paper continues by considering different types of purchasing decision in differing markets, and how the model is adapted for use in these. Finally, the paper concludes by reviewing the strengths and weaknesses of the method compared with others commonly used, and provides some suggestions for future development.
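The elicitation loop commonly associated with BPTO can be sketched as follows: all brands are shown at the lowest price point; after each choice, the chosen brand's price moves one rung up the ladder, and the task ends when the preferred brand is already at the top price. The sketch below is illustrative only, using a hypothetical deterministic respondent whose utility is a fixed brand value minus the current price; the brand names, values, and price ladder are invented, and the papers' own procedure may differ in detail.

```python
# Illustrative sketch of a BPTO-style elicitation loop (hypothetical
# respondent model: utility = fixed brand value - current price).

def run_bpto(brand_values, price_ladder):
    """Run one simulated BPTO interview.

    All brands start at the lowest price; after each choice, the chosen
    brand moves one rung up the price ladder. The task stops when the
    preferred brand is already at the top price.
    """
    rung = {b: 0 for b in brand_values}   # current ladder position per brand
    choices = []
    while True:
        # The simulated respondent picks the brand with the highest
        # (brand value - current price) utility.
        chosen = max(brand_values,
                     key=lambda b: brand_values[b] - price_ladder[rung[b]])
        choices.append((chosen, price_ladder[rung[chosen]]))
        if rung[chosen] == len(price_ladder) - 1:
            break                          # top of the ladder: stop
        rung[chosen] += 1                  # raise the chosen brand's price
    return choices

# Hypothetical example: two brands, five price points.
values = {"A": 10.0, "B": 8.8}
ladder = [1.0, 1.5, 2.0, 2.5, 3.0]
history = run_bpto(values, ladder)
```

The sequence of brand/price choices produced by such a task is the raw material for the trade-off analysis: the price at which a respondent first switches away from a brand bounds their willingness to pay for it relative to the alternatives.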
In this paper, following a review of the historical development of research methods for predicting volume sales and brand shares of new products, a new model (MicroTest) is described which uses information gathered in a concept/product test for volume prediction. The model makes use of brand-related parameters (such as advertising and distribution), attitudinal predispositions (e.g. 'experimentalism'), and circumstantial factors as input, and these are described, together with the method of integrating them for prediction at the individual respondent level. Individual results are then accumulated across a sample of individuals, and grossed up to provide national sales estimates. The paper describes the various development stages undergone in the construction of the model, and the techniques used to assist this process. In particular, the way in which Artificial Intelligence techniques such as 'rule induction' were used is discussed. Finally, the paper discusses the way in which the basic model may be extended, and some recent work which used the model to generate a measure of cumulative penetration.
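The micro-modelling logic described above (a prediction formed per individual, accumulated across the sample, then grossed up) can be sketched schematically. This is a minimal illustration only: the multiplicative weighting scheme, parameter names, and values below are hypothetical and do not reproduce the actual MicroTest formulation.

```python
# Minimal sketch of individual-level ("micro-model") volume prediction:
# form a trial probability per respondent, accumulate across the sample,
# and gross up to the population. Weighting scheme is hypothetical.

def individual_trial_prob(intention, awareness, distribution, experimentalism):
    """Combine one respondent's stated intention with brand-level factors
    (advertising-driven awareness, distribution) and an attitudinal
    predisposition ('experimentalism') into a trial probability."""
    return intention * awareness * distribution * experimentalism

def predict_volume(respondents, population, units_per_trial=1.0):
    """Accumulate individual trial probabilities across the sample and
    gross up to a national volume estimate."""
    total = sum(individual_trial_prob(**r) for r in respondents)
    mean_prob = total / len(respondents)
    return mean_prob * population * units_per_trial

# Hypothetical sample of three respondents.
sample = [
    {"intention": 0.8, "awareness": 0.6, "distribution": 0.7, "experimentalism": 1.0},
    {"intention": 0.3, "awareness": 0.6, "distribution": 0.7, "experimentalism": 0.5},
    {"intention": 0.0, "awareness": 0.6, "distribution": 0.7, "experimentalism": 0.8},
]
volume = predict_volume(sample, population=1_000_000)
```

The point of the structure, as the abstract notes, is that prediction happens at the respondent level before any aggregation, so individual differences in predisposition and circumstance feed directly into the national estimate rather than being averaged away first.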