Our industry has embraced MROCs as a new tool. Whereas most researchers still celebrate the richness this phenomenon offers, others are already pushing the "cost-efficiency" button to squeeze in as many projects as possible. We have already seen MROCs where this scenario became reality: response wears out and members drop out faster than new members can be recruited. In this way we exploit communities the same way we exploited telephone interviews and online access panels. A promising new methodology becomes exhausted before it even gets the chance to shine. This session presents a new approach to recruitment and differentiated moderation with the 5F-model, showing how communities can generate more interaction and more insights with less investment. Just by pushing other buttons we create a sustainable future for MROCs.
Online access panels have enabled cheap survey research to proliferate in the past decade. However, the results obtained from these convenience panels can be biased and unrepresentative due to the sampling methods employed. Sample matching against a smaller, but statistically representative panel has been suggested as a means to reduce sample bias and to enable a statistically representative sample to be selected. This presentation examines the extent to which bias can be reduced using this approach, and the relevant factors that must be taken into account.
Sampling is the Achilles' heel of the online research world. While we remain reliant on access panels that are essentially fronts for a relatively limited set of online properties and sources, we can never claim scientific rigour and true representativeness. At the same time, there is great pressure on the panel industry to continue to provide 'warm bodies' for research at ever decreasing prices whilst simultaneously facing ever increasing costs. The panel model as it exists today is unsustainable for many reasons, and this paper explores what comes next.
Over the past few years, online research has become an invaluable tool in the toolkit of researchers across the world. By utilising online access panels, researchers have been able to conduct research faster than ever before. They have been able to rely on the masses of consumers belonging to online access panels and have been able to reach audiences that have been increasingly difficult to reach via traditional methodologies (e.g. the younger generation of 2.0 social networkers). Online access panels in particular have given researchers the possibility of conducting truly consistent, multi-national surveys across multiple continents at the touch of a button. However, with new tools come new challenges. With international work so far-reaching and so accessible, it is important to bear in mind cultural, national and online differences when utilising such online methodologies. Knowing the potential pitfalls with fielding an international survey can be the difference between success and failure. This paper looks at the wider considerations of using online access panels internationally, plus some of the considerations for both preparing and running actual surveys. In addition, it will examine the learnings from a recently conducted survey on the Olympic Games (The Olympic Interest Survey). This survey examined the ways in which consumers plan to use the internet leading up to and during the 2008 Summer Olympic Games in Beijing.
This paper sheds first light on the question of whether the recruitment method of access panel members, as well as duplication (i.e. respondents belonging to multiple panels), really makes a difference to the quality of the research data obtained. A first major implication of this study is that we need not be too concerned about people joining multiple panels, as their data quality is not inferior to that of people who have joined only one panel. A second major takeaway from our results is that multi-method recruitment is not a necessary condition for building a good quality online panel.
It has long been an unsolved problem of online market research conducted by means of access panel sample surveys that there has been no way to verify whether a participant in a survey is really the person who was invited to take part by an e-mail message addressed to him. B2B target groups, for example IT decision makers or doctors, illustrate the problem: there has long been no practical way of verifying whether they fill out an online questionnaire themselves or forward it, for example, to a secretary or a receptionist to answer (perhaps in order to claim the incentive themselves). Even in consumer research, however, it is important that, for example, draft advertisements intended for female target audiences are evaluated by the women panelists themselves and not, for example, by their male partners. Of course it would be possible to verify by telephone whether an invitee had actually filled out the online questionnaire, but this is very seldom done in research practice. Mainly on account of cost, such checks would have to be restricted to random samples. In addition, there would be a media disruption between online polling and telephonic field checks. Many people are willing to be interviewed online precisely because they appreciate the personal advantages of this type of polling (e.g. choosing the time of participation themselves and having no contact with an interviewer), and telephonic contact for the purpose of field checking would stand in direct opposition to those advantages. Telephonic checks would thereby be perceived as a burden and an annoyance, and willingness to participate in panels and interviews would probably decline. Furthermore, with telephonic participant authentication, the time and cost advantages of online research would be at least partially lost. Validation would also have to take into consideration whether those polled answered truthfully.
The same applies to a theoretically conceivable personal field check: in the case of a survey conducted in writing or online, the initial problem of ensuring or verifying that the intended target person fills out the questionnaire himself would still not be solved.
A typology of panel conditioning effects and an empirical study examining the phenomenon are presented. The experiment on panel conditioning using a typical tracking study questionnaire shows that high frequency repeat interviews lead to a small degree of conditioning but low frequency repeat interviews seem to have no or very marginal effects. The study indicates the need for Panel Management Rule Books on repeat interviewing to be empirically based.
The attitudinal classification studied in this research shows a strong predictive validity in the segmented world of subscription channels. This predictive validity was assessed within the nationwide electronic audience panel in Mexico, which makes a case for the practical use of attitudinal ratings in advertising, media planning, and content management. Findings are discussed in the context of important contributions from the academic fields of psychology and mass communication.