Quest has received significant interest from the greater MR community, and we have continued our investigation. Come see what we've found most recently as we dig further into what happens with "longer" surveys!

Key Takeaways:
- Increasing length of interview can have measurable effects on survey results, decreasing engagement and reliability in specific ways.
- Understand how Quest's first and second phases of research lead to some convincing arguments about open-end questions over 3 waves of data.
- Watch for our continued exploration of this universal survey topic, so we can arm researchers with data on pain points like drop-offs and overall disengagement.

Topic(s) Covered:
- Data quality
- Questionnaire design + panelist engagement
- Quantitative methodology
What does it mean for a sample to be representative? How can you tell if your sample is biased, and if so, what can you do about it?

In this webinar, Andrew Mercer of the Pew Research Center will present the results of the Center's research into data quality in online survey samples and the factors that lead to bias in survey estimates. You will learn about different methods for adjusting samples to reduce bias and the kinds of benchmarks that are needed for these methods to succeed. Andrew will also cover strategies that can be used when no reliable benchmarks exist.
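One common family of adjustment methods weights the sample toward external benchmarks. As a rough illustration, not taken from the webinar itself, the sketch below applies simple post-stratification weighting against hypothetical census-style benchmarks; the variable names, group shares, and outcome values are all assumptions for demonstration only.

import pandas as pd

# Hypothetical online sample: each row is one respondent.
sample = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-64", "65+", "35-64", "18-34"],
    "vote_intent": [1, 0, 1, 1, 0, 1],
})

# Assumed population benchmarks (e.g., from census data) for the same groups.
benchmarks = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

# Post-stratification: weight each respondent so the weighted sample
# matches the benchmark distribution of age groups.
sample_shares = sample["age_group"].value_counts(normalize=True)
sample["weight"] = sample["age_group"].map(lambda g: benchmarks[g] / sample_shares[g])

# Compare the unweighted and weighted estimates of the survey outcome.
unweighted = sample["vote_intent"].mean()
weighted = (sample["vote_intent"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"unweighted: {unweighted:.2f}, weighted: {weighted:.2f}")

The same idea extends to raking across several benchmark variables at once; the quality of the result depends entirely on how reliable the benchmarks are, which is the webinar's central question.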
Over the last several years, a great deal has changed in terms of the types of data available for analysis: the sources of such data, the ways in which researchers acquire and analyze it, the technologies used, the industry players, and the regulatory environment, to name a few.

In this webinar you will learn how to shape social intelligence by integrating unsolicited opinion (posts from social media in multiple languages, annotated with brands, sentiment, emotions, and topics) with solicited opinion (surveys and focus groups where questions are asked). A common theme will be the challenge of working with unstructured data, that is, text, images, audio, and video.
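As a loose illustration of what annotating unsolicited opinion can look like, and not the presenters' actual pipeline, the sketch below tags hypothetical social posts with a brand mention and a crude lexicon-based sentiment label; the brand list, lexicon, and example posts are all assumptions, and real pipelines would use trained multilingual models rather than keyword lists.

# Minimal sketch: annotate social posts with brand and sentiment.
BRANDS = {"acme", "globex"}            # hypothetical brand list
POSITIVE = {"love", "great", "excellent"}
NEGATIVE = {"hate", "terrible", "broken"}

def annotate(post: str) -> dict:
    tokens = set(post.lower().split())
    score = len(tokens & POSITIVE) - len(tokens & NEGATIVE)
    return {
        "text": post,
        "brands": sorted(tokens & BRANDS),
        "sentiment": "positive" if score > 0 else "negative" if score < 0 else "neutral",
    }

posts = ["I love my new Acme phone", "Globex support is terrible"]
for record in map(annotate, posts):
    print(record)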
We are now less than 100 days before U.S. voters go to the polls to decide whether to stick with Donald Trump for four more years or make a change and elect Joe Biden. In this webinar, five North American pollsters will share their views on where the race stands, what the central issues appear to be, and how it's all likely to turn out.
This presentation details the exploration of various machine learning algorithms and models, tested on different market research studies collected using face-to-face (F2F) data collection in East African markets. Supervised machine learning techniques such as Decision Tree, Random Forest, Gradient Boosting Machine (GBM), and Deep Learning, as well as unsupervised techniques such as K-means clustering and Isolation Forest, have been explored. The results are very promising and show great potential for bringing such AI-based techniques into mainstream data quality control and quality assurance, and for addressing some of the key challenges of F2F data collection, which remains prominent in emerging markets such as (East) Africa.
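To make the unsupervised side of this concrete, the sketch below shows one way an Isolation Forest could flag unusual F2F interviews for back-checking. It is a minimal illustration, not the presenters' method: the per-interview features, their distributions, and the contamination rate are all assumed for the example.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-interview features from F2F fieldwork: interview duration
# in minutes, share of "don't know" answers, and straight-lining rate.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[35, 0.05, 0.10], scale=[5, 0.02, 0.05], size=(200, 3))
suspicious = rng.normal(loc=[8, 0.40, 0.80], scale=[2, 0.05, 0.05], size=(5, 3))
X = np.vstack([normal, suspicious])

# Unsupervised anomaly detection: interviews that look unlike the bulk of
# the data are flagged for review rather than automatically removed.
model = IsolationForest(contamination=0.03, random_state=0)
labels = model.fit_predict(X)   # -1 = flagged as anomalous, 1 = normal
flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} interviews flagged for review:", flagged)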
This is the story of a collaboration of ESOMAR Foundation/Paragon Partnership, BBC Media, Big Sofa, and experienced qualitative researchers from all over the world, coming together to solve an important industry issue - the quality of qualitative research, globally.
Find out from the latest global study where government limits on polling threaten researchers' ability to do their jobs.
This paper reviews the key ethical, legal, technical and data quality challenges researchers face when working with these new data sources. Its goal is to start a conversation among researchers aimed at clarifying their responsibilities to those whose data they use in research, the clients they serve, and the general public. It uses the term secondary data to mean data collected for another purpose and subsequently used in research. It expands on the traditional definition of secondary data to account for new types and sources of data made possible by new technologies and the Internet. It is used here in place of the popular but often vague term big data, and is meant to include data from various sources, such as transactions generated when people interact with a business or government agency, postings to social media networks, and the Internet of Things (IoT). It is distinct from primary data, meaning data collected by a researcher from or about an individual for the purpose of research.
With every quantitative market research study, data quality is critical. Researchers can be confident that their quality checks are effective once those checks have been validated as steps that reliably identify and eliminate dishonest or inattentive participants in online surveys. In this workshop, Research Now and FactWorks will present the results of their maximum difference scaling (MaxDiff) approach in a multi-country comparison, giving practical recommendations on how you can use quality checks to eliminate participants and answers that may distort the integrity of your results.
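For readers new to this kind of cleaning, the sketch below shows two checks commonly applied to online survey data, speeding and straight-lining; it is a generic illustration, not the Research Now/FactWorks criteria, and the column names, example values, and thresholds are all assumptions.

import pandas as pd

# Hypothetical respondent-level data: completion time in seconds and the
# answers to a 5-item grid question.
df = pd.DataFrame({
    "resp_id": [1, 2, 3, 4],
    "duration_sec": [620, 95, 540, 480],
    "grid": [[2, 4, 3, 5, 1], [3, 3, 3, 3, 3], [1, 2, 2, 4, 5], [5, 4, 4, 3, 2]],
})

median_duration = df["duration_sec"].median()

# Speeder check: completing in under a third of the median time.
df["speeder"] = df["duration_sec"] < median_duration / 3

# Straight-lining check: identical answers across every grid item.
df["straightliner"] = df["grid"].apply(lambda answers: len(set(answers)) == 1)

# Respondents failing any check are flagged for removal or manual review.
df["flagged"] = df["speeder"] | df["straightliner"]
print(df[["resp_id", "speeder", "straightliner", "flagged"]])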