Over the last several years, a great deal has changed in the types of data available for analysis, the sources of such data, the ways in which researchers acquire and analyze it, the technologies used, the industry players, and the regulatory environment, to name a few. In this webinar you will learn how to shape social intelligence by integrating unsolicited opinion (posts from social media in multiple languages, annotated with brands, sentiment, emotions and topics) with solicited opinion (surveys and focus groups, where questions are asked). A common theme will be the challenge of working with unstructured data, that is, text, images, audio, and video.
We are now less than 100 days from U.S. voters going to the polls to decide whether to stick with Donald Trump for four more years or make a change and elect Joe Biden. In this webinar, five North American pollsters will share their views on where the race stands, what the central issues seem to be, and how it is all likely to turn out.
This presentation details the exploration of various machine learning algorithms and models, tested on market research studies collected via face-to-face (F2F) fieldwork in East African markets. Supervised machine learning techniques such as Decision Tree, Random Forest, Gradient Boosting Machine (GBM) and Deep Learning, and unsupervised techniques such as K-means clustering and Isolation Forest, are explored. The results are very promising and show great potential for bringing such AI-based techniques into mainstream data quality control and quality assurance, and for addressing some of the key challenges of F2F data collection, which remains prominent in emerging markets such as (East) Africa.
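As a hedged illustration of the unsupervised side of this approach (the presentation's actual features, data and parameters are not reproduced here), the sketch below uses scikit-learn's IsolationForest to flag anomalous interviews for back-checking. The interview-level features (duration, straightlining rate, non-response rate), the simulated data and the contamination setting are all assumptions chosen for demonstration only.

```python
# Minimal sketch, assuming hypothetical interview-level features;
# not the presenters' validated model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated interview-level features:
# [duration_minutes, straightline_rate, nonresponse_rate]
normal = rng.normal([25.0, 0.10, 0.02], [5.0, 0.05, 0.01], size=(500, 3))
fabricated = rng.normal([6.0, 0.60, 0.00], [2.0, 0.10, 0.01], size=(15, 3))
interviews = np.vstack([normal, fabricated])

# Unsupervised anomaly detection: no labelled "bad" interviews are
# required to fit the model.
model = IsolationForest(contamination=0.03, random_state=42)
flags = model.fit_predict(interviews)  # -1 = anomalous, 1 = normal

suspects = np.where(flags == -1)[0]
print(f"{len(suspects)} interviews flagged for manual back-checking")
```

The appeal of an unsupervised technique here is that fabricated or careless interviews need not be labelled in advance; the model simply surfaces outliers for human review.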
This is the story of a collaboration among the ESOMAR Foundation/Paragon Partnership, BBC Media, Big Sofa, and experienced qualitative researchers from all over the world, coming together to address an important industry issue: the quality of qualitative research globally.
Find out from the latest global study where government limits on polling threaten researchers' ability to do their jobs.
This paper reviews the key ethical, legal, technical and data quality challenges researchers face when working with these new data sources. Its goal is to start a conversation among researchers aimed at clarifying their responsibilities to those whose data they use in research, the clients they serve, and the general public. It uses the term secondary data to mean data collected for another purpose and subsequently used in research. It expands on the traditional definition of secondary data to account for new types and sources of data made possible by new technologies and the Internet. The term is used here in place of the popular but often vague term big data, and is meant to include data from various sources, such as transactions generated when people interact with a business or government agency, postings to social media networks, and the Internet of Things (IoT). It is distinct from primary data, meaning data collected by a researcher from or about an individual for the purpose of research.
With every quantitative market research study, data quality is critical. By validating the individual steps that identify and eliminate dishonest or inattentive participants in online surveys, researchers can be confident that their quality checks are effective. In this workshop, Research Now and FactWorks will present the results of their maximum difference scaling approach in a multi-country comparison, offering practical recommendations on how to use quality checks to eliminate participants and answers that may distort the integrity of your results.
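As a minimal sketch of the kind of checks discussed (not the presenters' validated procedure), the snippet below flags speeders and straightliners in a respondent-level data frame. The column names, grid layout and thresholds are illustrative assumptions, not values from the workshop.

```python
# Minimal sketch, assuming a hypothetical "duration_sec" column and a set of
# grid rating columns; thresholds are common rules of thumb, not validated.
import pandas as pd

def flag_suspect_respondents(df: pd.DataFrame, grid_cols: list,
                             median_factor: float = 0.48,
                             straightline_share: float = 0.95) -> pd.DataFrame:
    """Flag respondents who finish far faster than the median interview,
    or whose answers across a grid of rating items are nearly identical."""
    out = df.copy()
    # Speeder check: completion time below a fraction of the median duration.
    out["speeder"] = out["duration_sec"] < median_factor * out["duration_sec"].median()
    # Straightliner check: share of grid answers equal to the row-wise mode.
    grid = out[grid_cols]
    mode_share = grid.eq(grid.mode(axis=1)[0], axis=0).mean(axis=1)
    out["straightliner"] = mode_share >= straightline_share
    out["suspect"] = out["speeder"] | out["straightliner"]
    return out
```

In practice, checks like these are typically combined with trap questions and cross-country calibration before any respondent is excluded.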
Strategies for balancing expanding survey length against the need for concise, relevant and engaging surveys are explored in this paper. It reviews innovative ways to shorten surveys without compromising the business insights that can be unearthed and accurately researched from online surveys. The overall goal is to explore how adapting survey research improves rather than complicates the lives of both researchers and research participants. Where surveys cannot be shortened, survey modularisation is a proven approach for delivering a complete, representative data set, achieving accuracy and data consistency confidently and efficiently at scale.
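A hedged sketch of the modularisation idea follows; the module names, counts and seeding scheme are hypothetical, not the paper's design. Each respondent answers a short core plus a random subset of the remaining modules, and the unasked blocks are later imputed or fused to rebuild a complete data set.

```python
# Minimal sketch, assuming hypothetical module names and a per-respondent
# quota of two optional modules on top of a shared core.
import random

CORE = "core_demographics"
MODULES = ["brand_funnel", "usage_attitudes", "ad_recall", "pricing", "segmentation"]

def assign_modules(respondent_id: str, per_respondent: int = 2) -> list:
    """Assign the short core module plus a reproducible random subset of
    the remaining modules; unasked modules are handled at analysis time."""
    rng = random.Random(respondent_id)  # seed on the ID for reproducibility
    return [CORE] + rng.sample(MODULES, k=per_respondent)

for rid in ("r001", "r002", "r003"):
    print(rid, assign_modules(rid))
```

Seeding on the respondent ID keeps assignments stable across sessions, so a respondent who resumes a survey sees the same modules.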