Out-of-Home (OoH) media has always struggled with measurement. Although great effort, resources, money and time have been spent in many countries, the complexity of the medium - due to its geographical dispersion, volume and granularity - has been a headache for most researchers. OoH has been difficult to measure because it is by nature spread across a country, with each location performing differently, so that averages or simple models do not suffice to show the true potential of the medium. Methods for measuring trips past OoH locations have evolved from paper questionnaires to telephone interviews, computer-assisted and tablet-assisted interviews, and small GPS devices carried by respondents. In every case the approach was expensive and slow. Therefore, when new requirements suddenly appear, driven by digital media, current OoH methodologies look dated, even to more senior research participants. Of course, this affects perceptions of the medium itself, as its media planning tools lack the advantages other media enjoy, such as fresh data. We believe that only by combining several data sources, each contributing its strength, can we provide the accurate information needed for our goal: a worldwide OoH audience measurement.
Agea's big data case is a reference point in Latin America. The volume of known, registered audiences is the highest in Argentina. The Big Data team was created and is managed from a business area, which makes it more efficient and quicker to respond. In just three years we built a solid team, with great business successes and strong support for the digital transformation that the Clarin newspaper needs. With almost 15 million registered profiles, we have to be very careful about security and about how we use this asset, in order to build trust and keep growing.
As oil led to Global Warming, data leads to Social Cooling. Comparing these two problems is not just intended as a warning. It offers hope, a blueprint for how to deal with this issue, and a deeper understanding of what it means to be human in our data-driven world.
Our goal was to conduct a research project using big data and to compare the outcome of the analyses with traditional survey research. Online shopper ratings and review data from social media are an exciting, on-trend data source and were compared with traditional survey data. The survey data came from product tests, i.e. products were placed in-home and consumers evaluated them using a standardized questionnaire. The objective was to derive substantive insights about the core drivers of a five-star rating in consumer reviews in online shops and platform ratings for pet care products, and to compare these insights with the drivers of liking found in traditional research on pet care products already on the market. We validated the hierarchy of drivers for the overall product rating by conducting a meta-analysis of previous product tests and assessing the drivers of overall liking.
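A driver analysis of the kind described above can be sketched in a few lines. The example below is a minimal, hypothetical illustration - the aspect names and review data are invented, and real driver analyses typically use regression models on far larger samples - but it shows the basic idea of ranking product aspects by how strongly they relate to the overall star rating.

```python
# Hypothetical driver-analysis sketch: rank product aspects by their
# correlation with the overall star rating. All data below is invented
# for illustration only.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Each review: aspect scores (1-5) plus an overall star rating.
reviews = [
    {"palatability": 5, "packaging": 3, "price": 2, "stars": 5},
    {"palatability": 4, "packaging": 4, "price": 3, "stars": 4},
    {"palatability": 2, "packaging": 5, "price": 4, "stars": 2},
    {"palatability": 3, "packaging": 2, "price": 5, "stars": 3},
    {"palatability": 1, "packaging": 3, "price": 1, "stars": 1},
]

aspects = ["palatability", "packaging", "price"]
stars = [r["stars"] for r in reviews]

# Sort aspects from strongest to weakest driver of the star rating.
drivers = sorted(
    ((a, pearson([r[a] for r in reviews], stars)) for a in aspects),
    key=lambda kv: kv[1],
    reverse=True,
)
for aspect, corr in drivers:
    print(f"{aspect}: r = {corr:.2f}")
```

The same ranking, computed once from survey-based product tests and once from online review data, is what allows the two driver hierarchies to be compared side by side.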
Using lessons learned from managing mainstream brands including Budweiser and Transformers, Len Dunne will show how Elivar is using both qual and smart data to create brand awareness and plan for global domination.
In a world of increasing fragmentation, where companies are drowning in data, practical business solutions that connect and make sense of big data - and humanise it - are essential. We believe collaboration is imperative and have developed a thriving partner ecosystem so that our data works harder for our clients, across more business applications. Through specialist algorithms that anonymise and aggregate data, we are able to connect our market insights and segments to large media owners and platforms such as Google, Facebook, Sensis (Digital/Yellow page business directory), Australia Post and Australia's market-leading telco Telstra.
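One common building block of such anonymise-and-aggregate pipelines is small-cell suppression: segment counts are only released when a cell reaches a minimum size, so no small group of individuals is identifiable. The sketch below is an assumption about how such a step might work, not a description of the abstract's actual algorithms; the records and the threshold are invented.

```python
# Illustrative small-cell suppression: only report segment counts for
# cells with at least K records. Data and threshold are hypothetical.
from collections import Counter

K = 3  # assumed minimum cell size before a segment count is released

records = [
    {"segment": "young_urban", "postcode": "2000"},
    {"segment": "young_urban", "postcode": "2000"},
    {"segment": "young_urban", "postcode": "2000"},
    {"segment": "retiree", "postcode": "2000"},
]

# Count individuals per (segment, postcode) cell.
counts = Counter((r["segment"], r["postcode"]) for r in records)

# Suppress cells smaller than K so no small group can be singled out.
released = {cell: n for cell, n in counts.items() if n >= K}
print(released)
```

Only aggregates that clear the threshold would be shared with partners; the single "retiree" record is suppressed rather than exposed.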
This paper reviews the key ethical, legal, technical and data quality challenges researchers face when working with these new data sources. Its goal is to start a conversation among researchers aimed at clarifying our responsibilities to those whose data we use in research, the clients we serve and the general public. It uses the term secondary data to mean data collected for another purpose and subsequently used in research. It expands on the traditional definition of secondary data to account for new types and sources of data made possible by new technologies and the Internet. It is used here in place of the popular but often vague term big data, and is meant to include data from various sources, such as transactions generated when people interact with a business or government agency, postings to social media networks, and the Internet of Things (IoT). It is distinct from primary data, meaning data collected by a researcher from or about an individual for the purpose of research.
There are 5.2bn searches on the internet every day! Discover a new big data insight approach, showing the strength of consumer connections across infinite topics.