Almost everyone failed to predict the outcome of the 2016 U.S. election, and the winner came as a shock to many pollsters, the media, and people in the U.S. and around the world. How did we get it so wrong, and what does this mean for marketing and insights?
On November 29th, we’ll be exploring that very topic at our upcoming event, Predicting Election 2016: What Worked, What Didn’t, and the Implications for Marketing & Insights, brought to you by GreenBook and the ARF.
The event will take place from 8:30am to 11am. We’ll start with a webinar featuring four short presentations on new thinking about predicting election results, then transition to a live-streamed panel of key thought leaders and experts for a lively discussion of what this election cycle can teach us about predictive tools. The agenda is still coming together, so look for an update on specific presenters soon, but trust us, it’s going to be very, very good.
For those in the New York area, we’d love to have you join us live at the ARF Headquarters, but the event will be available to join virtually as well.
Register here: http://thearf.org/event/nov-29-2016-predicting-election-2016/
During this event, we won’t rehash the polls or the outcome of the election. Instead, we’ll explore the implications of this polling failure for commercial research and analytics, focusing on the things that matter most to our industry: trust in research (especially surveys!), new tools and techniques, predicting and modeling behavior and trends, implicit vs. explicit data sources, the application of cognitive and behavioral psychology, and more.
Now is the time to have meaningful conversations about the lessons learned from this election cycle and to apply them not only to political polling, but to public policy and commercial research in all their many forms. Arguably, approaches using experimental polling methods, social media analytics, behavioral economics-based analysis, “big data,” meta-analysis and data synthesis, and text analytics were more predictive of the results than traditional polling, and the implications for other forms of research should not be ignored. Conversely, could some of the approaches pioneered in commercial research for ad testing, forecasting, attribution modeling, and the like be applied to improve the accuracy of polling?
We’ll be tackling all of these topics and more during this joint program with the ARF, so we hope you’ll join us virtually or in person for the discussion!