‘Given how quickly public opinion can turn based on current events, today’s frontrunner may well be tomorrow’s goat.’
I WAS asked to join a Spaces conversation on Twitter last week to speak on surveys and how to help ordinary volunteers make sense of them. There were a lot of interesting questions and discussion points, and I’ll try to summarize those in this week’s column.
First, what is the use of surveys? In general, surveys help us understand public opinion on certain issues and should be taken as one of the barometers of public sentiment at a given time. I mentioned in the previous column how surveys are described — “snapshots in time” — as the fieldwork, or gathering of information, is time-bound. A single survey gives us a temperature reading of sorts at the time it was taken; to be indicative of a particular trend, several surveys have to be taken.
For campaign teams, surveys are a useful tool for gauging how their work is being perceived by their audience. Communications efforts are meant to persuade and sway the opinion of voters, and given that there are tens of millions of registered voters in the Philippines, a properly drawn sample is an accepted way of checking the effectiveness of those persuasion efforts.
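For readers who want to see why a relatively small sample can stand in for tens of millions of voters, here is a rough back-of-the-envelope sketch in Python. It is my own illustration, assuming a simple random sample; real pollsters refine this with stratification and weighting.

```python
import math

# 95% margin of error for a proportion estimated from a simple random sample.
# It is largest when the true proportion p is 0.5, so pollsters quote that worst case.
def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

# A typical national sample of around 1,200 respondents already lands near
# plus-or-minus 3 points, and the electorate's total size barely matters.
for n in (300, 1_200, 2_400):
    print(f"n = {n:>5,}: margin of error is about {margin_of_error(n):.1%}")
```

The point is not the exact formula but the intuition: with proper random sampling, precision depends mainly on the size of the sample, not on the size of the voting population.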
Second, how should ordinary folks look at surveys? Media reports usually cover what we call the topline results of surveys. During election season, topline results refer to the ranking of candidates by voter preference; outside it, they refer to the approval, performance and trust ratings of top government officials. However, there is much more to a survey than the topline results that get reported, and these underlying responses and data sets help campaign teams calibrate strategy moving forward.
As it is election season, we should expect survey firms to mushroom here and there.
There will be plenty, primarily because some politicians and candidates target the segment of voters with a bandwagon mentality. There are established pollsters (SWS, Pulse Asia and Laylo, to name a few) that have proven reliable.
The fact that these firms accept commissioned work (meaning a private person or entity has paid them to conduct a particular survey) does not automatically make them suspect; they have established their credentials over time and, in truth, have clients outside the political arena as well.
Whenever we encounter a new survey firm, the first thing we should ask about is its methodology. Reputable survey firms usually explain the methodology they employ for these studies; their non-commissioned surveys (meaning no private person or entity specifically paid for the work) are usually done as a public service, and the results are published on their websites. If we cannot find information on the methodology used by a certain firm, that is a red flag. Why? If we cannot see or understand the methodology, we have no way of knowing whether it is sound. Again, we cannot analyze what we cannot see.
For example, the gold standard for surveys is still the face-to-face interview. While pollsters in the United States have developed tried-and-tested phone survey methods, here in the country face-to-face remains the standard. When adopting new or hybrid methods, pollsters (at least, the credible ones) take care to preserve the randomness of the sampling, among other factors. This is important because faulty sampling can skew results. Imagine taking a “random” sample of 10 respondents in a particular barangay or city, knowing full well that the area is a candidate’s bailiwick.
Doing so will certainly skew the results. The transparency of methodology matters.
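To make that bailiwick example concrete, here is a minimal simulation sketch in Python, with entirely made-up support figures, showing how a sample drawn only from a candidate’s stronghold inflates that candidate’s numbers compared with a genuinely random draw:

```python
import random

random.seed(0)

# Made-up electorate: 20 percent of voters live in Candidate A's bailiwick,
# where support for A is high; support is much lower everywhere else.
IN_BAILIWICK_SHARE = 0.20
SUPPORT_IN_BAILIWICK = 0.80
SUPPORT_ELSEWHERE = 0.30

# True means the voter lives in the bailiwick.
population = [random.random() < IN_BAILIWICK_SHARE for _ in range(200_000)]

def prefers_candidate_a(lives_in_bailiwick: bool) -> bool:
    support = SUPPORT_IN_BAILIWICK if lives_in_bailiwick else SUPPORT_ELSEWHERE
    return random.random() < support

def estimate_support(bailiwick_only: bool, n: int = 1_200) -> float:
    """Poll n voters; a faulty poll draws only from the bailiwick."""
    frame = [v for v in population if v] if bailiwick_only else population
    sample = random.sample(frame, n)
    return sum(prefers_candidate_a(v) for v in sample) / n

print(f"Random national sample:  {estimate_support(bailiwick_only=False):.0%}")
print(f"Bailiwick-only 'sample': {estimate_support(bailiwick_only=True):.0%}")
# Expect roughly 40% from the random sample versus roughly 80% from the
# bailiwick-only draw: same candidate, very different "result."
```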
Lastly, how should the public look at surveys? The first rule of thumb is that surveys should be compared apples to apples — meaning, the results of one conducted by Pollster A should not be compared to one done by Pollster B, given the variance in methodology. One can collate survey results over different periods done by the same pollster to find a trend (if any) and then compare that trend with a series from another pollster, but never compare results from one firm against another and say a candidate went up or down.
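A short sketch, again with invented numbers, of what an apples-to-apples reading looks like: track the movement within each pollster’s own series, rather than putting one firm’s level next to another’s.

```python
# Invented preference ratings (%) for one candidate, tracked separately by
# two hypothetical pollsters whose methodologies (and levels) differ.
series = {
    "Pollster A": {"Sep": 34, "Oct": 36, "Nov": 39},
    "Pollster B": {"Sep": 28, "Oct": 30, "Nov": 33},
}

for pollster, readings in series.items():
    months = list(readings)
    changes = [readings[b] - readings[a] for a, b in zip(months, months[1:])]
    print(f"{pollster}: levels {list(readings.values())}, month-on-month change {changes}")

# Both series show the same upward drift (+2 then +3), which is the trend worth
# noting. Putting Pollster A's 39 next to Pollster B's 33 and declaring a
# six-point drop would be exactly the apples-to-oranges error described above.
```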
Survey results can be baffling, disheartening, or uplifting, depending on the candidate you support. But again, the results only reflect perception at the point in time the survey was taken.
Given how quickly public opinion can turn based on current events, today’s frontrunner may well be tomorrow’s goat. Interestingly enough, a co-panelist pointed out that the frontrunners in the early surveys of past presidential election cycles failed to clinch victory — another indication that fortunes can change just as quickly and turbulently as storm tides.