Photograph by Medal Elepaño

How Pulse Asia conducts its surveys—we asked its president to walk us thru the process

Ronald Holmes, president of the public opinion polling body, says their foremost challenge is often how the political elite responds to the survey results 
RHIA D. GRANA | Oct 09 2020

Political survey results often raise eyebrows and spark intense debate. The latest to cause such a reaction is the most recent Pulse Asia survey on the performance and trust ratings of the Duterte administration, its key government officials, and institutions.

As in the past, the credibility and reliability of this survey are being questioned, as is the manner in which it was conducted. Why sample only 1,200 people when our adult population is over 60 million? Is this representative of the entire Philippine population? How are the respondents selected? How was the survey conducted?

Finding myself asking the same questions, I sought the answers directly from Pulse Asia president Ronald Holmes. Mr. Holmes has been president of Pulse Asia since 2008, and has been with the agency since 1999, when it was established. Its founder, Pepe Miranda, was his teacher at UP, he told ANCX in an interview last year.


Operative word: random

According to Holmes, their sampling design is probabilistic, which means that they apply random sampling at every stage of the survey. “Probabilistic sampling means that every Filipino has the same chance to be selected as a respondent,” he explains to ANCX.

“The general rule before you claim that a survey is representative of the entire population is that the selection of the samples has to be random. You don’t just select certain places or certain types of individuals. If the sampling is done in a random manner, regardless of the sample size, it is representative of the entire population,” says Holmes. 

Pulse Asia Research Inc. President Ronald Holmes. Photo by Medal Elepano

Step 1: Determine sample size 

The very first step, says Holmes, is to determine the sample size. Why only 1,200 against an adult population of over 60 million? The Pulse Asia chief explains that this sample size is enough to achieve the 95% confidence level they are aiming for, with a margin of error of plus or minus (±) 2.8 percentage points.

Having been out of statistics class for quite a while now, I ran to Google for answers. It led me to the SurveyMonkey sample size calculator, which confirmed that the sample size required for a population of 60 million is 1,225.
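The arithmetic behind that figure can be sketched with the standard sample-size formula for a proportion, using the most conservative assumption p = 0.5. This is an illustrative calculation, not Pulse Asia's own code; the function name and defaults are assumptions:

```python
import math

def sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Minimum sample size for estimating a proportion.

    Uses n = z^2 * p * (1 - p) / e^2, the standard formula for a
    very large population; p = 0.5 is the worst (largest-n) case,
    and z = 1.96 corresponds to 95% confidence.
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Round up, after guarding against tiny floating-point error.
    return math.ceil(round(n, 6))

# Pulse Asia's target: 95% confidence, ±2.8 percentage points.
print(sample_size(0.028))  # 1225
```

At a population of 60 million, the finite-population correction changes this number by less than one, which is why 1,200 to 1,225 respondents suffice regardless of how large the adult population is.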

Holmes tries to explain it in simpler terms. “When you want to test for an illness or a disease, the doctor basically takes only a small sample [of a tissue] from you and that should help him detect what exactly is afflicting you,” says Holmes.

In the same manner, he says, if you’re tasting a dish, you don’t necessarily have to consume copious amounts of it to find out whether it is good or bad. “It’s just that the possibility of an error in a sample is lower if the sample is larger.”


Step 2: Select areas

After determining the sample size, the next step is to randomly select the cities and municipalities, and within these, the barangays. Holmes says all cities and municipalities in the NCR are always part of the Ulat ng Bayan surveys, but the participating barangays change from survey to survey.

“In Manila, for instance, there are more than 800 barangays. And since we’ve been conducting surveys over the last 21 years, it’s highly unlikely for the same barangay to be sampled in subsequent surveys because we always pick a different barangay,” he says. The number of respondents from each area depends on its population.

Holmes adds they make sure the surveys cover the major islands of the Philippines—Luzon, Visayas, and Mindanao.

In Pulse Asia’s regular Ulat ng Bayan surveys, they maintain the 1,200 sample size. But they sometimes increase it to 1,400 or 1,800 depending on what they would like to focus on. “If we want to read the sentiment in Southern Luzon and Northern Luzon, we’d increase the sample size from 150 to 300, for instance.”


Doing the survey

Before going into the barangays, the pollsters identify a random starting point, say, a public market or a church, and from there select the households to sample at fixed intervals: every fifth household in urban areas and every second household in rural areas.

When they get to a household, the researchers don’t simply interview the first person who opens the door, Holmes says. The interviewers first list all the adult members of the household. The one to be interviewed in that household depends on the last digit of the questionnaire number: if the last digit is even, they randomly select a female household member; if it is odd, they randomly select a male family member.
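The household and respondent selection described above can be sketched in code. This is an illustrative reconstruction, not Pulse Asia's actual procedure in full detail: the intervals and the even/odd rule come from Holmes's description, while the function names and data structures are assumptions:

```python
import random

def select_households(num_households, interval, start=None):
    """Systematic sampling: from a random starting household,
    take every `interval`-th one (5 in urban areas, 2 in rural)."""
    if start is None:
        start = random.randrange(interval)
    return list(range(start, num_households, interval))

def select_respondent(questionnaire_number, adults):
    """Pick the respondent from a household's listed adults.

    Even last digit -> randomly pick a female adult;
    odd last digit  -> randomly pick a male adult.
    `adults` is a list of (name, sex) tuples, sex in {"F", "M"}.
    Returns None if no adult of the required sex lives there.
    """
    wanted_sex = "F" if questionnaire_number % 2 == 0 else "M"
    candidates = [name for name, sex in adults if sex == wanted_sex]
    return random.choice(candidates) if candidates else None

household = [("Ana", "F"), ("Ben", "M"), ("Carla", "F")]
print(select_respondent(142, household))  # even digit: "Ana" or "Carla"
```

Randomizing both the starting point and the within-household pick is what keeps every adult's selection probability equal, which is the property Holmes's "operative word: random" refers to.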

The number of interviewers Pulse Asia employs depends on how fast they want to complete the field work. The lowest number would be 120, says Holmes. “Technically speaking, you cannot compress five interviews in a day. It’s going to be draining for the interviewer,” he says, noting that in rural areas, their staff may need to travel a distance to go from one household to the next.

Even in their most recent survey, they followed the standard protocol of conducting face-to-face interviews, while observing the necessary safety measures. “The rule is, there shouldn’t be anyone in the same room who is not part of the interview,” says Holmes.


Ensuring data integrity

Holmes says many things come into play to ensure the integrity of the surveys. And it starts with the questionnaire design—“making sure that the questions are not leading, do not condition the responses, are exhaustive (meaning to say, you’ve provided all the choices that one could make), and measure the opinion intended to be measured.” He points out that this is the reason “Others” is always included among the choices.

The surveys are conducted by a pool of trained interviewers who have been doing market research and social science surveys for a long period of time, says Holmes. “They are tapped by our data collection partners. They’ve been doing these interviews not only for us but for other independent social science organizations over the years. So, they’re reliable. More or less, they are seasoned interviewers,” he assures.

Still, before they are sent to the field, training is conducted on the survey instrument itself (the questionnaire). “We conduct it in three different places. We see to it that the questions are understood. We rephrase a question when the interviewers think it requires rephrasing, because they already have the experience,” says Holmes.

The survey personnel’s proficiency in the language of a selected area is also a major consideration in their deployment. “The respondent has the option to answer the questionnaire in Filipino, Cebuano, Ilocano, or Bicolano,” he says.

During the actual fieldwork, measures are likewise taken to secure the reliability of the survey. Fifteen percent of all the interviews are supervised by a field supervisor, and more than a quarter of the unsupervised interviews are spot-checked (randomly checked) by the field supervisors. “Let’s say I interviewed you two days earlier. Someone might call you up or go to your place and ask, were you interviewed? The supervisor will randomly check your responses to certain questions.”
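On a 1,200-interview survey, those quality-control shares work out as follows. This is a simple back-of-the-envelope calculation assuming the 15% and one-quarter figures Holmes cites apply to the full sample:

```python
total = 1200
supervised = round(total * 0.15)        # interviews done under a field supervisor
unsupervised = total - supervised
spot_checked = round(unsupervised * 0.25)  # lower bound: "more than a quarter"

print(supervised, spot_checked)  # 180 255
```

In other words, at least 435 of 1,200 interviews (roughly 36%) get some form of supervisor verification.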

Field supervisors also check the filled-out questionnaires for problems or deficiencies. If, for instance, a majority of the questions weren’t answered, either the interview is repeated or another interviewee is randomly selected to replace the previous one. But Holmes says this rarely happens.

By this time, Pulse Asia would normally have completed three of the four surveys it conducts in a year. But the travel restrictions and other safety protocols that needed to be followed posed challenges, so they have done only one so far. They are hoping to accomplish the fourth-quarter survey by late November or December.

The foremost challenge they encounter almost every time, admits Holmes, is how the survey results are received by the public, “more specifically by the political elite, or by the elite themselves”—something the survey company cannot control.

Holmes, also a Political Science professor at De La Salle University, says that while they sometimes don’t agree with the survey results, they have an obligation to report them, unless they see problems with the integrity or the conduct of the survey. “But more often than not, we are sure that [we have secured its integrity],” says Holmes.