An interview can be an unnatural environment: someone comes to your house, asks a variety of questions – often very personal in nature – and then leaves. It is easy to see how this situation could lead individuals to feel uncomfortable, or compelled to modify their responses to fit perceived social expectations, especially when the topic of study is sensitive in nature. In these circumstances, researchers face a dual obligation: first, to make the respondent feel as comfortable as possible during the interview process, and second, to ensure that they gather high-quality data that reflects, as accurately as possible, the topic they are studying.
How can researchers study highly sensitive topics in a way that is unobtrusive and that makes respondents feel comfortable enough to share their true experiences and opinions without fear of judgement? One answer is to take the interviewer out of the survey process, allowing the respondent a level of confidentiality and anonymity they would otherwise not have. Self-reported responses collected through voice-assisted surveys are one way to accomplish this. Self-reporting takes away the potential discomfort and bias caused by talking directly to an enumerator, and voice assistance offers the possibility of collecting data from populations with low levels of literacy.
Laterite has teamed up with the What Works Global Programme (http://www.whatworks.co.za/) to undertake a program evaluation using voice-assisted surveys as part of a campaign to Prevent Violence Against Women and Girls. To carry out this study, Laterite and the Medical Research Council of South Africa (MRC) pioneered one of the first voice-assisted self-report surveys in Rwanda. Thus far we have interviewed approximately 1,600 couples and 3,000 community members across 7 Districts using ACASI (Audio Computer-Assisted Self-Interview) software on touch-screen iPods.
We sat down with Kristin Dunkle, one of the study’s co-Principal Investigators, to better understand the technology and why it was chosen for this study.
How does a voice-assisted survey work?
We used ACASI software loaded on iPod touch devices to conduct this survey. The questions were verbally recorded in Kinyarwanda and the question with response options is presented on a screen. The software reads the question and response options aloud, along with the number that corresponds with each option. The respondent then touches the appropriate answer on the screen. ACASI software is a well-established technology and has been in use in the western world for 15-20 years. Its application for facilitating self-reported surveys in low-literacy populations in the developing world, however, is more recent.
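The question-and-answer flow described above can be sketched in code. This is a purely illustrative mock-up, not the actual ACASI software or its API; the class names, audio placeholder, and input handling are all assumptions for the sake of the example.

```python
# Illustrative sketch of a voice-assisted survey question loop.
# NOT the real ACASI software -- all names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Question:
    text: str            # question wording (recorded in Kinyarwanda in the study)
    options: list        # response options, read aloud with their numbers


@dataclass
class SurveySession:
    questions: list
    answers: dict = field(default_factory=dict)

    def play_audio(self, text):
        # Placeholder: the real device plays a pre-recorded audio clip.
        pass

    def ask(self, index, selected_option):
        """Read a question and its numbered options aloud, then record
        the option the respondent touched on the screen."""
        q = self.questions[index]
        self.play_audio(q.text)
        for n, opt in enumerate(q.options, start=1):
            self.play_audio(f"{n}. {opt}")
        # Reject touches outside the valid response range.
        if not 1 <= selected_option <= len(q.options):
            raise ValueError("response outside the valid range")
        self.answers[index] = selected_option
```

In the real system the audio is pre-recorded and the answer comes from a touch event rather than a function argument, but the loop structure — play question, play numbered options, validate and store the selection — is the same idea.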
In the field, we sent out teams of enumerators as in a traditional survey. The enumerator sat with the respondent to explain the purpose of the research and ask for the respondent’s consent. The interviewer then introduced the respondent to the voice-assisted survey technology through a series of practice or training questions. If the respondent felt comfortable continuing with the device, and most people did, the enumerator then allowed the respondent to listen to the survey questions and fill out the answers on the device on their own. Enumerators stayed close at hand in case of questions or technical problems, but usually the respondent was able to complete the survey unassisted. In cases where a respondent did not have a level of literacy sufficient to match the responses read aloud with the options on the screen, or if they asked for a live interview, we carried out a traditional face-to-face interview.
What are the benefits of voice-assisted self-reported surveys? Do they improve data quality? What is the evidence?
There is a well-developed literature that supports using self-complete surveys for studies on sensitive or stigmatized subjects. Taking the enumerator out of the equation facilitates disclosure, making it easier for respondents to give more honest answers in response to questions around highly sensitive topics like sexuality, substance use, or violence. Similarly, it also serves to reduce bias in reporting behaviour that might be performative or socially valued.
Self-completion not only improves data quality by facilitating more accurate disclosure on the part of the respondent, it also limits the potential for bias introduced by the enumerator. Enumerator bias is a concern in most survey work. Some interviewers may be more skilled than others, some might have good or bad days, and some interviewers might get along very well with some respondents and not at all with others. By standardizing the interview process using voice-assistance, we can limit any variance in results driven by the personality, mood or skill of the interviewer.
There are two additional important benefits from voice-assisted interviews. First, these interviews allow the interview team to do a better job in ensuring privacy and confidentiality. In the densely populated areas where we are working in Rwanda, it is often difficult to find an area to carry out an interview that is guaranteed to be private. In practice, there are often people walking by or working within an audible distance of an interview. This type of system helps ensure confidentiality in respondent answers. Second, this system minimizes any discomfort or secondary trauma for interview staff who would otherwise have to listen to and record incredibly difficult stories — such as instances of gender-based violence — when conducting interviews.
Finally, there is a growing body of evidence which suggests self-completion specifically increases disclosure rates in studies on gender-based violence. While there are few examples of studies that involve randomization of the method of data collection (i.e. voice-assisted vs face-to-face) a comparison of studies asking similar types of questions using different methods often reveals higher rates of violence reported in voice-assisted, self-report surveys.
What about the challenges? Does this method increase error rates?
This method has the same advantages that other computerized systems do in preventing entry of out-of-range values and managing skip patterns automatically. Still, errors with this kind of survey can occur. There are some unique challenges in areas with high levels of illiteracy and low levels of exposure to technology (such as rural Rwanda). If a person is not literate at all, or is uncomfortable with new technology, they may struggle to report accurately or become frustrated. Usually, you can pick this up in the analysis phase because people who are uncomfortable, or bored, start giving the same answer to every question – usually either all the first option or all the last option.
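The straight-lining check described above — flagging respondents who give the same answer to nearly every question — is straightforward to implement in the analysis phase. The following is a minimal sketch; the data layout, function names, and 95% threshold are assumptions for illustration, not the study's actual analysis code.

```python
# Hypothetical straight-lining check for self-completed survey data.
# Flags respondents whose answers are suspiciously uniform, which can
# indicate discomfort with the device or disengagement.

def straight_lining_share(responses):
    """Share of a respondent's answers equal to their most common answer."""
    if not responses:
        return 0.0
    most_common = max(set(responses), key=responses.count)
    return responses.count(most_common) / len(responses)


def flag_straight_liners(data, threshold=0.95):
    """Return respondent IDs whose answers exceed the uniformity threshold.

    `data` maps respondent IDs to their list of coded answers, e.g.
    {"r001": [1, 1, 1, ...], "r002": [2, 1, 4, ...]}.
    """
    return [rid for rid, responses in data.items()
            if straight_lining_share(responses) >= threshold]
```

In practice one would also inspect completion times and compare flagged cases against the enumerator's notes before dropping any data, since some respondents may legitimately give uniform answers to a short battery of questions.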
While errors are a risk to consider, systems can be put in place to determine whether a respondent has a level of literacy and comfort with the technology sufficient to accurately complete the survey. In this case, as mentioned above, we had enumerators carry out a set of test questions together with the respondents to assess their ability to complete the survey on their own. We did have to rely on the enumerator’s assessment of each respondent’s ability to understand the questions and to use the iPod, so in this situation enumerator training was key.
How have people reacted so far?
Regardless of a respondent’s tech-savviness or level of literacy, it seems that respondents have found interacting with the iPod to be novel and exciting. For many respondents this was the first time they had used a touch-screen device (which we expected), and even the first time they had ever seen or worn headphones (which was a surprise to us)!
Overall, the response has been very positive, and the quality of the data, including disclosure on sensitive topics, is really excellent.