promoterbox

The overarching goal of this study was to better understand how the University of Washington student population interacts with promoters, and to use this knowledge to improve the way information is promoted on campus. For this project, I conducted secondary research as well as three studies: field work, interviews, and surveys.

All of the research attempted to answer three questions: How do students react to promoter interactions? How do they feel about those interactions? And why do they react and feel the way they do?

Read the final report.


Field Work

The field work consisted of three thirty-minute “deep hanging out” sessions, in which I observed different types of promoters in three locations on campus. I recorded notes in two forms, aiming to be as inconspicuous as possible. For the first session, I used a notebook and pencil, as this wasn’t unusual for the space I was in. For the next two sessions, watching and taking notes on paper would have been fairly obvious, so I switched to the Notes application on my iPhone to appear to be texting.

To analyze my data, I used affinity analysis, coding observations into categories to clarify my findings. This revealed several distinct promoter strategies and the different ways students react to each.
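The affinity analysis itself was a manual clustering of notes, but the grouping step can be illustrated in code. The sketch below is a minimal, hypothetical Python example (the strategy codes and observation strings are invented for illustration) showing how tagged field notes cluster under strategy codes:

```python
from collections import defaultdict

# Hypothetical excerpt of coded field notes: each note pairs a promoter
# strategy code with the student reaction I observed.
observations = [
    ("student observation", "joined the forming crowd"),
    ("student observation", "spoke with a clipboard promoter"),
    ("forced interaction", "avoided eye contact and kept walking"),
    ("forced interaction", "took a flyer, then discarded it"),
    ("no interaction", "walked past the table without stopping"),
]

# Cluster reactions under each strategy code, mirroring how affinity
# notes are grouped on a wall.
clusters = defaultdict(list)
for strategy, reaction in observations:
    clusters[strategy].append(reaction)

# Summarize each cluster.
for strategy, reactions in clusters.items():
    print(f"{strategy} ({len(reactions)} observations)")
    for reaction in reactions:
        print(f"  - {reaction}")
```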

Promoter Strategies

[Figure: Promoter strategies observed during field work]

I was surprised to find that the “student observation” strategy seemed to be the most effective, because it struck me as very aggressive. After further observation, however, I understood that this strategy allowed a crowd to form consisting only of people who were interested in what the promoters had to say. Once the crowd had formed, these students were more willing to talk to the promoters with clipboards. “Forced interaction” was a particularly counterintuitive strategy, because students may leave the interaction resenting the organization for making them feel uncomfortable and pressured. The “no interaction” strategy did not seem to anger students or make anyone feel uncomfortable, but it was extremely unsuccessful from a promoter standpoint: no one approached the table.

With these observations and findings in mind, I developed a number of implications for design. I also reflected on the strengths and weaknesses of my field study experience. Read my full field study report.


Interviews

To begin the interview portion of my research, I kept my target audience in mind when creating participant criteria. To recruit participants within the given time constraints, I felt the most successful technique would be to ask people I knew. I sent Facebook messages to ten of my friends, first ensuring that they met my inclusion criteria, then asking whether any of them were available for a thirty-minute interview sometime in the next week. Of the ten, only four were available. I selected the three who were available earlier in the week, to allow more time to write my report.

I developed an interview protocol to ensure consistency across interviews, and conducted a pilot test to identify anything I wanted to change before the official interviews.

I conducted three semi-structured interviews in total, collecting a set of data for each participant. Once the interviews were complete, I used affinity analysis to analyze my results. Through this, I identified each participant’s general response to promoters, the reasons behind that response, the feelings and emotions surrounding promoter interactions, and participants’ suggestions for improving the experience.

Most participants had fairly negative responses to promoter interactions. By identifying the reasons behind these negative responses, I was able to develop a number of suggestions to counteract them.

[Figure: Interview findings]

After identifying findings and implications for design, I reflected on my interview experience and discussed what worked well and what didn’t. Read my full interview report.


Surveys

I developed a thirteen-question survey, including two screener questions, using Google Forms, and posted it on my Facebook profile, collecting twenty-six responses that met the inclusion criteria. Even with my field work and interview data, I still had one unanswered question: why do students respond the way they do toward promoters? The survey questions I brainstormed aimed to answer it.
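Screening happened inside Google Forms, but the same filtering is easy to reproduce on an exported response sheet. Below is a minimal, hypothetical pandas sketch (the file name and question wording are invented) that keeps only responses passing both screener questions:

```python
import pandas as pd

# Hypothetical CSV export of the Google Forms responses.
responses = pd.read_csv("survey_responses.csv")

# Keep only respondents who passed both screener questions.
eligible = responses[
    (responses["Are you a current UW student?"] == "Yes")
    & (responses["Do you encounter promoters on campus?"] == "Yes")
]

print(f"{len(eligible)} of {len(responses)} responses met the inclusion criteria")
eligible.to_csv("eligible_responses.csv", index=False)
```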

To analyze the results, I used a Google Forms add-on called Awesome Table, which let me easily cross-reference responses from multiple questions to find new connections and correlations.
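For readers without the add-on, the same question-by-question cross-referencing can be approximated with pandas. This is a sketch of the general approach, not Awesome Table’s actual mechanics, and the column labels are invented:

```python
import pandas as pd

# Screened responses from the sketch above (hypothetical file and columns).
eligible = pd.read_csv("eligible_responses.csv")

# Cross-tabulate two questions to see how feelings about promoter
# interactions relate to the decision to engage.
table = pd.crosstab(
    eligible["How do promoter interactions make you feel?"],
    eligible["Do you usually stop to engage with promoters?"],
)
print(table)
```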

My survey findings covered a variety of interesting results, including how students feel about interactions with different types of promoters, the factors that impact their decision to engage, and their reasons for not engaging.

[Figure: Survey findings]

Once again, I created a set of implications for design based on these findings and reflected on the strengths and weaknesses of my survey study. Read my full survey report.


Communicating Research

To complete my research, I synthesized the findings from my three studies into a final set of findings and recommendations. I summarized this information in a five-page paper, presenting each finding in a proof, discussion, and design-recommendation format. I concluded the paper with ideas for future research. Read my final report.