Project Update: Reflecting on Data Collection Through a Focus Group Session

I would be willing to bet that I have participated in far more focus groups than the average person. When I was hustling to pay off my student loans in the 2010s, I discovered that the city of Atlanta had a slew of market research companies that would pay me for my time and opinions on a number of different products: restaurant foods, scratch-off lottery tickets, pilot episodes of television shows and commercials, deep sea fishing excursions, NASCAR race events, household appliances. Those are just some of the focus groups I remember participating in.

There is a bit of a dirty secret to my former success in selling my opinions to market research companies though, and I wouldn’t say that I am proud of it now. I had to lie… rather, I chose to lie in order to qualify for most of the studies that I participated in. If a market research company’s screener was recruiting subjects to taste a new menu item for a fast-food chain that sold seafood, then they needed to find people who actually ate at these establishments. They would call everyone on their database of potential subjects and work to fill quotas that typically required some diversity in the group (age, race, gender, income, etc.), but the subjects all had to have one thing in common: they had to be part of the product’s target audience, likely users or consumers of it. The market research companies didn’t want vegans in their Chick-fil-A focus groups, unless they were testing out a new non-meat option or something.

But there is a critical flaw in the way these focus groups operate. It is a problem that opens up an avenue for unscrupulous people like me (old-me, anyway) to easily exploit the system. The company that wants the focus group data typically outsources the task to third parties. Company X hires a market research firm to collect data, and that firm often relies on yet another market research company to assemble the groups of participants. (At least this is how I saw it working in the 2010s.) The real breakdown seemed to occur in relying on the initial screeners to produce a group of people who fit the specifics that Chick-fil-A or its hired market research firm wanted data from. The call screeners’ job was to find participants, and back then, I quickly realized that doing so was not always easy for them. Sometimes I would get through a 10-20 minute screening process (answering questions about demographics and purchasing behavior) only to have the screener say, “Sorry, you don’t qualify” or “You do qualify, but I already have enough males in their 30s.”

Experienced screeners worked more efficiently and could figure out if I qualified much faster by asking the right questions in the right order (obviously deviating from their scripted questionnaire). And the occasional desperate-for-subjects screener would be intentionally leading in their questioning: “Are you sure you haven’t bought a scratch-off lottery ticket in the past week?” I suspect the screeners were even financially incentivized to fill groups, earning bonuses for hitting quotas in ways that encouraged them to prioritize quantity over quality. Either way, once I caught on to the different motivations of the parties involved, I started to play the game. I would be intentionally vague in my screener call responses, allowing “good” screeners to lead me to the desirable answers. Then I would show up to the focus group, participate as much as I could, offer legitimate and genuine feedback whenever possible, and then collect my check or gift card. Rinse, repeat.

My dishonesty mostly involved lying about my consumer behavior. It wouldn’t have been easy to lie about demographic information because much of that would be obvious when I showed up to the focus group and they verified my date of birth and address. My race would have been relatively obvious too, although, one time my wife (accomplice and fellow liar) did get herself qualified for a self-tanner product and showed up as the only non-orange person in the group! I guess she didn’t think that one completely through, but she still got her honorarium.

After participating in a focus group, my wife and I would often discuss the performance of the focus group leader. As educators, we both knew what leading a group discussion entailed and we marveled at just how incompetent some of these market research professionals were at their jobs. There were also the rare few who were excellent, but the bad ones were always worth gossiping about. In my hubris, I almost always thought I could do better. Considering my level of experience as a participant toward the middle and end of my multi-year run as a frequent focus group imposter, I was probably somewhat right.

When it came time this semester for me to collect data for my current UX project, I thought about my experience with focus groups and decided that doing a dozen interviews at once would be an efficient way to go about this step. It was time for me to see if I really could run a focus group better than the professionals I critiqued years ago.

I was prepared. I had two pages of questions all related to the generative AI system that my students-turned-informal-research-participants had used just days prior. I also had 15 years of experience leading discussions on all types of subjects with diverse groups of students, including high schoolers, college students, and even incarcerated men in a state prison. I was ready to prove myself as a market research genius.

Ultimately, I failed at achieving true focus group leader glory for two reasons:

#1. It is difficult to keep the participants on topic. My list of questions was well thought out and logical. One question or set of questions led naturally to another, but my research subjects were not privy to this planning and therefore jumped ahead and circled back many times. This made it harder to elicit the feedback I was seeking. Fortunately, I recorded the session so I could sift through the data afterward, but I believe there were lost opportunities and lost data because of the times I failed to control and redirect the respondents. It was a balancing act: more feedback is generally preferable to no feedback, and I didn’t want to shut the respondents down.

#2. I found it extremely difficult to avoid leading my respondents toward my own opinions and observations regarding the user experience of the product (ChatGPT). Hubris strikes again. I could not remain an impartial enough recorder of responses. I saw opportunities to seek additional feedback from respondents that would confirm my own opinions, and I couldn’t help myself.

Despite these shortcomings, I did collect data that will allow me to better assess the user experience of the target product. If I could do the focus group again, I think I would start with more open-ended questions and allow the discussion to go where the respondents lead it, rather than trying to rigidly stick to the list of questions I developed.
