Using ChatGPT for UX Persona Creation and Canva for Matching Persona Images

 
 
I used ChatGPT 3.5 (free version) to create some personas that could theoretically apply to my UX project. After that, I asked ChatGPT to help me think about ways I could apply the different characteristics from the personas to improve the UX design of ChatGPT’s own interface and functionality. Overall, I think the feedback I got from the machine was helpful for UX research. Some answers were a bit vague or obvious, but even the vague answers were often decent starting points. The full conversation is included below, so feel free to judge for yourself how helpful AI can be as a UX research assistant. Of course, any shortcomings of the machine responses may only reflect the shortcomings of the prompts and prompt writer.
 
I also used Canva to create matching images (posted above) for the personas. Using image generation was a new experience for me, and I found the results impressive for a free version. I originally tried the Bing image generator without any luck at all. That one runs on DALL-E 3 and is supposed to be excellent, but I couldn’t get it to generate anything. With Canva, it did take a bit of work and multiple “re-generation” attempts for each image to come out looking realistic. The prompts that ChatGPT gave me to feed to Canva were too long, so I ended up writing my own. Here is what the one for the third persona looked like:
Realistic image of 35 year old male military vet in plain clothes sitting at a laptop.
That was a revised prompt. Since the first attempt did not include “in plain clothes,” all the images were of a man in military uniform. Of course, I could have left out the “military vet” detail altogether, but I wanted to see if it would have an impact on the image. Maybe the physical stature of this persona reflects the military vet detail, just as the glasses on the last persona probably reflect the “visual impairment” detail that I included. When creating these images, issues with eyes and hands seemed especially prominent (see the hands on the lower third image), but it is so easy to re-generate that a user can get as picky as they want, at least within the usage limits of the free version. I suppose there is value in giving a persona a visual image for the researcher to connect with, yet I don’t see this serving my own research very much. It did give me decent visuals to help spice up this post, though!
 
Here is a link to my ChatGPT session (better formatting), or you can read the basic full text below:
 

UX Persona Creation and Application

Jason Palmer, February 23, 2024
 
Me:
Please help me by creating four different personas for user experience/user design research. The product they are using is ChatGPT. They are all college students. Two of them are traditional and two of them are non-traditional students. Two of them have disabilities that impact how they learn and use technology. They are all students at a diverse four-year college in suburban Atlanta. Provide other useful characteristics, details, and insights for each of these four different personas: things like what they might use the product for, how they might go about using it, prior experience, etc.
 
ChatGPT:

Sure, I can provide some hypothetical personas for ChatGPT users. However, please note that the following personas are fictional and are based on general assumptions and typical user behaviors, not on real individuals.

  1. Traditional Student – Natalie:
  • Age: 21
  • Gender: Female
  • Major: Psychology
  • Background: Grew up in a suburban area, comes from a middle-class family.
  • Disability: Dyslexia
  • Tech Comfort: Somewhat comfortable, uses a laptop and smartphone regularly.
  • ChatGPT Usage: Uses ChatGPT for research and writing assistance, especially for tasks that involve extensive reading or writing. Finds ChatGPT’s summarization and paraphrasing features helpful.
  • Experience: Natalie has used language assistance tools in the past and is looking for a more versatile tool that can help her with her specific learning needs. She appreciates AI tools that can understand natural language and provide accurate and concise responses.
  2. Non-Traditional Student – Jason:
  • Age: 35
  • Gender: Male
  • Major: Computer Science
  • Background: Military veteran, came back to college after serving.
  • Disability: PTSD
  • Tech Comfort: Very comfortable with technology, has experience with coding and software development.
  • ChatGPT Usage: Uses ChatGPT to assist with coding tasks, such as generating code snippets or checking for errors in his code. Also uses ChatGPT for brainstorming and project planning.
  • Experience: Jason is familiar with AI and machine learning technologies. He values efficiency and accuracy in the tools he uses. He appreciates ChatGPT’s ability to understand technical language and provide relevant information.
  3. Traditional Student – Sarah:
  • Age: 20
  • Gender: Female
  • Major: Journalism
  • Background: Grew up in a rural area, comes from a working-class family.
  • Disability: ADHD
  • Tech Comfort: Comfortable with technology, uses a laptop and smartphone regularly. Struggles with focus and organization.
  • ChatGPT Usage: Uses ChatGPT for brainstorming and organizing ideas for articles and assignments. Finds the tool’s ability to generate ideas and provide structure helpful.
  • Experience: Sarah has used other productivity tools in the past but has found them to be overwhelming or too rigid. She appreciates ChatGPT’s flexibility and ease of use.
  4. Non-Traditional Student – Michael:
  • Age: 29
  • Gender: Male
  • Major: Business Administration
  • Background: Grew up in a suburban area, comes from an upper-middle-class family.
  • Disability: Visual Impairment (Partially Blind)
  • Tech Comfort: Comfortable with technology, uses screen readers and other assistive technologies.
  • ChatGPT Usage: Uses ChatGPT for research and writing assistance, especially for tasks that involve extensive reading or writing. Finds ChatGPT’s summarization and paraphrasing features helpful.
  • Experience: Michael has used other language assistance tools in the past but has found them to be less accurate or accessible. He appreciates ChatGPT’s ability to understand natural language and provide accurate responses.
 
Me:
Now provide a summary for persona #1 (Natalie) that explains how the UX designer of an updated version of ChatGPT could apply the characteristics of the personas (especially the disability information) to an updated version of the ChatGPT interfaces (both web and mobile).
 
ChatGPT:

Based on the persona of Natalie, a UX designer for an updated version of ChatGPT could consider several key characteristics to enhance the accessibility and usability of the product for users with dyslexia:

  1. Readable Text: Ensure that the text on the interface is clear, easy to read, and uses dyslexia-friendly fonts such as OpenDyslexic or Dyslexie. The designer can also consider options for adjustable font size and contrast settings to accommodate individual preferences.
  2. Intuitive Navigation: Simplify the navigation of the interface by using clear icons and labeling, and avoid using complex or ambiguous language that might be difficult for users with dyslexia to understand.
  3. Assistive Technology Integration: Make sure that the interface is compatible with screen readers and other assistive technologies commonly used by individuals with dyslexia.
  4. Language Processing Features: Enhance the language processing capabilities of the ChatGPT to provide accurate and concise responses that are easier for users with dyslexia to understand. This might include features like summarization and paraphrasing, as well as the ability to break down complex information into simpler terms.
  5. Customizable Settings: Allow users to customize the interface according to their individual needs, such as adjusting the speed of text presentation or enabling visual cues to aid in comprehension.
  6. User Education and Support: Provide clear instructions and guidance on how to use the interface effectively, and offer accessible support resources such as tutorials or FAQs.

By taking these characteristics into account, the UX designer can create an updated version of ChatGPT that is more accessible and user-friendly for individuals with dyslexia like Natalie.

 
Me:
Now, taking into account all of the users’ usage characteristics, what specific changes to the ChatGPT interface and functionality should the UX designer consider? Please be as specific as possible, providing examples and explanations of the possible changes.
 
ChatGPT:

Certainly, let’s consider the specific changes the UX designer of ChatGPT might want to consider for each of the personas mentioned earlier:

  1. Natalie (Traditional Student with Dyslexia):
    • Interface Changes: Increase font size options, provide a dyslexia-friendly font option, offer adjustable contrast settings, and minimize visual clutter.
    • Functionality Changes: Introduce a summarization feature that condenses lengthy text responses into key points, offer an option to paraphrase complex or jargon-filled responses into simpler language, and provide a spell-check feature that highlights potential spelling errors.
    • Example: Instead of displaying a long paragraph in a single block of text, the interface could provide a summarized version with bullet points, using simpler language and shorter sentences.
  2. Jason (Non-Traditional Student with PTSD):
    • Interface Changes: Provide a dark mode option for users who are sensitive to bright screens, offer customizable notification settings, and implement a progress tracker for long tasks to help manage focus and anxiety.
    • Functionality Changes: Introduce a distraction-free mode that hides non-essential elements of the interface, incorporate a timer feature to encourage breaks during long sessions, and add a feature that allows users to save and resume tasks.
    • Example: The interface could include a “Focus Mode” that hides the chatbox and displays only the current task, with a timer and progress bar visible to help Jason track his work.
  3. Sarah (Traditional Student with ADHD):
    • Interface Changes: Implement a clear and hierarchical layout, offer color-coded categories for tasks, and provide a task list feature with the option to set reminders.
    • Functionality Changes: Introduce an auto-save feature to prevent data loss due to distraction, include a focus mode that temporarily disables notifications, and add a “Brainstorming Mode” that helps organize ideas into a structured outline.
    • Example: The interface could have a “Task Manager” feature that displays Sarah’s assignments and allows her to set deadlines and receive reminders, helping her stay organized and on track.
  4. Michael (Non-Traditional Student with Visual Impairment):
    • Interface Changes: Ensure the interface is compatible with screen readers and supports keyboard navigation, provide a high-contrast mode, and offer adjustable magnification options.
    • Functionality Changes: Implement a voice command feature that allows users to navigate the interface hands-free, offer an audio summary option for lengthy responses, and include keyboard shortcuts for common actions.
    • Example: The interface could have a “Voice Control” feature that lets Michael navigate the interface and access functions using voice commands, reducing the need for manual interaction.

By considering these specific changes tailored to the needs of each persona, the UX designer can create an updated version of ChatGPT that offers a more accessible and user-friendly experience for everyone, taking into account their usage characteristics and disabilities.
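To make a couple of these suggestions concrete, here is a minimal sketch of my own (not part of the ChatGPT session) showing how adjustable font size, a dyslexia-friendly font toggle, and a high-contrast mode might be wired into a web client. The element ID, CSS class, and font name are placeholders for illustration, not ChatGPT’s actual implementation:

```typescript
// A minimal sketch (my own, not ChatGPT's) of user-adjustable text
// settings for a chat interface. IDs, class, and font are placeholders.

const chatLog = document.getElementById("chat-log") as HTMLElement;

// Adjustable font size (Natalie: readability), clamped to a sane range.
function setFontSize(px: number): void {
  chatLog.style.fontSize = `${Math.min(Math.max(px, 12), 32)}px`;
}

// Dyslexia-friendly font toggle; assumes OpenDyslexic has been loaded
// elsewhere via an @font-face rule.
function toggleDyslexiaFont(enabled: boolean): void {
  chatLog.style.fontFamily = enabled ? '"OpenDyslexic", sans-serif' : "";
}

// High-contrast mode (Michael: visual impairment), driven by a CSS class.
function toggleHighContrast(enabled: boolean): void {
  document.body.classList.toggle("high-contrast", enabled);
}

// Persist the user's choices so they survive page reloads.
function saveSettings(settings: {
  fontSize: number;
  dyslexiaFont: boolean;
  highContrast: boolean;
}): void {
  localStorage.setItem("a11y-settings", JSON.stringify(settings));
}
```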

 
Me:
Let’s now assume that our different personas all have different cultural backgrounds. What considerations does this place on the UX designer?
 
ChatGPT:

When considering the cultural backgrounds of the personas using ChatGPT, the UX designer should take into account the following considerations:

  1. Language and Communication Styles: Different cultures have unique language and communication styles. For example, some cultures may prefer indirect communication, while others may value directness. The UX designer should ensure that ChatGPT’s language models are trained to understand and respond appropriately to these different styles.
  2. Visual Design and Color Schemes: Colors and visual design elements can have cultural significance. For example, certain colors may have positive or negative connotations depending on the culture. The UX designer should consider cultural sensitivities when choosing color schemes and visual design elements for the interface.
  3. User Preferences and Expectations: Different cultures may have different preferences and expectations when it comes to technology and user interfaces. For example, some cultures may prefer minimalist designs, while others may prefer more visually rich interfaces. The UX designer should conduct user research to understand the preferences and expectations of the personas from different cultural backgrounds.
  4. Content and Content Localization: Content on the ChatGPT interface should be culturally sensitive and inclusive. The UX designer should consider the cultural context of the content and ensure that it is appropriate and relevant to all users, regardless of their cultural background.
  5. Accessibility and Inclusivity: Cultural inclusivity goes hand-in-hand with accessibility and inclusivity. The UX designer should ensure that ChatGPT’s interface is accessible to users from different cultural backgrounds, including those with disabilities, by implementing features such as multilingual support, culturally diverse avatars, and user-friendly interfaces.

Overall, the UX designer should take a culturally sensitive and inclusive approach when designing ChatGPT’s interface, taking into account the unique needs, preferences, and expectations of the personas from different cultural backgrounds. This will help create a more inclusive and user-friendly experience for all users.

 
Me:
Let’s assign different cultural backgrounds to our personas.
 
ChatGPT:

Sure, let’s assign different cultural backgrounds to our personas:

  1. Traditional Student with Dyslexia – Natalie:
    • Cultural Background: Natalie comes from a culturally diverse family with Hispanic roots. She speaks both English and Spanish fluently.
  2. Non-Traditional Student with PTSD – Jason:
    • Cultural Background: Jason grew up in a small town in the Southern United States and has a strong sense of Southern culture and traditions.
  3. Traditional Student with ADHD – Sarah:
    • Cultural Background: Sarah comes from a multicultural background. Her mother is Chinese, and her father is African American. She is fluent in English and conversational in Mandarin.
  4. Non-Traditional Student with Visual Impairment – Michael:
    • Cultural Background: Michael is originally from the Philippines and moved to the United States for college. He is fluent in English and Tagalog, the national language of the Philippines.

With these diverse cultural backgrounds in mind, the UX designer should consider the specific cultural sensitivities, communication styles, and preferences of each persona when designing ChatGPT’s interface. This will help create a more inclusive and culturally sensitive experience for all users.

 
Me:
Now let’s make some UX improvement suggestions for each of the personas based on cultural background.
 
ChatGPT:

Certainly! Here are some UX improvement suggestions for each of the personas based on their cultural backgrounds:

  1. Natalie (Hispanic Background):
    • UX Improvement: Offer a language option for Spanish-speaking users like Natalie. ChatGPT could have an option to switch between English and Spanish language modes, allowing users like Natalie to communicate more comfortably in their preferred language.
  2. Jason (Southern United States Background):
    • UX Improvement: Incorporate elements of Southern culture into the interface design, such as using warm, earthy tones and friendly, informal language. This could help Jason feel more at home while using the product.
  3. Sarah (Multicultural Background):
    • UX Improvement: Provide a culturally diverse set of avatars and user interface elements that reflect Sarah’s multicultural background. This could include avatars with different ethnicities and customizable design elements that allow users to express their unique cultural identities.
  4. Michael (Filipino Background):
    • UX Improvement: Integrate multilingual support for Tagalog-speaking users like Michael. ChatGPT could include an option to switch between English and Tagalog language modes, allowing users to communicate more comfortably in their preferred language.

These UX improvements take into account the unique cultural backgrounds of each persona and aim to create a more inclusive and culturally sensitive experience for all users.

 
Me:
Do you have any other suggestions for how a UX designer could use these personas to improve ChatGPT? Is there anything we may have missed in the creation of the personas that could lead to a better product and experience for the diverse range of ChatGPT users?
 
ChatGPT:

Certainly! Here are some additional suggestions for how a UX designer could use these personas to improve ChatGPT:

  1. Cultural Sensitivity Training for AI: To ensure that ChatGPT’s responses are culturally sensitive and appropriate, the UX designer could work with AI trainers to provide cultural sensitivity training to the language models. This could involve incorporating diverse cultural perspectives and values into the training data and ensuring that the language models understand and respect cultural differences.
  2. User Research and Feedback: The UX designer could conduct user research and gather feedback from users from diverse cultural backgrounds to understand their specific needs, preferences, and pain points. This could involve conducting focus groups, interviews, surveys, and usability testing with users from different cultural backgrounds.
  3. Localization and Translation: To make ChatGPT more accessible to users from different cultural backgrounds, the UX designer could work on localizing the interface and content for different languages and regions. This could involve translating the interface and content into multiple languages and adapting the design to suit the cultural norms and preferences of different regions.
  4. Cultural Contextualization: The UX designer could work on contextualizing ChatGPT’s responses to better reflect the cultural context of the user. For example, if a user asks a question about a cultural tradition or practice, ChatGPT could provide a response that takes into account the cultural context and significance of the tradition or practice.
  5. Inclusive Design and Accessibility: To ensure that ChatGPT is accessible to users from diverse cultural backgrounds, the UX designer could work on making the interface and content more inclusive and accessible. This could involve incorporating features such as text-to-speech, screen readers, and voice commands to make ChatGPT more accessible to users with disabilities.

By taking these suggestions into account, the UX designer can create a more inclusive and culturally sensitive experience for all users of ChatGPT, regardless of their cultural background.

 
Me:
Can you give me a good prompt to use for an image generator that could provide me with images of each of these personas? Include the relevant details the image generator would need.
 
ChatGPT:

Certainly! Here’s a good prompt for an image generator that could provide you with images of each of these personas:

Prompt: Generate a set of diverse avatar images representing the following personas:

  1. Natalie – A 21-year-old female college student majoring in Psychology. She is of Hispanic descent, has dyslexia, and speaks both English and Spanish fluently.
  2. Jason – A 35-year-old male college student majoring in Computer Science. He is a military veteran with PTSD and has a strong sense of Southern culture and traditions.
  3. Sarah – A 20-year-old female college student majoring in Journalism. She is of mixed Chinese and African American heritage, has ADHD, and is conversational in Mandarin.
  4. Michael – A 29-year-old male college student majoring in Business Administration. He is originally from the Philippines, has visual impairment, and is fluent in English and Tagalog.

Details Needed:

  • Gender: Male or Female
  • Age: 20-35
  • Ethnicity/Cultural Background: Hispanic (Natalie), Southern United States (Jason), Mixed Chinese and African American (Sarah), Filipino (Michael)
  • Disabilities: Dyslexia (Natalie), PTSD (Jason), ADHD (Sarah), Visual Impairment (Michael)
  • Major/Field of Study: Psychology (Natalie), Computer Science (Jason), Journalism (Sarah), Business Administration (Michael)
  • Languages Spoken: English, Spanish (Natalie), English (Jason), English, Mandarin (Sarah), English, Tagalog (Michael)

The image generator could use these details to create a diverse set of avatar images that accurately represent each of the personas.

 
Me:
Please abbreviate that significantly and ask for just one image showing them with handwritten name tags on their shirts.
 
ChatGPT:

Certainly, here’s a more concise version of the prompt:

Prompt: Generate one image of four avatars, each representing a college student with a handwritten name tag on their shirt. Each student should be visibly different, representing the following personas:

  1. Natalie: 21-year-old female with Hispanic background, dyslexia, Psychology major, speaks English and Spanish.
  2. Jason: 35-year-old male with Southern United States background, PTSD, Computer Science major, English speaker.
  3. Sarah: 20-year-old female with mixed Chinese and African American heritage, ADHD, Journalism major, speaks English and Mandarin.
  4. Michael: 29-year-old male from the Philippines, visual impairment, Business Administration major, speaks English and Tagalog.

Details Needed:

  • Gender: Male or Female
  • Age: 20-35
  • Ethnicity/Cultural Background: Hispanic (Natalie), Southern United States (Jason), Mixed Chinese and African American (Sarah), Filipino (Michael)
  • Disabilities: Dyslexia (Natalie), PTSD (Jason), ADHD (Sarah), Visual Impairment (Michael)
  • Major/Field of Study: Psychology (Natalie), Computer Science (Jason), Journalism (Sarah), Business Administration (Michael)
  • Languages Spoken: English, Spanish (Natalie), English (Jason), English, Mandarin (Sarah), English, Tagalog (Michael)

Please provide a single image showing all four avatars, each with a handwritten name tag on their shirt.

 

Thoughts on the Ethics of Offering Incentives for Surveys and Questionnaires

At the end of each semester, my students are provided with a link to a survey seeking their feedback regarding my performance as an instructor for their course. This is standard practice at many institutions and typically organized and/or administered at the direction of an institutional effectiveness office, not the instructors themselves. Although the students at my institution receive an email notification that their participation is needed/requested (and a reminder email or two), the emails offer only a vague (meaningless?) incentive: the promise of institutional improvement.

Because an educational institution should theoretically value higher levels of participation in end-of-course surveys, and because the level of my own students’ participation can be viewed (unfairly) as an indirect reflection of my professional capabilities, I have often wondered whether I should provide some type of incentive for my own students to complete their evaluation surveys. I am aware that some of my colleagues do provide extra credit for end-of-course survey completion, but this strikes me as somewhat unethical because a student (i.e., a research participant) who is aware of a reward attached to their participation could then have a different outlook on the subject of the research or the research itself. In other words, rewards make people happy. Rewards change people’s moods and perceptions. Rewards, then, can introduce a type of subtle bias. This is why I have typically eschewed rewards to encourage participation in my end-of-course surveys. But this doesn’t necessarily mean that I have eliminated bias. Not at all.

When people (like students) come to expect rewards for completing surveys, researchers could potentially create a bias simply by not offering some kind of expected incentive for a person’s time and thoughts. If a person feels like they are being compelled to complete a survey without any promise of a reward, they may be unhappy/annoyed/frustrated about doing so, and those kinds of feelings could potentially filter into their survey responses—perhaps even unconsciously.

By not offering my own students rewards for their participation in my end-of-course surveys, I do fear that I am impacting their responses in an unintended way connected to their emotional state when completing the survey. This is made worse by my colleagues who do offer incentives and thereby create a stronger expectation on the part of students, an expectation that is naturally followed by disappointment when not met.

Instead of being “damned if I do, damned if I don’t,” I am more “favored if I do, damned if I don’t” when it comes to providing incentives for survey participation. But either way, I would argue that the data is damned.

If the principles of offering (and refraining from offering) survey incentives which I have laid out for my own course surveys hold generally true for all research efforts that include voluntary data collection, then it seems the researcher concerned with participant bias is faced with a no-win situation: provide a reward and introduce a favorable response bias, or refrain from providing a reward and introduce a potential unfavorable response bias.

This matter is further complicated by factors such as socioeconomic status since those who are more in need of incentives may be more susceptible to bias (in my case, students with lower grades and more in need of extra credit would be impacted differently by rewards than those who do not need them). A 2019 study supports the idea that “There is also a risk that incentives may introduce bias, by being more appealing to those with lower socioeconomic status.” Although that particular article is concerned with bias in terms of the sample generated, the connection between incentives and participants’ attitudes/responses should be a matter of concern as well.

So what is an ethical researcher to do?

At Georgia State University, where I am a graduate student, the course evaluation process doesn’t exactly provide an incentive for students who complete instructor evaluations. Instead, it withholds a student’s ability to view their final grades until they cooperate and submit their reviews.

This doesn’t seem like the most ethical way of solving the participation problem, but I do imagine it is effective at generating high response rates without costing the institution a nickel. A better approach, I think, is working harder to educate students (research participants) about the risks involved in providing incentives and punishments when trying to encourage participation in research. This at least gives students some answers as to why an incentive is not being offered and hopefully garners more credibility for the study in the eyes of the people who are being asked to participate. I have taken to this approach with my course evaluations, and I think it may be somewhat effective at preventing the bias I am seeking to eliminate. It is not, however, the most effective way to increase response rates.

 

Implications for my UX research

There is probably no perfect system for achieving optimal survey participation while also eliminating bias, but researchers can at least be thoughtful and careful about their efforts. Although I have not designed my final UX course project yet, I can imagine the need for survey responses to inform my efforts. Perhaps I will simply try to encourage participation by explaining the virtue of my educational field work and also explaining the problems associated with offering incentives. Perhaps I will just offer incentives to participants and carefully explain this choice within my report. Another option might be to try to defuse the potential bias created by an incentive by adding a question at the beginning of the questionnaire like the following:

“Could providing incentives to participate in surveys make respondents less honest (i.e. more positive) in their answers?”

Such a question would at least force the participants to consider the issue right before they answer other questions where their unbiased feedback is sought. I can see how creating that kind of basic awareness could encourage less biased, more honest responses.

 

Other readings and notes

Most sources I found related to this issue deal more with introducing bias in the sample. I didn’t find any that specifically deal with the quality of responses being impacted by incentives.

This article found that incentives provided upfront work better than incentives provided after participation. I would have expected the opposite, and I suppose that is me not believing the best in people.

“Prepaid incentives (unconditional incentives) are more effective in increasing response rates in comparison to payment after survey completion (conditional incentives) [64, 65]. This could be explained by social exchange theory as providing participants monetary incentives in advance encourages them because they feel they should reciprocate for the reward they receive by completing the survey.”

This is a decent write-up of different types of bias in surveys and surveying.

ChatGPT and claude.ai reflect mostly what I found (or didn’t find) when looking into this issue, but when pressed a bit, ChatGPT gave me something closer to what I was looking for in terms of response bias. Of course, just because it gave me what I was after doesn’t mean I would trust it without further verification from some better human sources. (These generative AI systems are no solution for confirmation bias, and I do fear they might make that problem even worse.)

UX Research Notes: Using Screen Recording to Capture ChatGPT Usage

On a Friday morning I recruited two family members to participate in a brief research activity using ChatGPT. I used screen recording through Microsoft Teams to capture the participants’ interactions with the free version of OpenAI’s ChatGPT tool (ChatGPT 3.5). My goal as the researcher was exploratory in nature, seeking some feedback on the interface and functionality of the ChatGPT desktop web application.

Instead of simply providing the participants access to the ChatGPT website without any defined purpose for interacting with it, I designed an activity to encourage them to make use of the tool. The instructions for the activity were as follows:

Instructions: Use the generative-AI tool (ChatGPT or claude.ai) to engage in a virtual debate. You can ask the AI tool to be your debate partner and for suggestions on a topic, or you may suggest a topic. The goal is to “win” the debate.

*Note: topics related to recent current events don’t work well because the generative AI tools do not have access to information after 2021 (ChatGPT) and 2022 (claude.ai). 

Both users were given this same information on a sheet of paper and verbally informed of the purpose of the activity. They were not given time parameters; instead, they were told to take as much time as they needed and to stop whenever they were bored or felt they could no longer make progress toward their goal.

The two participants were significantly different in terms of demographics. This seemed desirable because younger and older tech users sometimes have different approaches and attitudes toward interacting with technology. Here are the participant profiles:

Participant 1 (P1): 10-13 years old, white male, from a diverse suburb of Atlanta, public school educated, never used ChatGPT or other generative-AI platforms.

Participant 2 (P2): 35-40 years old, white female, from a diverse suburb of Atlanta, college educated, very limited past use of ChatGPT—maybe 2 or 3 times total for work and curiosity reasons.

Results Summary

P1 engaged with ChatGPT for over 20 minutes, exploring several topics he proposed himself, then debating a topic suggested by ChatGPT, and finally proposing and debating one last topic of his own.

  • Interactions by the numbers: 14 typed prompts/replies, average length 20 words.

P2 engaged with ChatGPT for 10 minutes, focusing on only one topic which she proposed.

  • Interactions by the numbers: 9 typed prompts/replies, average length 13 words.
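The “by the numbers” counts above are simple enough to tally by hand, but for longer sessions a small script could do it. Here is a rough sketch; it assumes each participant’s typed prompts have been copied into a plain-text file, one prompt per line (a convenient assumption, not an actual Teams export format):

```typescript
// A rough sketch: count typed prompts and average words per prompt from a
// plain-text file with one prompt per line (hypothetical export format).
import { readFileSync } from "fs";

function promptStats(path: string): { count: number; avgWords: number } {
  const prompts = readFileSync(path, "utf8")
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0);

  const totalWords = prompts.reduce((sum, p) => sum + p.split(/\s+/).length, 0);

  return {
    count: prompts.length,
    avgWords: prompts.length ? Math.round(totalWords / prompts.length) : 0,
  };
}

// e.g. { count: 14, avgWords: 20 } for Participant 1's session
console.log(promptStats("participant1_prompts.txt"));
```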

UX Observations

Both participants found the system to be intuitive and easy to use.

Neither participant made use of the small icons below each ChatGPT response (thumbs down, regenerate, copy). When asked why, Participant 2 said she never uses those features (especially the “like/dislike” buttons) because she doesn’t care to inform the algorithms that determine the types of content she is fed on social media sites.

Both participants commented that the speed at which ChatGPT responses displayed and auto-scrolled was too fast. From Participant 1:

“I would have the I would have chat GPT pull up its arguments slower because it feels like a lot to read it. The pace felt really, really fast. It was almost like stressful to read them. And I’m really fast reader, and I still couldn’t read it all.”

Although I did not want to interrupt the session, I was curious to know whether Participant 1 was reading the full responses generated by ChatGPT, so I asked him in the middle of his session. He replied that he was just “skimming” them. He likely chose to skim because the responses were auto-scrolling quickly and were somewhat lengthy compared to the short prompts/replies he was typing.

Both participants also said they liked the disclaimer at the bottom of the screen that says “ChatGPT can make mistakes. Consider checking important information.” However, neither noticed it until the end of the session. Neither was sure whether it had been there at the beginning, and Participant 2 suggested it ought to be a pop-up at the start that must be dismissed by the user.

Participant 1 suggested custom font options, especially in terms of size for readability.

Participant 1 also suggested the ability to input prompts/replies with speech and an option to have ChatGPT’s responses read out loud to the user.
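Both of those suggestions are feasible with current browser APIs. As a rough sketch of my own (not something the participants or ChatGPT specified), the built-in Web Speech API could handle read-aloud, and dictation is possible in Chromium-based browsers through the still vendor-prefixed SpeechRecognition interface:

```typescript
// Sketch: read a ChatGPT response aloud using the browser's built-in
// speech synthesis (widely supported).
function readAloud(responseText: string): void {
  const utterance = new SpeechSynthesisUtterance(responseText);
  utterance.rate = 0.9; // slightly slower, per the pacing complaints above
  window.speechSynthesis.speak(utterance);
}

// Sketch: speech-to-text input. SpeechRecognition is vendor-prefixed and
// not available in all browsers, so this is best-effort illustration only.
function dictatePrompt(onResult: (text: string) => void): void {
  const SR =
    (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
  if (!SR) {
    console.warn("Speech recognition not supported in this browser.");
    return;
  }
  const recognition = new SR();
  recognition.lang = "en-US";
  recognition.onresult = (event: any) => {
    onResult(event.results[0][0].transcript);
  };
  recognition.start();
}
```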

Final Thoughts

Using the Teams screen recording option was easy, and the transcript proved useful despite its inaccuracies. I would use Teams in the future for similar data collection and the recording of post-use interviews.

In the future, I would encourage participants to verbalize more of their thoughts during the use of the product. I was able to get some good feedback after my participants completed their tasks, but more in-the-moment feedback would give me more material to work with in a post-use interview.

Using MS Teams Screen Recording for UX Research

When conducting UX research on digital artifacts, like websites or mobile apps, UX designers can benefit greatly from seeing how a user interacts with the product in real time. Researchers and designers rely on data to make decisions about how to improve a product, and a complete look at a user’s interactions can provide insightful data points. In the past, designers who wanted this kind of data may have been limited to looking over a user’s shoulder as they interacted with a product, or they may have relied on post-use reports in text formats which, although helpful, would have lacked the kind of “soft” data that can be gleaned from observing a user in action.

Today, there is no shortage of tools for UX researchers to choose from when it comes to screen recording. The choices are overwhelming, and even finding reliable comparisons is tricky, as so many of the reviews are sponsored by the companies selling the tools, sometimes in misleading ways that are hard to detect. This one, for example, has nothing negative at all to say about Camtasia. This comparison seems a bit more reliable and points to Camtasia’s high upfront costs as a drawback.

Since evaluating and testing new screen recording tools is such a time-consuming process, it might make sense for a UX researcher to simply rely on the platforms they already know and use for other personal and professional reasons. Ubiquitous tools like Zoom and Apple’s FaceTime allow for screen sharing, and Zoom has recording options as well. (FaceTime recording is possible by adding an app to the device.)

One productivity platform used by over 300 million people around the world is Microsoft Teams. It too has screen sharing and recording capabilities. Since this is a platform I have been required to use in the past and have come to know a bit better than other options, I decided to review it through the lens of a UX researcher/designer. My opinion is that Teams is an effective and reasonably easy-to-operate option for screen sharing and recording, especially for people who have access through institutions and organizations already paying the licensing fees to use the product.

Below are some more detailed thoughts and a demo of using Teams for UX research purposes.

 

There are at least two ways to start a new session: “Meet Now” and “Add New Meeting.” If the purpose is to get more than one person together with some advance notice, the “Add New Meeting” option provides the means to invite others and syncs directly with their Outlook calendars (assuming they have and use MS Outlook for email). The “Meet Now” option, by contrast, works quickly and allows the meeting creator to invite users to a meeting in progress. Either way, it is relatively easy to invite others to a meeting session. If the guests are part of the organization, they can be invited with a search feature. If the invited guests are not part of the organization, they can still join, but the meeting creator must send them a link.
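For researchers who would rather script session setup than click through the calendar UI, Microsoft’s Graph API can also create Teams meetings programmatically. The sketch below is a rough illustration only: it assumes you already have an OAuth access token with the appropriate permissions, and the subject line and times are made up.

```typescript
// A rough sketch: create a Teams meeting through Microsoft Graph (v1.0).
// Assumes an OAuth access token with the OnlineMeetings.ReadWrite
// permission is already in hand; subject and times are placeholders.
async function createResearchSession(accessToken: string): Promise<string> {
  const response = await fetch(
    "https://graph.microsoft.com/v1.0/me/onlineMeetings",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        subject: "UX Research Session",
        startDateTime: "2024-02-02T15:00:00-05:00",
        endDateTime: "2024-02-02T16:00:00-05:00",
      }),
    }
  );
  const meeting = await response.json();
  return meeting.joinWebUrl; // the link to send to participants outside the org
}
```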

For a UX researcher, an important goal is surely to make the process of joining a session as easy as possible. As Teams continues to gain market share as a productivity platform for organizations, I think it could overtake Zoom and others as the platform of choice for people who want to record screen-sharing sessions. Using the best-known and most-used platform is a good move for a UX researcher, even if doing so sacrifices some functions available in other tools. Research participants should not have to download new apps and learn new interfaces. Placing those barriers in the way will only make it harder for a UX researcher to find willing subjects.

Because Teams is designed as a meeting-facilitation tool, a session allows multiple people to join and view the meeting. I have been in meetings with around 50 people before. I don’t know how many participants one meeting can handle, but I would imagine other platforms could be more limited than Teams in this regard.

It is important to note that Teams meetings, like Zoom calls, can be run from a browser or an installed app. I have found this feature especially helpful when one or the other gives me issues. It’s nice to have the option to switch, usually from the app to the browser in my case. Of course, for invited guests who do not have the Teams app, the browser option should still allow them to join and participate.

I did notice, as I created a demo meeting using “Add New Meeting” for this review, that there was an option to “record automatically.” That seems helpful, because I have had sessions before that I meant to record and forgot. A pop-up once the meeting starts asking whether the user would like to record would be even better, but I do not think I have seen this happen. Of course, the downside to the pre-meeting record setting is that one might get caught unaware of being recorded after the meeting has already started.

Teams meetings offer users dozens of background filters that range from a simple blurred background to professional looking office backgrounds to distracting animations. These are helpful for privacy purposes and add an element of fun as well, but the animated backgrounds could pose problems for researchers and participants who need to focus on a specific artifact.

For a look at some of the features of Teams screen recordings offer during a session, I created the following demo:

UX Demo 3 with Jason Palmer-20240202_152729-Meeting Recording.mp4

I get the feeling that most screen recording tools are about the same. It might make sense for UX researchers to just pick one they are already somewhat familiar with (like Teams or Zoom) and learn how to maximize its utility, as opposed to spending their time shopping around and trying to learn new interfaces and functions.

For me, Teams seems like a competent and effective tool with enough versatility for most UX research purposes.