Book Review: AI and UX: Why Artificial Intelligence Needs User Experience

AI and UX: Why Artificial Intelligence Needs User Experience, by Gavin Lew and Robert M. Schumacher Jr. 1st edition 2020, Apress. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&AuthType=ip,shib&db=cat06559a&AN=ggc.996791144102945&site=eds-live&scope=site.

Why did I choose this book?

My current research interests revolve around all things generative AI, especially its applications for writing and teaching. I am trying to learn as much as I can, and I think an understanding of the UX aspects related to AI is beneficial. This led me to seek a UX book that might provide insight into how AI systems and products are designed with end users in mind.

Because the release of ChatGPT in November of 2022 drastically altered the entire AI landscape, I was hoping to find a recently released book on UX and AI, but I was unable to find one that looked like legitimate scholarship. This is not entirely surprising since it takes a while to research, write, and publish. So instead, I settled for the most recent book on the topic I could find that looked reliable. There were not many options to choose from, but this book, despite its 2020 copyright date, proved a good read for someone like me who is an expert in neither UX nor AI (yet).

Who are the authors?

The authors of this book are Gavin Lew and Robert Schumacher Jr. Instead of trying to summarize their already relatively short biographies in the book, I will include them in full:

Gavin Lew has over 25 years of experience in the corporate and academic environment. He founded User Centric and grew the company to be the largest private UX consultancy in the United States. After selling the company, he continued to lead a North American UX team to become one of the most profitable business units of the parent organization. He is a frequent presenter at national and international conferences and the inventor of several patents. He is an adjunct professor at DePaul and Northwestern universities. Gavin has a Masters in Experimental Psychology from Loyola University and is currently the Managing Partner of Bold Insight, part of ReSight Global, a globally funded UX consulting practice across North America, Europe, and Asia.

 Robert M. Schumacher Jr. has more than 30 years of experience in academic, agency, and corporate worlds. He co-owned User Centric with Gavin from its early stages until it was sold to GfK in 2012. While at User Centric, Bob helped found the User Experience Alliance, a global alliance of UX agencies. Also, he founded User Experience Ltd, a UX agency in Beijing. He is co-founder, co-owner, and Managing Partner of Bold Insight, part of ReSight Global, a global UX company. Bob was the editor of and contributor to The Handbook of Global User Research (2009). He has several patents and dozens of technical publications, including user interface standards for health records for the US government. He also is an Adjunct Professor at Northwestern University. Bob has a Ph.D. in Cognitive and Experimental Psychology from the University of Illinois at Urbana-Champaign.

Basically, we have two writers with plenty of UX experience (related to technology and other fields) and backgrounds in psychology. It might have been nice to have an author with more of a computer-science background paired up with someone who knows the psychology behind UX, but these two authors also have a long-established working relationship, which enhances their ability to communicate throughout the book.

Summarizing the Chapters and Some Highlights:

In the preface, the authors state the following:

“Our perspective on how AI can be more successful is admittedly and unashamedly from a UX point of view. AI needs a focus on UX to be successful.”

This is a central theme in the book. The authors recognize the role UX must play in the development of AI systems, tools, and interfaces. Having now had some experience myself with a few of the generative AI platforms, I think the authors are correct: an emphasis on UX for AI tools won’t just make those tools easier and more pleasant to use; better UX can actually save these tools from being written off by the general public as novelties or passing fads. The failure of AI to live up to hype in past decades did lead to these kinds of dismissals, but the latest wave of advancements may have reached a tipping point that insulates AI from another major cultural setback or lengthy pause.

Chapter 1: Introduction to AI and UX

This chapter does a respectable job of making the important connections between UX and AI. The authors prove that they know enough about these connections to be credible voices from which the reader can learn.

Drawing from their significant UX work, Lew and Schumacher tell us that “For any product, whether it has AI or not, the bare minimum should be that it be usable and useful. It needs to be easy to operate, perform the tasks that users ask of it accurately, and not perform tasks it isn’t asked to do. That is setting the bar really low, but there are many products in the marketplace that are so poorly designed where this minimum bar is not met” (16).

Throughout the book, the authors make a good case for the application of pretty much all general UX principles to AI products. Chapter 1 just lays out the landscape and major connections.

Chapter 2: AI and UX: Parallel Journeys

As the title implies, Chapter 2 provides a nice historical walk through AI and UX development. Particularly interesting is the focus on “AI winters” that followed periods of overhyped AI performance in the 1960s and again in the 1980s. The authors also mention the “domain-specific AI winter” for AI personal assistants, which followed the overhyping of Siri in the early 2010s. Part of the reason for these AI winters is that the developers of the systems were not focused enough on user experience.

I appreciate the differentiation the authors try to make between HCI (human-computer interaction) and UX in chapter 2:

“Where HCI was originally focused heavily on the psychology of cognitive, motor, and perceptual functions, UX is defined at a higher level—the experiences that people have with things in their world, not just computers. HCI seemed too confining for a domain that now included toasters and door handles. Moreover, Norman, among others, championed the role of beauty and emotion and their impact on the user experience. Socio-technical factors also play a big part. So UX casts a broader net over people’s interactions with stuff. That’s not to say that HCI is/was irrelevant; it was just too limiting for the ways in which we experience our world” (50).

The way I interpret this is that HCI is akin to a substratum of UX.

Chapter 3: AI Enabled Products are Emerging All Around Us

And

Chapter 4: Garbage In, Garbage Out

These two chapters are where the book shows its age a bit as a pre-ChatGPT publication. Although there are some interesting examples of AI systems discussed in Chapter 3, the next chapter disconnects enough from the user experience that I did not find it valuable as a UX text. The focus of Chapter 4 is the data that AI runs on. The authors are correct that without quality data, the user experience of any AI product will suffer, but since current AI systems are such black boxes when it comes to their training data, this is somewhat of a moot point for me right now.  

I will say that I perked up a bit reading about voice assistants and Grice’s four maxims for communication (67). Anyone studying generative AI could benefit from using those maxims as a starting point for evaluating what our machines are capable of. Current LLMs and systems based on LLMs seem to handle three of the maxims (quantity, relevance, and clarity) with relative ease much of the time, but the truthfulness of LLMs’ communication is where many people are finding the most problems. One could argue that truthfulness is the most important of the four, but it is obvious that advances in the other three areas have come quickly and impressively. I think it is entirely possible that AI systems make progress on that fourth maxim in the near future. And if things in the AI world are not interesting enough for someone yet, they will be once the programs are more reliably accurate purveyors of information.

Chapter 5: Applying a UX Framework

This final chapter is still relevant in the post-ChatGPT world. It ties the idea of UX and AI back together (whereas they diverged a bit in the previous two chapters). This quote at the beginning of the chapter seems especially relevant:

“For many people, there’s still a hesitance, a resistance, to adopt AI. Perhaps it is because of the influence of sci-fi movies that have planted images of Skynet and the Terminator in our minds, or simply fear of those things that we don’t understand. AI has an image problem. Risks remain that people will get disillusioned with AI again” (109).

I think the authors are correct that people could become disillusioned with AI again, but this will probably be less about the UX dimension and more about the existential threats, security concerns, and intellectual property issues that accompany 21st century AI. Either way, since AI is becoming so ubiquitous, I would not predict another AI winter like the ones the authors detail in Chapter 2.

One of the most interesting points in Chapter 5 regards the purpose of a product. As they lay out the case for applying a UX framework to AI, the authors pose the following questions:

“Probably the most important thing that defines any application is what it does—we call this “utility” or “functionality” or “usefulness.” Basically, is there a perceived functional benefit? In more formal terms, does the application (tool) fit for the purpose it was designed for?”

The reason this is interesting is that I am not sure the creators of ChatGPT and the other generative AI systems (or any of the precursors dating back to the 1960s) really had specific end-user functions in mind—at least not as the driving motivation for their creations. It seems like the systems have all been designed just to see if the creators could make a machine that could communicate like a human and display some level of “intelligence.” Along the way, clever people have figured out how to leverage this technology for different purposes, and profit-driven people have too, but I really don’t think that thoughts of the usefulness of LLMs weighed heavily on the creators’ minds. Evidence for this exists within the current user experience of ChatGPT. When users first access this application, they see an interface with suggestions for how the app could be used. That is weird.

When we buy tools or access technologies, we typically already have the function in mind; that’s why we sought the tool to begin with. Generative AI companies are almost saying to the user, “Here it is. Figure out for yourself what purpose it has for you.” For the time being, that is the user experience for many users of generative AI.

As for the user experience of Lew and Schumacher’s book, I think they did a decent job of connecting two fields that need to be connected. A reader with a good grasp on AI could probably skip Chapters 3 and 4, but there is plenty of helpful information and background in Chapters 1, 2, and 5 that still holds up well in this four-year-old title from Springer/Apress.

Using ChatGPT for UX Persona Creation and Canva for Matching Persona Images

I used ChatGPT 3.5 (free version) to create some personas that could theoretically apply to my UX project. After that, I asked ChatGPT to help me think about ways I could apply the different characteristics from the personas to improve the UX design of ChatGPT’s own interface and functionality. Overall, I think the feedback that I got from the machine was helpful for UX research. Some answers were a bit vague or obvious, but even the vague answers were often decent starting points. The full conversation will be included below, so feel free to judge for yourself how helpful AI can be as a UX research assistant. Of course, any shortcomings of the machine responses may only be reflecting the shortcomings of the prompts and prompt writer.
 
I also used Canva to create matching images (posted above) for the personas. Using image generation was a new experience for me, and I found the results impressive for a free version. I originally tried the Bing image generator without any luck at all. That one runs on DALL-E 3 and is supposed to be excellent, but I couldn’t get it to generate anything. With Canva, it did take a bit of work and multiple “re-generation” attempts for each image to come out looking realistic. The prompts that ChatGPT gave me to feed to Canva were too long. I had to just write my own. Here is what the one for the third persona looked like:
“Realistic image of 35 year old male military vet in plain clothes sitting at a laptop.”
That was a revised prompt. Since the first attempt did not include “in plain clothes,” all the images were of a man in military uniform. Of course, I could have just left out the “military vet” altogether, but I wanted to see if this extra detail would have an impact on the image. Maybe the physical stature of this persona is reflective of the military vet detail, just as the glasses on the last persona are probably reflective of the “visual impairment” detail that I included. When creating these images, issues with eyes and hands seem to be especially prominent (see hands on lower third image), but it is so easy to re-generate that a user can get as picky as they want–at least within the usage limits of the free version anyway. I suppose there is value in giving a persona a visual image for the researcher to connect with, yet I don’t see this serving my own research very much. It did give me decent visuals to help spice up this post though!
 
Here is a link to my ChatGPT session (better formatting), or you can read the basic full text below:
 

UX Persona Creation and Application

Jason Palmer, February 23, 2024
 
Me:
Please help me by creating four different personas for user experience/user design research. The product they are using is ChatGPT. They are all college students. Two of them are traditional and two of them are non-traditional students. Two of them have disabilities that impact how they learn and use technology. They are all students at a diverse four-year college in suburban Atlanta. Provide other useful characteristics, details, and insights for each of these four different personas: things like what they might use the product for, how they might go about using it, prior experience, etc.
 
ChatGPT:

Sure, I can provide some hypothetical personas for ChatGPT users. However, please note that the following personas are fictional and are based on general assumptions and typical user behaviors, not on real individuals.

  1. Traditional Student – Natalie:
  • Age: 21
  • Gender: Female
  • Major: Psychology
  • Background: Grew up in a suburban area, comes from a middle-class family.
  • Disability: Dyslexia
  • Tech Comfort: Somewhat comfortable, uses a laptop and smartphone regularly.
  • ChatGPT Usage: Uses ChatGPT for research and writing assistance, especially for tasks that involve extensive reading or writing. Finds ChatGPT’s summarization and paraphrasing features helpful.
  • Experience: Natalie has used language assistance tools in the past and is looking for a more versatile tool that can help her with her specific learning needs. She appreciates AI tools that can understand natural language and provide accurate and concise responses.
  2. Non-Traditional Student – Jason:
  • Age: 35
  • Gender: Male
  • Major: Computer Science
  • Background: Military veteran, came back to college after serving.
  • Disability: PTSD
  • Tech Comfort: Very comfortable with technology, has experience with coding and software development.
  • ChatGPT Usage: Uses ChatGPT to assist with coding tasks, such as generating code snippets or checking for errors in his code. Also uses ChatGPT for brainstorming and project planning.
  • Experience: Jason is familiar with AI and machine learning technologies. He values efficiency and accuracy in the tools he uses. He appreciates ChatGPT’s ability to understand technical language and provide relevant information.
  3. Traditional Student – Sarah:
  • Age: 20
  • Gender: Female
  • Major: Journalism
  • Background: Grew up in a rural area, comes from a working-class family.
  • Disability: ADHD
  • Tech Comfort: Comfortable with technology, uses a laptop and smartphone regularly. Struggles with focus and organization.
  • ChatGPT Usage: Uses ChatGPT for brainstorming and organizing ideas for articles and assignments. Finds the tool’s ability to generate ideas and provide structure helpful.
  • Experience: Sarah has used other productivity tools in the past but has found them to be overwhelming or too rigid. She appreciates ChatGPT’s flexibility and ease of use.
  4. Non-Traditional Student – Michael:
  • Age: 29
  • Gender: Male
  • Major: Business Administration
  • Background: Grew up in a suburban area, comes from an upper-middle-class family.
  • Disability: Visual Impairment (Partially Blind)
  • Tech Comfort: Comfortable with technology, uses screen readers and other assistive technologies.
  • ChatGPT Usage: Uses ChatGPT for research and writing assistance, especially for tasks that involve extensive reading or writing. Finds ChatGPT’s summarization and paraphrasing features helpful.
  • Experience: Michael has used other language assistance tools in the past but has found them to be less accurate or accessible. He appreciates ChatGPT’s ability to understand natural language and provide accurate responses.
 
Me:
Now provide a summary for persona #1 (Natalie) that explains how the UX designer of an updated version of ChatGPT could apply the characteristics of the personas (especially the disability information) to an updated version of the ChatGPT interfaces (both web and mobile).
 
ChatGPT:

Based on the persona of Natalie, a UX designer for an updated version of ChatGPT could consider several key characteristics to enhance the accessibility and usability of the product for users with dyslexia:

  1. Readable Text: Ensure that the text on the interface is clear, easy to read, and uses dyslexia-friendly fonts such as OpenDyslexic or Dyslexie. The designer can also consider options for adjustable font size and contrast settings to accommodate individual preferences.
  2. Intuitive Navigation: Simplify the navigation of the interface by using clear icons and labeling, and avoid using complex or ambiguous language that might be difficult for users with dyslexia to understand.
  3. Assistive Technology Integration: Make sure that the interface is compatible with screen readers and other assistive technologies commonly used by individuals with dyslexia.
  4. Language Processing Features: Enhance the language processing capabilities of the ChatGPT to provide accurate and concise responses that are easier for users with dyslexia to understand. This might include features like summarization and paraphrasing, as well as the ability to break down complex information into simpler terms.
  5. Customizable Settings: Allow users to customize the interface according to their individual needs, such as adjusting the speed of text presentation or enabling visual cues to aid in comprehension.
  6. User Education and Support: Provide clear instructions and guidance on how to use the interface effectively, and offer accessible support resources such as tutorials or FAQs.

By taking these characteristics into account, the UX designer can create an updated version of ChatGPT that is more accessible and user-friendly for individuals with dyslexia like Natalie.

 
Me:
Now, taking into account all of the users’ usage characteristics, what specific changes to the ChatGPT interface and functionality should the UX designer consider? Please be as specific as possible, providing examples and explanations of the possible changes.
 
ChatGPT:

Certainly, let’s consider the specific changes the UX designer of ChatGPT might want to consider for each of the personas mentioned earlier:

  1. Natalie (Traditional Student with Dyslexia):
    • Interface Changes: Increase font size options, provide a dyslexia-friendly font option, offer adjustable contrast settings, and minimize visual clutter.
    • Functionality Changes: Introduce a summarization feature that condenses lengthy text responses into key points, offer an option to paraphrase complex or jargon-filled responses into simpler language, and provide a spell-check feature that highlights potential spelling errors.
    • Example: Instead of displaying a long paragraph in a single block of text, the interface could provide a summarized version with bullet points, using simpler language and shorter sentences.
  2. Jason (Non-Traditional Student with PTSD):
    • Interface Changes: Provide a dark mode option for users who are sensitive to bright screens, offer customizable notification settings, and implement a progress tracker for long tasks to help manage focus and anxiety.
    • Functionality Changes: Introduce a distraction-free mode that hides non-essential elements of the interface, incorporate a timer feature to encourage breaks during long sessions, and add a feature that allows users to save and resume tasks.
    • Example: The interface could include a “Focus Mode” that hides the chatbox and displays only the current task, with a timer and progress bar visible to help Jason track his work.
  3. Sarah (Traditional Student with ADHD):
    • Interface Changes: Implement a clear and hierarchical layout, offer color-coded categories for tasks, and provide a task list feature with the option to set reminders.
    • Functionality Changes: Introduce an auto-save feature to prevent data loss due to distraction, include a focus mode that temporarily disables notifications, and add a “Brainstorming Mode” that helps organize ideas into a structured outline.
    • Example: The interface could have a “Task Manager” feature that displays Sarah’s assignments and allows her to set deadlines and receive reminders, helping her stay organized and on track.
  4. Michael (Non-Traditional Student with Visual Impairment):
    • Interface Changes: Ensure the interface is compatible with screen readers and supports keyboard navigation, provide a high-contrast mode, and offer adjustable magnification options.
    • Functionality Changes: Implement a voice command feature that allows users to navigate the interface hands-free, offer an audio summary option for lengthy responses, and include keyboard shortcuts for common actions.
    • Example: The interface could have a “Voice Control” feature that lets Michael navigate the interface and access functions using voice commands, reducing the need for manual interaction.

By considering these specific changes tailored to the needs of each persona, the UX designer can create an updated version of ChatGPT that offers a more accessible and user-friendly experience for everyone, taking into account their usage characteristics and disabilities.

 
Me:
Let’s now assume that our different personas all have different cultural backgrounds. What considerations does this place on the UX designer?
 
ChatGPT:

When considering the cultural backgrounds of the personas using ChatGPT, the UX designer should take into account the following considerations:

  1. Language and Communication Styles: Different cultures have unique language and communication styles. For example, some cultures may prefer indirect communication, while others may value directness. The UX designer should ensure that ChatGPT’s language models are trained to understand and respond appropriately to these different styles.
  2. Visual Design and Color Schemes: Colors and visual design elements can have cultural significance. For example, certain colors may have positive or negative connotations depending on the culture. The UX designer should consider cultural sensitivities when choosing color schemes and visual design elements for the interface.
  3. User Preferences and Expectations: Different cultures may have different preferences and expectations when it comes to technology and user interfaces. For example, some cultures may prefer minimalist designs, while others may prefer more visually rich interfaces. The UX designer should conduct user research to understand the preferences and expectations of the personas from different cultural backgrounds.
  4. Content and Content Localization: Content on the ChatGPT interface should be culturally sensitive and inclusive. The UX designer should consider the cultural context of the content and ensure that it is appropriate and relevant to all users, regardless of their cultural background.
  5. Accessibility and Inclusivity: Cultural inclusivity goes hand-in-hand with accessibility and inclusivity. The UX designer should ensure that ChatGPT’s interface is accessible to users from different cultural backgrounds, including those with disabilities, by implementing features such as multilingual support, culturally diverse avatars, and user-friendly interfaces.

Overall, the UX designer should take a culturally sensitive and inclusive approach when designing ChatGPT’s interface, taking into account the unique needs, preferences, and expectations of the personas from different cultural backgrounds. This will help create a more inclusive and user-friendly experience for all users.

 
Me:
Let’s assign different cultural backgrounds to our personas.
 
ChatGPT:

Sure, let’s assign different cultural backgrounds to our personas:

  1. Traditional Student with Dyslexia – Natalie:
    • Cultural Background: Natalie comes from a culturally diverse family with Hispanic roots. She speaks both English and Spanish fluently.
  2. Non-Traditional Student with PTSD – Jason:
    • Cultural Background: Jason grew up in a small town in the Southern United States and has a strong sense of Southern culture and traditions.
  3. Traditional Student with ADHD – Sarah:
    • Cultural Background: Sarah comes from a multicultural background. Her mother is Chinese, and her father is African American. She is fluent in English and conversational in Mandarin.
  4. Non-Traditional Student with Visual Impairment – Michael:
    • Cultural Background: Michael is originally from the Philippines and moved to the United States for college. He is fluent in English and Tagalog, the national language of the Philippines.

With these diverse cultural backgrounds in mind, the UX designer should consider the specific cultural sensitivities, communication styles, and preferences of each persona when designing ChatGPT’s interface. This will help create a more inclusive and culturally sensitive experience for all users.

 
Me:
Now let’s make some UX improvement suggestions for each of the personas based on cultural background.
 
ChatGPT:

Certainly! Here are some UX improvement suggestions for each of the personas based on their cultural backgrounds:

  1. Natalie (Hispanic Background):
    • UX Improvement: Offer a language option for Spanish-speaking users like Natalie. ChatGPT could have an option to switch between English and Spanish language modes, allowing users like Natalie to communicate more comfortably in their preferred language.
  2. Jason (Southern United States Background):
    • UX Improvement: Incorporate elements of Southern culture into the interface design, such as using warm, earthy tones and friendly, informal language. This could help Jason feel more at home while using the product.
  3. Sarah (Multicultural Background):
    • UX Improvement: Provide a culturally diverse set of avatars and user interface elements that reflect Sarah’s multicultural background. This could include avatars with different ethnicities and customizable design elements that allow users to express their unique cultural identities.
  4. Michael (Filipino Background):
    • UX Improvement: Integrate multilingual support for Tagalog-speaking users like Michael. ChatGPT could include an option to switch between English and Tagalog language modes, allowing users to communicate more comfortably in their preferred language.

These UX improvements take into account the unique cultural backgrounds of each persona and aim to create a more inclusive and culturally sensitive experience for all users.

 
Me:
Do you have any other suggestions for how a UX designer could use these personas to improve ChatGPT? Is there anything we may have missed in the creation of the personas that could lead to a better product and experience for the diverse range of ChatGPT users?
 
ChatGPT:

Certainly! Here are some additional suggestions for how a UX designer could use these personas to improve ChatGPT:

  1. Cultural Sensitivity Training for AI: To ensure that ChatGPT’s responses are culturally sensitive and appropriate, the UX designer could work with AI trainers to provide cultural sensitivity training to the language models. This could involve incorporating diverse cultural perspectives and values into the training data and ensuring that the language models understand and respect cultural differences.
  2. User Research and Feedback: The UX designer could conduct user research and gather feedback from users from diverse cultural backgrounds to understand their specific needs, preferences, and pain points. This could involve conducting focus groups, interviews, surveys, and usability testing with users from different cultural backgrounds.
  3. Localization and Translation: To make ChatGPT more accessible to users from different cultural backgrounds, the UX designer could work on localizing the interface and content for different languages and regions. This could involve translating the interface and content into multiple languages and adapting the design to suit the cultural norms and preferences of different regions.
  4. Cultural Contextualization: The UX designer could work on contextualizing ChatGPT’s responses to better reflect the cultural context of the user. For example, if a user asks a question about a cultural tradition or practice, ChatGPT could provide a response that takes into account the cultural context and significance of the tradition or practice.
  5. Inclusive Design and Accessibility: To ensure that ChatGPT is accessible to users from diverse cultural backgrounds, the UX designer could work on making the interface and content more inclusive and accessible. This could involve incorporating features such as text-to-speech, screen readers, and voice commands to make ChatGPT more accessible to users with disabilities.

By taking these suggestions into account, the UX designer can create a more inclusive and culturally sensitive experience for all users of ChatGPT, regardless of their cultural background.

 
Me:
Can you give me a good prompt to use for an image generator that could provide me with images of each of these personas? Include the relevant details the image generator would need.
 
ChatGPT:

Certainly! Here’s a good prompt for an image generator that could provide you with images of each of these personas:

Prompt: Generate a set of diverse avatar images representing the following personas:

  1. Natalie – A 21-year-old female college student majoring in Psychology. She is of Hispanic descent, has dyslexia, and speaks both English and Spanish fluently.
  2. Jason – A 35-year-old male college student majoring in Computer Science. He is a military veteran with PTSD and has a strong sense of Southern culture and traditions.
  3. Sarah – A 20-year-old female college student majoring in Journalism. She is of mixed Chinese and African American heritage, has ADHD, and is conversational in Mandarin.
  4. Michael – A 29-year-old male college student majoring in Business Administration. He is originally from the Philippines, has visual impairment, and is fluent in English and Tagalog.

Details Needed:

  • Gender: Male or Female
  • Age: 20-35
  • Ethnicity/Cultural Background: Hispanic (Natalie), Southern United States (Jason), Mixed Chinese and African American (Sarah), Filipino (Michael)
  • Disabilities: Dyslexia (Natalie), PTSD (Jason), ADHD (Sarah), Visual Impairment (Michael)
  • Major/Field of Study: Psychology (Natalie), Computer Science (Jason), Journalism (Sarah), Business Administration (Michael)
  • Languages Spoken: English, Spanish (Natalie), English (Jason), English, Mandarin (Sarah), English, Tagalog (Michael)

The image generator could use these details to create a diverse set of avatar images that accurately represent each of the personas.

 
Me:
Please abbreviate that significantly and ask for just one image showing them with handwritten name tags on their shirts.
 
ChatGPT:

Certainly, here’s a more concise version of the prompt:

Prompt: Generate one image of four avatars, each representing a college student with a handwritten name tag on their shirt. Each student should be visibly different, representing the following personas:

  1. Natalie: 21-year-old female with Hispanic background, dyslexia, Psychology major, speaks English and Spanish.
  2. Jason: 35-year-old male with Southern United States background, PTSD, Computer Science major, English speaker.
  3. Sarah: 20-year-old female with mixed Chinese and African American heritage, ADHD, Journalism major, speaks English and Mandarin.
  4. Michael: 29-year-old male from the Philippines, visual impairment, Business Administration major, speaks English and Tagalog.

Details Needed:

  • Gender: Male or Female
  • Age: 20-35
  • Ethnicity/Cultural Background: Hispanic (Natalie), Southern United States (Jason), Mixed Chinese and African American (Sarah), Filipino (Michael)
  • Disabilities: Dyslexia (Natalie), PTSD (Jason), ADHD (Sarah), Visual Impairment (Michael)
  • Major/Field of Study: Psychology (Natalie), Computer Science (Jason), Journalism (Sarah), Business Administration (Michael)
  • Languages Spoken: English, Spanish (Natalie), English (Jason), English, Mandarin (Sarah), English, Tagalog (Michael)

Please provide a single image showing all four avatars, each with a handwritten name tag on their shirt.

Thoughts on the Ethics of Offering Incentives for Surveys and Questionnaires

At the end of each semester, my students are provided with a link to a survey seeking their feedback regarding my performance as an instructor for their course. This is standard practice at many institutions and typically organized and/or administered at the direction of an institutional effectiveness office, not the instructors themselves. Although the students at my institution receive an email notification that their participation is needed/requested (and a reminder email or two), the emails include only a vague (meaningless?) incentive of institutional improvement connected to the request.

Because an educational institution should theoretically value higher participation in end-of-course surveys, and because my students’ level of participation can be viewed (unfairly) as an indirect reflection of my professional capabilities, I have often wondered whether I should offer my students some type of incentive to complete their evaluation surveys. I am aware that some of my colleagues do provide extra credit for end-of-course survey completion, but this strikes me as somewhat unethical: a student (i.e., a research participant) who is aware of a reward attached to their participation could have a different outlook on the subject of the research or on the research itself. In other words, rewards make people happy. Rewards change people’s moods and perceptions. Rewards, then, can introduce a subtle bias. This is why I have typically eschewed rewards to encourage participation in my end-of-course surveys. But this doesn’t necessarily mean that I have eliminated bias. Not at all.

When people (like students) come to expect rewards for completing surveys, researchers could potentially create a bias simply by not offering some kind of expected incentive for a person’s time and thoughts. If a person feels like they are being compelled to complete a survey without any promise of a reward, they may be unhappy/annoyed/frustrated about doing so, and those kinds of feelings could potentially filter into their survey responses—perhaps even unconsciously.

By not offering my own students rewards for their participation in my end-of-course surveys, I do fear that I am impacting their responses in an unintended way tied to their emotional state when completing the survey. This is made worse by colleagues who do offer incentives and thereby create a stronger expectation on the part of students, an expectation that is naturally followed by disappointment when not met.

Instead of being “damned if I do, damned if I don’t,” I am more “favored if I do, damned if I don’t” when it comes to providing incentives for survey participation. But either way, I would argue that the data is damned.

If the principles of offering (and refraining from) survey incentives which I have laid out for my own course surveys hold generally true for all research efforts that include voluntary data collection, then it seems the researcher concerned with participant bias is faced with a no-win situation: provide a reward and introduce a favorable response bias, or refrain from providing a reward and introduce a potential unfavorable response bias.

This matter is further complicated by factors such as socioeconomic status, since those who are more in need of incentives may be more susceptible to bias (in my case, students with lower grades who are more in need of extra credit would be impacted differently by rewards than those who do not need them). A 2019 study notes that “there is also a risk that incentives may introduce bias, by being more appealing to those with lower socioeconomic status.” Although that particular article is concerned with bias in the sample generated, the connection between incentives and participants’ attitudes/responses should be a matter of concern as well.

So what is an ethical researcher to do?

At Georgia State University, where I am a graduate student, the course evaluation process doesn’t exactly provide an incentive for students who complete instructor evaluations. Instead, the university withholds a student’s ability to view their final grades until they cooperate and submit their reviews.

This doesn’t seem like the most ethical way of solving the participation problem, but I do imagine it is effective at generating high response rates without costing the institution a nickel. A better approach, I think, is working harder to educate students (research participants) about the risks involved in providing incentives and punishments when trying to encourage participation in research. This at least gives students some answers as to why an incentive is not being offered and hopefully garners more credibility for the study in the eyes of the people who are being asked to participate. I have taken to this approach with my course evaluations, and I think it may be somewhat effective at preventing the bias I am seeking to eliminate. It is not, however, the most effective way to increase response rates.

 

Implications for my UX research

There is probably no perfect system for achieving optimal survey participation while also eliminating bias, but researchers can at least be thoughtful and careful about their efforts. Although I have not designed my final UX course project yet, I can imagine the need for survey responses to inform my efforts. Perhaps I will simply try to encourage participation by explaining the virtue of my educational field work and also explaining the problems associated with offering incentives. Perhaps I will just offer incentives to participants and carefully explain this choice within my report. Another option might be to try to defuse the potential bias created by an incentive by adding a question at the beginning of the questionnaire like the following:

“Could providing incentives to participate in surveys make respondents less honest (i.e. more positive) in their answers?”

Such a question would at least force the participants to consider the issue right before they answer other questions where their unbiased feedback is sought. I can see how creating that kind of basic awareness could encourage less biased, more honest responses.

 

Other readings and notes

Most sources I found related to this issue deal more with introducing bias in the sample. I didn’t find any that specifically deal with the quality of responses being impacted by incentives.

This article found that incentives provided upfront work better than incentives provided after participation. I would have expected the opposite, and I suppose that is me not believing the best in people.

“Prepaid incentives (unconditional incentives) are more effective in increasing response rates in comparison to payment after survey completion (conditional incentives) [64, 65]. This could be explained by social exchange theory as providing participants monetary incentives in advance encourages them because they feel they should reciprocate for the reward they receive by completing the survey.”

This is a decent write-up of the different types of bias in surveys and surveying.

ChatGPT and claude.ai mostly reflect what I found (or didn’t find) when looking into this issue, but when pressed a bit, ChatGPT gave me something closer to what I was looking for in terms of response bias. Of course, just because it gave me what I was after doesn’t mean I would trust it without further verification from better human sources. (These generative AI systems are no solution for confirmation bias, and I do fear they might make that problem even worse.)

UX Research Notes: Using Screen Recording to Capture ChatGPT Usage

On a Friday morning I recruited two family members to participate in a brief research activity using ChatGPT. I used screen recording through Microsoft Teams to capture the participants’ interactions with the free version of OpenAI’s ChatGPT tool (ChatGPT 3.5). My goal as the researcher was exploratory in nature, seeking some feedback on the interface and functionality of the ChatGPT desktop web application.

Instead of simply providing the participants access to the ChatGPT website without any defined purpose for interacting with it, I designed an activity to encourage them to make use of the tool. The instructions for the activity were as follows:

Instructions: Use the generative-AI tool (ChatGPT or claude.ai) to engage in a virtual debate. You can ask the AI tool to be your debate partner and for suggestions on a topic, or you may suggest a topic. The goal is to “win” the debate.

*Note: topics related to recent current events don’t work well because the generative AI tools do not have access to information after 2021 (ChatGPT) and 2022 (claude.ai). 

Both users were given this same information on a sheet of paper and were verbally informed of the purpose of the activity. They were not given time parameters; instead, they were told to take as much time as they needed, and that they could stop when they were bored or felt they could no longer make progress toward their goal.

The two participants were significantly different in terms of demographics. This seemed desirable because younger and older tech users sometimes have different approaches and attitudes toward interacting with technology. Here are the participant profiles:

Participant 1 (P1): 10-13 years old, white male, from a diverse suburb of Atlanta, public school educated, never used ChatGPT or other generative-AI platforms.

Participant 2 (P2): 35-40 years old, white female, from a diverse suburb of Atlanta, college educated, very limited past use of ChatGPT—maybe 2 or 3 times total for work and curiosity reasons.

Results Summary

P1 engaged with ChatGPT for over 20 minutes, first exploring and debating topics he proposed, then debating a topic suggested by ChatGPT, and finally proposing and debating one last topic of his own.

  • Interactions by the numbers: 14 typed prompts/replies, average length 20 words.

P2 engaged with ChatGPT for 10 minutes, focusing on only one topic which she proposed.

  • Interactions by the numbers: 9 typed prompts/replies, average length 13 words.
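The interaction counts above were tallied by hand from the Teams transcript, but the same stats could easily be scripted. A minimal sketch (with made-up example prompts, not the actual transcript data):

```javascript
// Compute prompt count and average prompt length (in words)
// from a list of a participant's typed prompts.
function interactionStats(prompts) {
  const wordCounts = prompts.map(p => p.trim().split(/\s+/).length);
  const totalWords = wordCounts.reduce((a, b) => a + b, 0);
  return {
    count: prompts.length,
    avgWords: Math.round(totalWords / prompts.length),
  };
}

// Example data only; the real prompts would be pulled from the transcript.
const examplePrompts = [
  "Let's debate whether school uniforms should be required",
  "I think uniforms limit self expression",
];
console.log(interactionStats(examplePrompts)); // { count: 2, avgWords: 7 }
```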

UX Observations

Both participants found the system to be intuitive and easy to use.

Neither participant made use of the small icons below each ChatGPT response (thumbs down, regenerate, copy). When asked why, Participant 2 said she never uses those features (especially the “like/dislike” buttons) because she doesn’t care to inform the algorithms that determine the types of content she is fed on social media sites.

Both participants commented that the speed at which ChatGPT responses displayed and auto-scrolled was too fast. From Participant 1:

“I would have the I would have chat GPT pull up its arguments slower because it feels like a lot to read it. The pace felt really, really fast. It was almost like stressful to read them. And I’m really fast reader, and I still couldn’t read it all.”

Although I did not want to interrupt the session, I was curious to know whether participant 1 was reading the full responses generated by ChatGPT, so I asked him in the middle of his session. He replied that he was just “skimming” them. He likely chose to skim because the responses were auto-scrolling quickly and were somewhat lengthy compared to the short prompts/replies he was typing.

Both participants also said they liked the disclaimer at the bottom of the screen that says “ChatGPT can make mistakes. Consider checking important information.” However, neither noticed it until the end of the session, and neither was sure whether it had been there at the beginning. Participant 2 suggested it ought to be a pop-up at the start that must be X’ed out by the user.

Participant 1 suggested custom font options, especially in terms of size for readability.

Participant 1 also suggested the ability to input prompts/replies with speech and an option to have ChatGPT’s responses read out loud to the user.

Final Thoughts

Using the Teams screen recording option was easy, and the transcript proved useful despite its inaccuracies. I would use Teams in the future for similar data collection and the recording of post-use interviews.

In the future, I would encourage participants to verbalize more of their thoughts during the use of the product. I was able to get some good feedback after my participants completed their tasks, but more in-the-moment feedback would give me more material to work with in a post-use interview.

Week 3 Field Notes

The Five Second Test (for sample To-Do List)

  1. What was the app for? Looks like it could be a search feature of some sort.
  2. Did you want to use it? Yes, even though I did not know exactly what it was, the simplicity of the design was inviting—probably because it looked to me like I wouldn’t screw anything up by messing around with it.
  3. If so, what would you do first? I would probably type “test” in the text field and click the button to see what happens.

 

  1. Note anything relevant that occurred to you during our discussion.

Students seemed to have some similar thoughts and reactions to the tool. As we discussed the artifact, I was reminded of the many focus group studies I participated in in the 2010s as I sought to supplement my income and pay off student loans. I learned so much about market research and facilitating focus groups from these.

  2. Note your thoughts about 5 second testing as a usability tool.

The 5 second test is useful because we know that many prospective users/buyers are going to make decisions based on very initial impressions of a product. The book gets judged by its cover whether we like it or not. Therefore, seeking the first thoughts one has provides the designer the chance to better evaluate the product and make necessary changes. With enough data from a strong enough sample of 5 second users, the designer should be able to figure out which elements are most appealing to the largest number of potential users.

Heuristic analysis

  1. visibility of system status — keep users informed about behind-the-screen processing: loading, successfully uploaded, “searching, please wait”

The importance of status visibility depends on the artifact, but the spinning wheel of death, as it is sometimes called, is one way software/web tools let us know something is going on and we (as users) do not need to do anything else—yet. I do think the visuals that show percentages of status progress are far more helpful than those that do not.

  2. use familiar, real-world language — no jargon, no site-specific lingo

Again, this will depend on the artifact and audience, but in the case of a basic to do list meant for the general public, this is solid advice. They can’t use it easily if they don’t understand it.

  3. users should be in control — nothing relevant to the experience should be happening behind the screen

This hints at transparency and data security too. We can have users forfeit control (of their data and usage behavior) by including a long terms and conditions page (that they won’t read) and have them mindlessly click “agree” so the artifact can take control of whatever benefits the designer. I do not think this kind of unethical strategy should be employed, but it does feel more like the norm today.

  4. follow industry standards — Ctrl+S means save regardless of platform (or Cmd+S, because Apple)

It is also important, then, to keep up with changing standards. People (young ones, especially) are always tweaking the way technology is used, and a designer who only sticks to the standards of their time will miss opportunities to improve their product’s usability.

  5. don’t let users make mistakes — multiple levels of undo, pop-up in-place warnings about required form fields, greyed-out representation of features unavailable in the current context

This means that the designer needs as much usage data as possible to see what mistakes can be made. This is hard to predict. Just like how people can find creative ways to use a product that were completely unexpected, they can also find an infinite number of ways to misuse a product so that it doesn’t work at all.
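On the “multiple levels of undo” point: the classic implementation is just two stacks of saved states, which is part of why designers can offer it once they plan for it. A generic sketch, not tied to any particular product:

```javascript
// Generic multi-level undo/redo using two stacks of document states.
class History {
  constructor(initial) {
    this.past = [];     // states we can undo back to
    this.future = [];   // states we can redo forward to
    this.current = initial;
  }
  apply(next) {
    // A new edit is recorded and invalidates any pending redos.
    this.past.push(this.current);
    this.current = next;
    this.future = [];
  }
  undo() {
    if (this.past.length === 0) return this.current; // nothing to undo
    this.future.push(this.current);
    this.current = this.past.pop();
    return this.current;
  }
  redo() {
    if (this.future.length === 0) return this.current; // nothing to redo
    this.past.push(this.current);
    this.current = this.future.pop();
    return this.current;
  }
}
```

Because nothing is discarded until a new edit arrives, the user can back out of several mistakes in a row, which is exactly what this heuristic asks for.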

  6. recognition over recall — don’t make users remember or have to think

The whole point of technology is to make tasks easier. The harder the user has to work or think, the less need they are going to have for that particular tech.

  7. flexible designs — experts should have shortcuts and other tools that aren’t visible to novices who will be distracted or confused by them

Interesting concept. Not sure I agree completely, but it does make sense to at least let users know in some logical way that advanced features are available to more expert users. We see this many times indicated with an “advanced” button that reveals those features. That seems like a good application of this design principle.

  8. minimalist design — don’t clutter the screen, don’t add images as decoration

Yes! Yes! Yes! I can’t imagine anyone loves their screens cluttered with ads and things they don’t need. This should be #1 on the list. Only add what is needed. As for not adding images for decoration, I might argue that some images that appear to be decoration are really serving some other important purpose as well, like appealing to emotion. A picture of a tropical island might have the effect of calming the user, and paired with music (that some would initially think is superfluous) could actually have a positive effect on the user’s experience.

  9. no error should be fatal — offer clear signposts and ways to start over efficiently, auto-populate form data when possible

I have not put much thought into this yet, but I can see how one fatal error could lose a user for good. As for auto-populating, that is helpful most of the time. I always appreciate when “United States” is listed at the top of a dropdown that is otherwise alphabetical. I hope that for users in other countries, the designers make their country rise to the top based on IP address or something—especially for our friends in Zambia.
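The country-dropdown idea above takes very little code once the user’s country has been detected (by IP geolocation or otherwise; detection is left abstract here). A hypothetical sketch:

```javascript
// Hypothetical sketch: move the user's detected country to the top of an
// otherwise alphabetical dropdown list. Detection itself is out of scope.
function promoteCountry(countries, detected) {
  if (!countries.includes(detected)) return countries; // unknown: leave list alone
  return [detected, ...countries.filter(c => c !== detected)];
}

const countries = ["Albania", "Brazil", "United States", "Zambia"];
promoteCountry(countries, "Zambia");
// ["Zambia", "Albania", "Brazil", "United States"]
```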

  10. provide help — but design so no one needs it (and assume no one will read it)

But what help will the user need? How many avenues of help are enough? Chatbot features can sometimes be helpful, but they have a history of being frustrating too. Not to mention they can clutter a screen. I still appreciate a live chat with a human assistant, but those are not always practical. FAQ pages are hit or miss; the most thorough can be too long to work through, and the more basic will miss too many common issues. User input is essential to determine where the help should be provided.

UX Project: Fully Functional Demo Website Coded by ChatGPT

Demo website created with ChatGPT; click image to view working demo.

I have never written a line of computer code in my life. I don’t know the first thing about the differences between HTML and Java, or any other software languages. But harnessing the power of ChatGPT today, I created the website pictured above in less than an hour. Mind you, this was not some drag and drop, customizable website template process like many website builders offer (Wix and WordPress, for example). I have used those without breaking a sweat before (this WordPress site you are reading is an example), but those are not coding efforts. What the free version of ChatGPT was able to create for me was functional HTML code generated completely from scratch (or rather, translated and generated from my written English language prompts). I then pasted the code into the free web “space” offered by W3Schools and voila! Operational website.

The UX class assignment I was working on gave me the option to create a mockup of a to-do list application/website (think basic, non-operational image) or go a step further and play around with GenAI to see if I could create a working model. Despite my fascination with ChatGPT, and my frequent use and exploration of it over the past year as an educational tool, I was hesitant to use it for coding purposes. I was intimidated. A quick demo at the end of our online class meeting changed my posture though (thanks Franco and Dr. Pullman!).

Here is a summary of how I created my online to-do list website.

  1. Set up account at W3Schools (<5 minutes).
  2. Went to ChatGPT and prompted it as follows: “Please create a web page in html that includes a to-do list feature that allows the user to add items to one of three catagories, each of which has a column on the page. Name the three categories home, teaching, and scholarship. Provide me the html code.” (<5 minutes)  *I still have no clue what HTML code is or how it works. I only asked it to use HTML because I thought I heard someone say it is the easiest or most basic code to work with.
  3. Copied the code generated by ChatGPT, went back to W3Schools, created a “New Root File,” and pasted the code into the text box. (2 minutes) *This was the only tricky step for me, and it was the step I was scared of before seeing my classmate do it. The user interface of W3Schools might make sense to people who know a little bit about coding and web design, but to me, a complete coding novice, it was not intuitive.
  4. Clicked “Run.”
  5. Previewed the newly born website.
  6. Went back to ChatGPT, asked it to make specific changes and rewrite the entire code. Then went to W3Schools, deleted the root file, and replaced it with the new code. (40 minutes) *I did this probably six or seven times, requesting one or two changes each time, and then testing it out.

Here is a link to the entire conversation I had with ChatGPT if you want to see all of the prompts I gave it. Note: I do communicate with the machine as if it is a human assistant/tutor/student, so I typically use pleases, thank yous, etc. I do this mostly because I don’t want to develop less polite communication habits, not because I think of the machine as a person–but I don’t claim to be completely immune to the ELIZA effect either.
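The linked conversation contains the actual HTML that ChatGPT generated, which I won’t reproduce here. But the core behavior I asked for, adding an item to one of three named columns, amounts to very little logic. Here is a hypothetical sketch of just that piece (my own reconstruction, not ChatGPT’s code):

```javascript
// Hypothetical reconstruction of the to-do logic: three fixed categories,
// each holding a list of items. In the real page, addItem would also
// append a list element to the matching column; here it only updates data.
const lists = { home: [], teaching: [], scholarship: [] };

function addItem(category, text) {
  if (!(category in lists)) throw new Error(`Unknown category: ${category}`);
  const trimmed = text.trim();
  if (trimmed !== "") lists[category].push(trimmed); // ignore empty entries
  return lists[category];
}

addItem("teaching", "Grade essays");
addItem("home", "Fix the fence"); // lists.home is now ["Fix the fence"]
```

Seeing how small this core logic is helped me understand why ChatGPT could iterate on the page so quickly across my six or seven rounds of change requests.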

Key Takeaway

When I found out last year that ChatGPT could write functional code when prompted by English text, I had a hunch that English (or any common spoken/written human language) had just become the new computer language. But I then read somewhere (wish I could remember where I saw it) that English could never really replace the learned skill of computer coding because the English language is too ambiguous, vague, and complex to work on the logic level of a computer. That sounded plausible, so I shelved my hunch and left it for another day. Well, today is that day. I am now convinced that the “skill” of learning, understanding, and writing computer code can be almost completely replaced by the combination of GenAI and effective English prompt engineering. I do understand that there are still critical reasons why some people need to understand and practice coding, but the number of those people will drop precipitously now as non-techies like me partner up with GenAI to do what most of us never dreamed we could do in a digital space–at least not without hours upon hours of instruction in a host of computer languages. The new computer language is indeed English: specifically, concise English delivered repeatedly and patiently to a non-human audience with superhuman coding abilities. Whoever said English wouldn’t replace coding languages because it is too ambiguous, vague, etc. failed to realize that users of English can be insightful, creative, and persistent enough to overcome those inherent problems with the language. They also failed to realize that generative AI is (often, but not always) incredibly competent at making sense of human language inputs.

UX Takeaways

ChatGPT and other GenAI tools make the designer’s job way easier, but only if that designer can communicate properly with the machine, has some patience, and recognizes that designing with AI as co-pilot is an iterative process.

A designer still needs a solid plan and then decent feedback to make appropriate improvements. AI can provide both if prompted correctly, but human feedback should always be sought to help the designer catch their own blind spots.

Examining Some UX Case Studies and Creating a Case Study for ChatGPT, with ChatGPT

After looking at a handful of UX case studies, it appears that some combination of the following components comprises an effective study:

Design Statement

Needs Assessment/Interviews with prospective users

Functionality requirements and constraints

Benchmarks (competitors)/Competitive Analysis

Personas

Storyboards/Wireframing

Task flows

Prototype drawings

Example of user experience/Test uses and findings

But some studies do a better job than others at grouping these components under broader headings that make the project easier to follow sequentially and thematically. The Airbnb study described the process in these five steps: Explore, Define, Ideate, Develop & Learn. This seems to capture the generalities of many of the elements that case studies have in common. The “Problem, Solution, Revenue” model seems like another good way to organize a case study–although I find it interesting that the revenue portion comes last. I would assume most “successful” developers allow the revenue stream to dictate the solution–for better or worse.

One case study that caught my attention is the Wikipedia study. I found this study especially interesting because, along with millions of other people, I am very familiar with the iconic interface and even appreciate the simplicity of its design. I couldn’t imagine it any other way–but many people more insightful and imaginative than I am have been making changes to the design of Wikipedia sites for years (there are actually hundreds of different versions of Wikipedia). One of the key takeaways from this study is the idea that fully functional prototypes “(versus mockups, or even a click-through prototypes) communicate designs significantly more clearly, and facilitate higher quality discussions and feedback.” Of course this comes at a steeper investment of time and resources, but the lesson should push UX researchers to strive beyond basic sketches and mockups. This Wikipedia case study is less complete than other examples, but it is worth a look because the components that are discussed are in-depth and relatable for anyone who has used Wikipedia extensively.

UX Case Study Designed by ChatGPT

I asked ChatGPT to design a UX case study (with itself as the subject). Here is the specific prompt I gave:

Me: Using a “Problem, Solution, Revenue” model, please create a UX case study for the ChatGPT website. (1/12/2024)

ChatGPT:

Title: Enhancing User Experience on ChatGPT Website

Problem: The ChatGPT website currently faces several user experience challenges, hindering optimal user engagement and satisfaction. Users often find it difficult to navigate through the website, locate relevant information, and understand the full range of capabilities offered by ChatGPT. Additionally, there is a lack of interactive elements that could enhance user interaction and make the website more user-friendly.

Solution: To address these challenges, we propose a comprehensive redesign of the ChatGPT website, focusing on improving navigation, providing clear information, and introducing interactive elements. The redesign will follow a user-centric approach, ensuring that visitors can easily access the information they need and understand the capabilities of ChatGPT.

  • Redesigned Homepage:
    • Clear and concise messaging to explain ChatGPT’s capabilities.
    • Streamlined navigation to guide users to different sections.
  • Interactive Demonstration:
    • A live demo section allowing users to interact with ChatGPT in real-time.
    • Clear instructions and examples to showcase the model’s capabilities.
  • Information Hierarchy:
    • Organize information in a clear hierarchy, making it easy for users to find what they’re looking for.
    • Use visual cues and concise text to communicate key points.
  • User Onboarding:
    • Implement a user-friendly onboarding process to guide new users through ChatGPT’s features.
    • Provide tooltips and hints to help users navigate and understand the platform.
  • Responsive Design:
    • Ensure the website is fully responsive across various devices, providing a seamless experience on desktop, tablets, and mobile devices.

Revenue: The improved user experience is expected to lead to increased user engagement, longer session durations, and higher user satisfaction. This, in turn, can contribute to higher conversion rates for premium features and services offered by ChatGPT. By attracting and retaining more users through an enhanced website experience, we anticipate a positive impact on overall revenue generation.

Outcome: The redesigned ChatGPT website, with a focus on improved user experience, is expected to address the current challenges and create a more inviting and interactive platform. Users will have a clearer understanding of ChatGPT’s capabilities, leading to increased user satisfaction and potential revenue growth. Continuous monitoring and user feedback will be crucial to refining the website further and ensuring an ongoing positive user experience.

###

To start, I think this outline is far too vague to be very helpful. Maybe a smart UX designer could work with the initial problem analysis in the outline and make a bunch of personal judgements about specifics, but the examples I looked at were far more specific in identifying needs and problems. I shouldn’t be too surprised that the machine didn’t produce something more specifically critical about itself, but I did have higher hopes. Of course, there is always the chance that my own prompt wasn’t specific enough to start with.

What is especially interesting about the outline created by ChatGPT is how it suggests a revenue strategy that relies on increased user engagement. On its face, this seems at least a little specific, but I am also willing to bet the machine is just pulling a probability from its training data about how tech companies generally drive their income, rather than weighing the most ethical choice for the end users. The maximum-engagement model runs completely counter to what Sam Altman (CEO of OpenAI, the company behind ChatGPT) said in congressional testimony last year. Altman, responding to concerns about tech addiction, claimed that “we try to design systems that do not maximize engagement”; his reasoning was that OpenAI lacked the computing resources to support more engagement anyway. But like many (most?) other tech companies, OpenAI now appears to be embracing the increased-engagement model to drive revenue; it was even mentioned specifically in connection with their recently released ChatGPT store. So the fact that ChatGPT included this typical revenue model in my request for a UX case study is not altogether surprising, but it does run directly against the previously stated claims of the company’s CEO.

This idea is important for UX studies because designers have competing factors that drive their designs. Two major factors that often find themselves in conflict are maximizing revenue and maximizing the benefit the product brings to the user. Very often these two factors overlap, and that is the best-case scenario for everyone: when a product is designed or tweaked in a way that makes it both cheaper to produce and safer or more enjoyable for the consumer, that is ideal. But when a company prioritizes profit over the user’s best interests, we can end up with tech products whose problems in a user’s life outweigh their benefits. Screen addiction in kids and negative mental health outcomes for users are two potential downsides that ChatGPT’s initial response simply ignores. This seems incredibly important for UX designers to keep in mind as they go about their research, planning, and execution.

Here is my second prompt and ChatGPT’s response. I was after a better UX case study: one more specific about the problems, with a more user-focused revenue strategy:

Me: Can you re-do that and be far more specific about the problems, and then instead of simply suggesting a revenue stream based on maximizing engagement, figure out a strategy for a revenue stream that protects the users from the harms and negative outcomes that could come from overuse and overreliance on tech products like ChatGPT.

ChatGPT: 

Title: Mitigating User Risks: A User-Centric Redesign for ChatGPT Website

Problem:

  1. Information Overload:
    • Users struggle to grasp the full scope and capabilities of ChatGPT due to a lack of clear and concise information on the website.
    • Ambiguity around the model’s limitations and potential risks leads to confusion.
  2. Navigational Challenges:
    • The current website structure lacks a logical flow, making it difficult for users to navigate and find relevant information.
    • Users often abandon the site due to frustration with the disorganized layout.
  3. Limited User Guidance:
    • New users lack sufficient guidance on how to interact with ChatGPT, resulting in a suboptimal user experience.
    • Ambiguous onboarding contributes to users feeling overwhelmed and disoriented.

Solution:

  1. Detailed Information Architecture:
    • Create a dedicated section that clearly outlines ChatGPT’s capabilities, limitations, and ethical considerations.
    • Provide examples of appropriate use cases and communicate the importance of responsible AI use.
  2. Streamlined Navigation:
    • Implement a user-friendly navigation system with intuitive labels and a clear hierarchy to improve the overall website structure.
    • Prioritize key sections such as documentation, use cases, and ethical guidelines.
  3. Interactive Onboarding:
    • Develop an interactive onboarding process with step-by-step guidance on using ChatGPT responsibly.
    • Include educational elements to inform users about potential risks and the importance of ethical usage.
  4. User Education Hub:
    • Establish an educational hub containing articles, FAQs, and video content to help users understand the technology better.
    • Address common misconceptions and concerns to foster transparency.

Revenue Strategy: Instead of maximizing engagement without bounds, we propose a revenue strategy that prioritizes user well-being and safeguards against overuse:

  1. Usage Tiers:
    • Introduce tiered pricing plans with usage limits to encourage responsible usage.
    • Offer educational resources to users who approach or exceed their usage limits, promoting awareness of potential negative consequences.
  2. Ethical AI Certification:
    • Create a certification program for businesses and developers, emphasizing responsible and ethical AI usage.
    • Generate revenue through certification fees, with a portion allocated to ongoing user education initiatives.
  3. Premium Support for Ethical Use:
    • Offer premium support plans specifically tailored to guide users in ensuring ethical and responsible AI practices.
    • Revenue generated from premium support can be reinvested into user education and awareness programs.

Outcome: The redesigned ChatGPT website, focusing on user education and responsible usage, aims to mitigate potential risks associated with overuse and overreliance on AI technology. By implementing a revenue strategy that aligns with ethical principles, ChatGPT can foster a community of users who prioritize responsible AI practices, creating a positive impact on both user well-being and the product’s long-term sustainability. Continuous user feedback and iterative improvements will be crucial in refining the approach over time.

###

This is a marked improvement, which makes me think that when someone finds ChatGPT producing poor boilerplate writing, it could very well be because they accepted the first thing the machine spit out. Better prompts create better responses, and a two-step process (an initial prompt followed by a call for specific improvements to the first response) yielded much better results for this meta-UX case study outline.
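For readers who script their interactions with a model rather than using the chat interface, the two-step refinement described above can be captured in a small loop. This is only a sketch: the `generate` function here is a hypothetical stand-in for whatever model call you actually use (e.g., a call to the OpenAI API), not a real library function.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chat model.

    A real script would send the prompt to an API; this stub just echoes
    the prompt so the two-step structure can be seen end to end.
    """
    return f"[model response to: {prompt!r}]"


def two_step_case_study(initial_prompt: str, critique: str) -> str:
    """Ask for a first draft, then feed it back with specific demands for improvement."""
    first_draft = generate(initial_prompt)
    refinement_prompt = (
        f"Here is your earlier response:\n{first_draft}\n\n"
        f"Revise it with these specific improvements: {critique}"
    )
    return generate(refinement_prompt)
```

The point of the second call is that the model sees its own first draft alongside a concrete critique, which is exactly the move that turned the vague first outline into the more specific second one.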

The revised revenue section is intriguing, maybe even impressive, because a tiered revenue/access model could be an actionable, measurable strategy for the company to pursue. It could, at least in theory, keep the users’ better interests in mind while still providing a revenue stream that OpenAI could adjust by setting its own prices. Maybe we shouldn’t be surprised that when we work a little harder on our own instructions and communication with the machine, giving it opportunities to improve as we would any student, we end up with better and better iterations of the original response. Maybe this is what fascinates me most about generative AI: the ability to treat it as both tutor and student.
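ChatGPT’s tiered-usage proposal could be made concrete quite easily. Here is a minimal sketch of how usage tiers with limits and an educational nudge might be enforced; the tier names, limits, and prices are my own invented examples for illustration, not anything OpenAI has published.

```python
# Hypothetical tiers: (monthly message limit, price in USD). Invented for illustration.
TIERS = {
    "free": (200, 0),
    "standard": (1000, 10),
    "pro": (5000, 30),
}


def check_usage(tier: str, messages_used: int) -> str:
    """Return a status message for the user.

    Heavy users get pointed toward educational resources, echoing the case
    study's idea of promoting responsible use rather than maximum engagement.
    """
    limit, _price = TIERS[tier]
    remaining = limit - messages_used
    if remaining <= 0:
        return "Limit reached. See our guide on healthy, effective use before upgrading."
    if messages_used >= 0.8 * limit:
        return f"{remaining} messages left this month. Consider whether an upgrade fits your needs."
    return f"{remaining} messages remaining."
```

The design choice worth noticing is that hitting the limit triggers education first and upselling second, which is the inversion of the engagement-maximizing pattern the first outline defaulted to.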