Thoughts on the Ethics of Offering Incentives for Surveys and Questionnaires

At the end of each semester, my students are provided with a link to a survey seeking their feedback regarding my performance as an instructor for their course. This is standard practice at many institutions and typically organized and/or administered at the direction of an institutional effectiveness office, not the instructors themselves. Although the students at my institution receive an email notification that their participation is needed/requested (and a reminder email or two), the emails include only a vague (meaningless?) incentive of institutional improvement connected to the request.

Because an educational institution should theoretically value higher levels of participation in end of course surveys, and because the level of my own students’ participation can be viewed (unfairly) as an indirect reflection of my professional capabilities, I have often wondered whether I should provide some type of incentive for my own students to complete their evaluation surveys. I am aware that some of my colleagues do provide extra credit for end of course survey completion, but this strikes me as somewhat unethical because a student (i.e. research participant) who is aware of a reward attached to their participation could then have a different outlook on the subject of the research or the research itself. In other words, rewards make people happy. Rewards change people’s moods and perceptions. Rewards, then, can introduce a type of subtle bias. This is why I have typically eschewed rewards to encourage participation in my end of course surveys. But this doesn’t necessarily mean that I have eliminated bias. Not at all.

When people (like students) come to expect rewards for completing surveys, researchers could potentially create a bias simply by not offering some kind of expected incentive for a person’s time and thoughts. If a person feels like they are being compelled to complete a survey without any promise of a reward, they may be unhappy/annoyed/frustrated about doing so, and those kinds of feelings could potentially filter into their survey responses—perhaps even unconsciously.

By not offering my own students rewards for their participation in my end of course surveys, I do fear that I am unintentionally shaping their responses through their emotional state at the moment they complete the survey. This is made worse by my colleagues who do offer incentives and thereby create a stronger expectation on the part of students, an expectation that is naturally followed by disappointment when not met.

Instead of being “damned if I do, damned if I don’t,” I am more “favored if I do, damned if I don’t” when it comes to providing incentives for survey participation. But either way, I would argue that the data is damned.

If the principles of offering (and refraining from) survey incentives which I have laid out for my own course surveys hold generally true for all research efforts that include voluntary data collection, then it seems the researcher concerned with participant bias is faced with a no-win situation: provide a reward and introduce a favorable response bias -or- refrain from providing a reward and introduce a potential unfavorable response bias.

This matter is further complicated by factors such as socioeconomic status since those who are more in need of incentives may be more susceptible to bias (in my case, students with lower grades and more in need of extra credit would be impacted differently by rewards than those who do not need them). A 2019 study supports the idea that “There is also a risk that incentives may introduce bias, by being more appealing to those with lower socioeconomic status.” Although that particular article is concerned with bias in terms of the sample generated, the connection between incentives and participants’ attitudes/responses should be a matter of concern as well.

So what is an ethical researcher to do?

At Georgia State University, where I am a graduate student, the course evaluation process doesn’t exactly provide an incentive for students who complete instructor evaluations. Instead, the university withholds a student’s ability to view final grades until the student cooperates and submits the reviews.

This doesn’t seem like the most ethical way of solving the participation problem, but I do imagine it is effective at generating high response rates without costing the institution a nickel. A better approach, I think, is working harder to educate students (research participants) about the risks involved in providing incentives and punishments when trying to encourage participation in research. This at least gives students some answers as to why an incentive is not being offered and hopefully garners more credibility for the study in the eyes of the people who are being asked to participate. I have taken to this approach with my course evaluations, and I think it may be somewhat effective at preventing the bias I am seeking to eliminate. It is not, however, the most effective way to increase response rates.

 

Implications for my UX research

There is probably no perfect system for achieving optimal survey participation while also eliminating bias, but researchers can at least be thoughtful and careful about their efforts. Although I have not designed my final UX course project yet, I can imagine the need for survey responses to inform my efforts. Perhaps I will simply try to encourage participation by explaining the virtue of my educational field work and also explaining the problems associated with offering incentives. Perhaps I will just offer incentives to participants and carefully explain this choice within my report. Another option might be to try to defuse the potential bias created by an incentive by adding a question at the beginning of the questionnaire like the following:

“Could providing incentives to participate in surveys make respondents less honest (i.e. more positive) in their answers?”

Such a question would at least force the participants to consider the issue right before they answer other questions where their unbiased feedback is sought. I can see how creating that kind of basic awareness could encourage less biased, more honest responses.

 

Other readings and notes

Most sources I found related to this issue deal more with introducing bias in the sample. I didn’t find any that specifically deal with the quality of responses being impacted by incentives.

This article found that incentives provided upfront work better than incentives provided after participation. I would have expected the opposite, and I suppose that is me not believing the best in people.

“Prepaid incentives (unconditional incentives) are more effective in increasing response rates in comparison to payment after survey completion (conditional incentives) [64, 65]. This could be explained by social exchange theory as providing participants monetary incentives in advance encourages them because they feel they should reciprocate for the reward they receive by completing the survey.”

This is a decent write-up of different types of bias in surveys and surveying.

ChatGPT and claude.ai reflect mostly what I found (or didn’t find) when looking into this issue, but when pressed a bit, ChatGPT gave me something closer to what I was looking for in terms of response bias. Of course, just because it gave me what I was after doesn’t mean I would trust it without further verification from some better human sources. (These generative AI systems are no solution for confirmation bias, and I do fear they might make that problem even worse.)

UX Research Notes: Using Screen Recording to Capture ChatGPT Usage

On a Friday morning I recruited two family members to participate in a brief research activity using ChatGPT. I used screen recording through Microsoft Teams to capture the participants’ interactions with the free version of OpenAI’s ChatGPT tool (ChatGPT 3.5). My goal as the researcher was exploratory in nature, seeking some feedback on the interface and functionality of the ChatGPT desktop web application.

Instead of simply providing the participants access to the ChatGPT website without any defined purpose for interacting with it, I designed an activity to encourage them to make use of the tool. The instructions for the activity were as follows:

Instructions: Use the generative-AI tool (ChatGPT or claude.ai) to engage in a virtual debate. You can ask the AI tool to be your debate partner and for suggestions on a topic, or you may suggest a topic. The goal is to “win” the debate.

*Note: topics related to recent current events don’t work well because the generative AI tools do not have access to information after 2021 (ChatGPT) and 2022 (claude.ai). 

Both users were given this same information on a sheet of paper and verbally informed of the purpose for the activity. They were not given time parameters; instead, they were told to take as much time as they needed and to stop whenever they were bored or felt they could no longer make progress toward their goal.

The two participants were significantly different in terms of demographics. This seemed desirable because younger and older tech users sometimes have different approaches and attitudes toward interacting with technology. Here are the participant profiles:

Participant 1 (P1): 10-13 years old, white male, from a diverse suburb of Atlanta, public school educated, never used ChatGPT or other generative-AI platforms.

Participant 2 (P2): 35-40 years old, white female, from a diverse suburb of Atlanta, college educated, very limited past use of ChatGPT—maybe 2 or 3 times total for work and curiosity reasons.

Results Summary

P1 engaged with ChatGPT for over 20 minutes, first exploring topics he proposed himself, then asking ChatGPT to suggest a topic, which he debated, and finally proposing and debating one last topic of his own.

  • Interactions by the numbers: 14 typed prompts/replies, average length 20 words.

P2 engaged with ChatGPT for 10 minutes, focusing on only one topic which she proposed.

  • Interactions by the numbers: 9 typed prompts/replies, average length 13 words.

UX Observations

Both participants found the system to be intuitive and easy to use.

Neither participant made use of the small icons below each ChatGPT response (thumbs down, regenerate, copy). When asked why, participant 2 said she never uses those features (especially the “like/dislike” buttons) because she doesn’t care to inform the algorithms used for determining the types of content she is fed on social media sites.

Both participants commented that the speed at which ChatGPT responses displayed and auto-scrolled was too fast. From Participant 1:

“I would have the I would have chat GPT pull up its arguments slower because it feels like a lot to read it. The pace felt really, really fast. It was almost like stressful to read them. And I’m really fast reader, and I still couldn’t read it all.”

Although I did not want to interrupt the session, I was curious to know if Participant 1 was reading the full responses generated by ChatGPT, so I asked him in the middle of his session. He replied that he was just “skimming” them. He likely chose to skim because the responses were auto-scrolling quickly and were somewhat lengthy compared to the short prompts/replies he was typing.

Both participants also said they liked the disclaimer at the bottom of the screen that says “ChatGPT can make mistakes. Consider checking important information.” Neither participant noticed it until the end of their session, however. Neither was sure whether it had been there at the beginning, and Participant 2 suggested it ought to be a pop-up at the start that must be X’ed out by the user.

Participant 1 suggested custom font options, especially in terms of size for readability.

Participant 1 also suggested the ability to input prompts/replies with speech and an option to have ChatGPT’s responses read out loud to the user.

Final Thoughts

Using the Teams screen recording option was easy, and the transcript proved useful despite its inaccuracies. I would use Teams in the future for similar data collection and the recording of post-use interviews.

In the future, I would encourage participants to verbalize more of their thoughts during the use of the product. I was able to get some good feedback after my participants completed their tasks, but more in-the-moment feedback would give me more material to work with in a post-use interview.

Using MS Teams Screen Recording for UX Research

When conducting UX research on digital artifacts, like websites or mobile apps, UX designers can benefit greatly from seeing how a user interacts with the product in real time. Researchers/designers rely on data to make decisions about how to improve a product, and a complete look at a user’s interactions can provide insightful data points. In the past, designers who wanted this kind of data may have been limited to looking over a user’s shoulder as they interacted with a product, or they may have relied on post-use reports in text formats which, although helpful, would have been lacking any kind of “soft” data that can be gleaned from observing a user in action.

Today, there is no shortage of tools for UX researchers to choose from when it comes to screen recording. The choices are overwhelming and even finding reliable comparisons is tricky as so many of the reviews are sponsored by the companies who are selling them—sometimes in misleading ways that are hard to detect. This one, for example, has nothing negative at all to say about Camtasia. This comparison here seems a bit more reliable and points to Camtasia’s high upfront costs as a drawback.

Since evaluating and testing new screen recording tools is such a time-consuming process, it might make sense for a UX researcher to simply rely on the platforms they already know and use for other personal and professional reasons. Ubiquitous tools like Zoom and Apple’s FaceTime allow for screen sharing, and Zoom has recording options as well. (FaceTime recording is possible by adding an app to the device.)

One productivity platform used by over 300 million people around the world is Microsoft’s Teams. It too has screen sharing and recording capabilities. Since this is a platform I have been required to use in the past and have come to know a bit better than other options, I decided to review it through the lens of a UX researcher/designer. My opinion is that Teams is an effective and reasonably easy-to-operate option for screen sharing and recording, especially for people who have access through institutions and organizations that are already paying the licensing fees to use the product.

Below are some more detailed thoughts and a demo of using Teams for UX research purposes.

 

There are at least two ways to start a new session: “Meet Now” and “Add New Meeting”, but if the purpose is to get more than one person together with some advance notice, the “Add New Meeting” option provides the means to invite others and directly syncs with their Outlook calendars (assuming they have and use the related MS Outlook for email). The “Meet Now” option works quickly and allows the meeting creator to invite users to a meeting in progress. Either way, it is relatively easy to invite others to a meeting session. If the guests are also part of the organization, they can be invited with a search feature. If the invited guests are not part of the organization, they can still join, but the meeting creator must send them a link.

For a UX researcher, an important goal is surely to make the process of joining a session as easy as possible. As Teams continues to gain market share as a productivity platform for organizations, I think it could overtake Zoom and others as the platform of choice for people who want to record screen sharing sessions. Using the most well-known and most-used platform is a good move for a UX researcher, even if doing so sacrifices some functions available from other tools. Research participants should not have to download new apps and learn new interfaces. Placing those barriers in the way will only make it harder for a UX researcher to get willing subjects.

Because Teams is designed as a meeting-facilitation tool, a session allows multiple people to join and view the meeting. I have been in meetings with around 50 people before. I don’t know how many participants one meeting can handle, but I would imagine other platforms could be more limited than Teams in this regard.

It is important to note that Teams meetings, like Zoom calls, can be run from a browser or an installed app. I have found this feature especially helpful when one or the other gives me issues. It’s nice to have the option to switch, usually from the app to the browser in my case. Of course, for invited guests who do not have the Teams app, the browser option should still allow them to join and participate.

I did notice as I created a demo meeting using “Add New Meeting” for this review that there was an option to “record automatically.” That seems helpful, because I have had sessions before that I meant to record and forgot. A pop-up once the meeting is started that asks if the user would like to record would be even better, but I do not think I have seen this happen before. Of course, the downside to the pre-meeting record setting is that one might get caught unaware of being recorded after the meeting has already started. 

Teams meetings offer users dozens of background filters that range from a simple blurred background to professional looking office backgrounds to distracting animations. These are helpful for privacy purposes and add an element of fun as well, but the animated backgrounds could pose problems for researchers and participants who need to focus on a specific artifact.

For a look at some of the features Teams screen recording offers during a session, I created the following demo:

UX Demo 3 with Jason Palmer-20240202_152729-Meeting Recording.mp4

I get the feeling that most screen recording tools are about the same. It might make sense for UX researchers to just pick one they are already somewhat familiar with (like Teams, or Zoom) and learn how to maximize its utility–as opposed to spending their time shopping around and trying to learn new interfaces and functions.

For me, Teams seems like a competent and effective tool with enough versatility for most UX research purposes.

Week 3 Field Notes

The Five Second Test (for sample To-Do List)

  1. What was the app for? Looks like it could be a search feature of some sort.
  2. Did you want to use it? Yes, even though I did not know exactly what it was, the simplicity of the design was inviting—probably because it looked to me like I wouldn’t screw anything up by messing around with it.
  3. If so, what would you do first? I would probably type “test” in the text field and click the button to see what happens.

 

  1. Note anything relevant that occurred to you during our discussion.

Students seemed to have some similar thoughts and reactions to the tool. As we discussed the artifact, I was reminded of the many focus group studies I participated in during the 2010s as I sought to supplement my income and pay off student loans. I learned so much about market research and facilitating focus groups from those sessions.

  2. Note your thoughts about 5-second testing as a usability tool.

The 5-second test is useful because we know that many prospective users/buyers are going to make decisions based on their very first impressions of a product. The book gets judged by its cover whether we like it or not. Therefore, seeking a person’s first thoughts gives the designer the chance to better evaluate the product and make necessary changes. With enough data from a strong enough sample of 5-second users, the designer should be able to figure out which elements are most appealing to the largest number of potential users.

Heuristic analysis

  1. visibility of system status — keep users informed about behind the screen processing, loading, successfully uploaded, searching please wait

The importance of status visibility depends on the artifact, but the spinning wheel of death, as it is sometimes called, is one way software/web tools let us know something is going on and we (as users) do not need to do anything else—yet. I do think the visuals that show percentages of status progress are far more helpful than those that do not.

  2. use familiar, real-world language — no jargon, no site-specific lingo

Again, this will depend on the artifact and audience, but in the case of a basic to do list meant for the general public, this is solid advice. They can’t use it easily if they don’t understand it.

  3. users should be in control — nothing relevant to the experience should be happening behind the screen

This hints at transparency and data security too. We can have users forfeit control (of their data and usage behavior) by including a long terms and conditions page (that they won’t read) and have them mindlessly click “agree” so the artifact can take control of whatever benefits the designer. I do not think this kind of unethical strategy should be employed, but it does feel more like the norm today.

  4. follow industry standards — Ctrl+S means save regardless of platform (or Cmd+S, because Apple)

It is also important, then, to keep up with changing standards. People (young ones, especially) are always tweaking the way technology is used, and if a designer only sticks to the standards of their time, they will be missing opportunities to improve their product’s usability.

  5. don’t let users make mistakes — multiple levels of undo, pop-up in-place warnings about required form fields, greyed-out representation of features unavailable in the current context

This means that the designer needs as much usage data as possible to see what mistakes can be made. This is hard to predict. Just like how people can find creative ways to use a product that were completely unexpected, they can also find an infinite number of ways to misuse a product so that it doesn’t work at all.

  6. recognition over recall — don’t make users remember or have to think

The whole point of technology is to make tasks easier. The harder the user has to work or think, the less need they are going to have for that particular tech.

  7. flexible designs — experts should have shortcuts and other tools that aren’t visible to novices who will be distracted or confused by them

Interesting concept. Not sure I agree completely, but it does make sense to at least let users know in some logical way that advanced features are available to more expert users. We see this many times indicated with an “advanced” button that reveals those features. That seems like a good application of this design principle.

  8. minimalist design — don’t clutter the screen, don’t add images as decoration

Yes! Yes! Yes! I can’t imagine anyone loves their screens cluttered with ads and things they don’t need. This should be #1 on the list. Only add what is needed. As for not adding images for decoration, I might argue that some images that appear to be decoration are really serving some important other purpose as well, like appealing to emotion. A picture of a tropical island might have the effect of calming the user, and paired with music (that some would initially think is superfluous) could actually have a positive effect on the user’s experience.

  9. no error should be fatal — offer clear signposts and ways to start over efficiently, auto-populate form data when possible

I have not put much thought into this yet, but I can see how one fatal error could lose a user for good. As for auto-populating, that is helpful most of the time. I always appreciate when “United States” is listed at the top of a dropdown that is otherwise alphabetical. I hope that, for users in other countries, designers make their country rise to the top based on IP address or something—especially for our friends in Zambia. (A small sketch after this list shows what a couple of these ideas can look like in practice.)

  10. provide help — but design so no one needs it (and assume no one will read it)

But what help will the user need? How many avenues of help are enough? Chatbot features can sometimes be helpful, but they have a history of being frustrating too. Not to mention they can clutter a screen. I still appreciate a live chat with a human assistant, but those are not always practical. FAQ pages are hit or miss; the most thorough can be too long to work through, and the more basic will miss too many common issues. User input is essential to determine where the help should be provided.
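To make a couple of these heuristics a bit more concrete, here is a minimal, hypothetical HTML sketch (my own illustration, not taken from any of the artifacts discussed in class): a required field with an in-place warning, a submit button that stays greyed out until the form is valid, and a country dropdown that puts the most common choice first so a skipped field is never fatal.

    <!DOCTYPE html>
    <html>
    <body>
      <form>
        <!-- "Don't let users make mistakes": the field is marked required and an
             in-place warning appears while the user types, not after submission. -->
        <label>Email (required): <input type="email" id="email" required></label>
        <span id="warning" style="color: red; display: none;">Please enter a valid email address.</span>
        <br>

        <!-- "No error should be fatal": pre-select the most common choice so a
             skipped field does not wreck the submission; the rest stay alphabetical. -->
        <label>Country:
          <select>
            <option selected>United States</option>
            <option>Australia</option>
            <option>Brazil</option>
            <option>Canada</option>
            <option>Zambia</option>
          </select>
        </label>
        <br>

        <!-- The submit button stays greyed out (disabled) until the form is valid. -->
        <button type="submit" id="submitBtn" disabled>Submit</button>
      </form>

      <script>
        const email = document.getElementById("email");
        const warning = document.getElementById("warning");
        const submitBtn = document.getElementById("submitBtn");

        // Re-check validity on every keystroke: show or hide the warning and
        // enable the button only when the email field is valid.
        email.addEventListener("input", function () {
          const valid = email.checkValidity();
          warning.style.display = valid ? "none" : "inline";
          submitBtn.disabled = !valid;
        });
      </script>
    </body>
    </html>

Nothing about this sketch is sophisticated, but even a toy page like this forces the designer to decide where warnings appear, when buttons become clickable, and which defaults spare the user from dead ends.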

UX Project: Fully Functional Demo Website Coded by ChatGPT

Demo website created with ChatGPT; click image to view working demo.

I have never written a line of computer code in my life. I don’t know the first thing about the differences between HTML and Java, or any other software languages. But harnessing the power of ChatGPT today, I created the website pictured above in less than an hour. Mind you, this was not some drag and drop, customizable website template process like many website builders offer (Wix and WordPress, for example). I have used those without breaking a sweat before (this WordPress site you are reading is an example), but those are not coding efforts. What the free version of ChatGPT was able to create for me was functional HTML code generated completely from scratch (or rather, translated and generated from my written English language prompts). I then pasted the code into the free web “space” offered by W3Schools and voila! Operational website.

The UX class assignment I was working on gave me the option to create a mockup of a to-do list application/website (think basic non-operational image) or go a step further and play around with GenAI to see if I could create a working model. Despite my fascination with ChatGPT, and my frequent use and exploration of it over the past year as an educational tool, I was hesitant to use it for coding purposes. I was intimidated. A quick demo at the end of our online class meeting changed my posture though (thanks Franco and Dr. Pullman!).

Here is a summary of how I created my online to-do list website.

  1. Set up account at W3Schools (<5 minutes).
  2. Went to ChatGPT and prompted it as follows: “Please create a web page in html that includes a to-do list feature that allows the user to add items to one of three catagories, each of which has a column on the page. Name the three categories home, teaching, and scholarship. Provide me the html code.” (<5 minutes)  *I still have no clue what HTML code is or how it works. I only asked it to use HTML because I thought I heard someone say it is the easiest or most basic code to work with.
  3. Copied the code generated by ChatGPT, then went back to W3Schools, created a “New Root File,” and pasted the code into the text box. (2 minutes) *This was the only tricky step for me, and it was the step I was scared of before seeing my classmate do it. The user interface of W3Schools might make sense to people who know a little bit about coding and web design, but to me, a complete coding novice, it was not intuitive.
  4. Clicked “Run.”
  5. Previewed the newly born website.
  6. Went back to ChatGPT, asked it to make specific changes, and had it rewrite the entire code. Then returned to W3Schools, deleted the root file, and replaced it with the new code. (40 minutes) *I did this probably six or seven times, requesting one or two changes each time, and then testing it out.

Here is a link to the entire conversation I had with ChatGPT if you want to see all of the prompts I gave it. Note: I do communicate with the machine as if it is a human assistant/tutor/student, so I typically use pleases, thank yous, etc. I do this mostly because I don’t want to develop less polite communication habits, not because I think of the machine as a person–but I don’t claim to be completely immune to the ELIZA effect either.
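For anyone curious what the generated page roughly looked like, here is a stripped-down sketch in the same spirit. To be clear, this is my own reconstruction for illustration, not the code ChatGPT actually gave me (which was longer and had more styling): three columns named Home, Teaching, and Scholarship, each with a text box and a button that appends new items to that column’s list.

    <!DOCTYPE html>
    <html>
    <head>
      <title>To-Do List</title>
      <style>
        /* Three side-by-side columns, one per category */
        .column { float: left; width: 30%; margin: 1%; border: 1px solid #ccc; padding: 10px; }
      </style>
    </head>
    <body>
      <h1>My To-Do List</h1>

      <!-- One column per category; each has its own input, button, and list -->
      <div class="column">
        <h2>Home</h2>
        <input type="text" id="home-input">
        <button onclick="addItem('home')">Add</button>
        <ul id="home-list"></ul>
      </div>
      <div class="column">
        <h2>Teaching</h2>
        <input type="text" id="teaching-input">
        <button onclick="addItem('teaching')">Add</button>
        <ul id="teaching-list"></ul>
      </div>
      <div class="column">
        <h2>Scholarship</h2>
        <input type="text" id="scholarship-input">
        <button onclick="addItem('scholarship')">Add</button>
        <ul id="scholarship-list"></ul>
      </div>

      <script>
        // Read the text box for the given category and append it as a new list item.
        function addItem(category) {
          const input = document.getElementById(category + "-input");
          const text = input.value.trim();
          if (text === "") return;          // ignore empty entries
          const li = document.createElement("li");
          li.textContent = text;
          document.getElementById(category + "-list").appendChild(li);
          input.value = "";                 // clear the box for the next item
        }
      </script>
    </body>
    </html>

The whole point of the exercise, though, is that I never had to write anything like this by hand; each of my English prompts produced (and then revised) a complete page along these lines.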

Key Takeaway

When I found out last year that ChatGPT could write functional code when prompted by English text, I had a hunch that English (or any common spoken/written human language) had just become the new computer language. But I then read somewhere (wish I could remember where I saw it) that English could never really replace the learned skill of computer coding because the English language is too ambiguous, vague, and complex to work on the logic level of a computer. That sounded plausible, so I shelved that hunch and left it for another day. Well, today is that day. I am now convinced that the “skill” of learning, understanding, and writing computer code can be almost completely replaced by the combination of GenAI and effective English prompt engineering. I do understand that there are still critical reasons why some people need to understand and practice coding, but the number of those people will drop precipitously now as non-techies like me partner up with GenAI to do what most of us never dreamed we could do in a digital space–at least not without hours upon hours of instruction in a host of computer languages. The new computer language is indeed English: specifically, concise English delivered repeatedly and patiently to a non-human audience with superhuman coding abilities. Whoever said English wouldn’t replace coding languages because it is too ambiguous, vague, etc., failed to realize that users of English can be insightful, creative, and persistent enough to overcome those inherent problems with the language. They also failed to realize that generative AI is (often, but not always) incredibly competent at making sense out of human language inputs.

UX Takeaways

ChatGPT and other GenAI tools make the designer’s job way easier, but only if that designer can communicate properly with the machine, has some patience, and recognizes that designing with AI as a co-pilot is an iterative process.

A designer still needs a solid plan and then decent feedback to make appropriate improvements. AI can provide both if prompted correctly, but human feedback should always be sought to help the designer catch their own blind spots.

Examining Some UX Case Studies and Creating a Case Study for ChatGPT, with ChatGPT

After looking at a handful of UX case studies, it appears that some combination of the following components comprises an effective study:

Design Statement

Needs Assessment/Interviews with prospective users

Functionality requirements and constraints

Benchmarks (competitors)/Competitive Analysis

Personas

Storyboards/Wireframing

Task flows

Prototype drawings

Example of user experience/Test uses and findings

But some studies do a better job than others at grouping these components under broader headings that make the project easier to follow sequentially and thematically. The Airbnb study described the process in these five steps: Explore, Define, Ideate, Develop & Learn. This seems to capture the generalities of many of the elements that case studies have in common. The “Problem, Solution, Revenue” model seems like another good way to organize a case study–although I find it interesting that the revenue portion comes last. I would assume most “successful” developers allow the revenue stream to dictate the solution–for better or worse.

One case study that caught my attention is the Wikipedia study. I found this study especially interesting because, along with millions of other people, I am very familiar with the iconic interface and even appreciate the simplicity of its design. I couldn’t imagine it any other way–but many people more insightful and imaginative than I am have been making changes to the design of Wikipedia sites for years (there are actually hundreds of different versions of Wikipedia). One of the key takeaways from this study is the idea that fully functional prototypes “(versus mockups, or even a click-through prototypes) communicate designs significantly more clearly, and facilitate higher quality discussions and feedback.” Of course this comes at a steeper investment of time and resources, but the lesson should push UX researchers to strive beyond basic sketches and mockups. This Wikipedia case study is less complete than other examples, but it is worth a look because the components that are discussed are in-depth and relatable for anyone who has used Wikipedia extensively.

UX Case Study Designed by ChatGPT

I asked ChatGPT to design a UX case study (with itself as the subject). Here is the specific prompt I gave:

Me: Using a “Problem, Solution, Revenue” model, please create a UX case study for the ChatGPT website. (1/12/2024)

ChatGPT:

Title: Enhancing User Experience on ChatGPT Website

Problem: The ChatGPT website currently faces several user experience challenges, hindering optimal user engagement and satisfaction. Users often find it difficult to navigate through the website, locate relevant information, and understand the full range of capabilities offered by ChatGPT. Additionally, there is a lack of interactive elements that could enhance user interaction and make the website more user-friendly.

Solution: To address these challenges, we propose a comprehensive redesign of the ChatGPT website, focusing on improving navigation, providing clear information, and introducing interactive elements. The redesign will follow a user-centric approach, ensuring that visitors can easily access the information they need and understand the capabilities of ChatGPT.

  • Redesigned Homepage:
    • Clear and concise messaging to explain ChatGPT’s capabilities.
    • Streamlined navigation to guide users to different sections.
  • Interactive Demonstration:
    • A live demo section allowing users to interact with ChatGPT in real-time.
    • Clear instructions and examples to showcase the model’s capabilities.
  • Information Hierarchy:
    • Organize information in a clear hierarchy, making it easy for users to find what they’re looking for.
    • Use visual cues and concise text to communicate key points.
  • User Onboarding:
    • Implement a user-friendly onboarding process to guide new users through ChatGPT’s features.
    • Provide tooltips and hints to help users navigate and understand the platform.
  • Responsive Design:
    • Ensure the website is fully responsive across various devices, providing a seamless experience on desktop, tablets, and mobile devices.

Revenue: The improved user experience is expected to lead to increased user engagement, longer session durations, and higher user satisfaction. This, in turn, can contribute to higher conversion rates for premium features and services offered by ChatGPT. By attracting and retaining more users through an enhanced website experience, we anticipate a positive impact on overall revenue generation.

Outcome: The redesigned ChatGPT website, with a focus on improved user experience, is expected to address the current challenges and create a more inviting and interactive platform. Users will have a clearer understanding of ChatGPT’s capabilities, leading to increased user satisfaction and potential revenue growth. Continuous monitoring and user feedback will be crucial to refining the website further and ensuring an ongoing positive user experience.

###

To start, I think this outline is far too vague to be very helpful. Maybe a smart UX designer could work with the initial problem analysis in the outline and make a bunch of personal judgements about specifics, but the examples I looked at were far more specific in identifying needs and problems. I shouldn’t be too surprised that the machine didn’t produce something more specifically critical about itself, but I did have higher hopes. Of course, there is always the chance that my own prompt wasn’t specific enough to start with.

What is especially interesting about the outline created by ChatGPT is how it suggests a revenue strategy that relies on increased user engagement. On its face, this seems at least a little specific, but I am also willing to bet the machine is just pulling a probability here from its training data related to how tech companies generally drive their income, as opposed to evaluating or at least balancing the best ethical choice for the end-users. The maximum-engagement model runs completely counter to what Sam Altman (CEO of OpenAI, the company behind ChatGPT) said in congressional testimony last year. Altman, responding to concerns about tech addiction issues, claimed that “we try to design systems that do not maximize engagement.” His reasoning was that they lacked sufficient computing resources to even support more engagement. But like many (most?) other tech companies, OpenAI is now embracing the increased-engagement model to drive revenue. This was even specifically mentioned on their recently released ChatGPT store. So the fact that ChatGPT included this typical revenue model in its response to my request for a UX case study is not altogether surprising, but it does go directly against the previously stated claims of the company’s CEO.

This idea is important for UX studies because designers have competing factors that drive their designs. Two major factors that can find themselves in conflict are the maximizing of revenue and the maximizing of the benefit the product brings to the user. Very often these two factors can overlap, and that is the best-case scenario for all. When a product is tweaked or designed in a way that makes it both cheaper to produce and safer or more enjoyable for the consumer, that is ideal. But when a company prioritizes profit over the user’s best interests, we can end up with tech products that cause problems in a user’s life that could outweigh the benefits. Screen addiction for kids and negative mental health outcomes for users are two potential downsides that ChatGPT’s initial case study ignores. This seems incredibly important for UX designers to keep in mind as they go about their research, planning, and execution.

Here is my second prompt and the response from ChatGPT. I was after a better UX case study (more specific, with a more user-focused revenue strategy):

Me: Can you re-do that and be far more specific about the problems, and then instead of simply suggesting a revenue stream based on maximizing engagement, figure out a strategy for a revenue stream that protects the users from the harms and negative outcomes that could come from overuse and overreliance on tech products like ChatGPT.

ChatGPT: 

Title: Mitigating User Risks: A User-Centric Redesign for ChatGPT Website

Problem:

  1. Information Overload:
    • Users struggle to grasp the full scope and capabilities of ChatGPT due to a lack of clear and concise information on the website.
    • Ambiguity around the model’s limitations and potential risks leads to confusion.
  2. Navigational Challenges:
    • The current website structure lacks a logical flow, making it difficult for users to navigate and find relevant information.
    • Users often abandon the site due to frustration with the disorganized layout.
  3. Limited User Guidance:
    • New users lack sufficient guidance on how to interact with ChatGPT, resulting in a suboptimal user experience.
    • Ambiguous onboarding contributes to users feeling overwhelmed and disoriented.

Solution:

  1. Detailed Information Architecture:
    • Create a dedicated section that clearly outlines ChatGPT’s capabilities, limitations, and ethical considerations.
    • Provide examples of appropriate use cases and communicate the importance of responsible AI use.
  2. Streamlined Navigation:
    • Implement a user-friendly navigation system with intuitive labels and a clear hierarchy to improve the overall website structure.
    • Prioritize key sections such as documentation, use cases, and ethical guidelines.
  3. Interactive Onboarding:
    • Develop an interactive onboarding process with step-by-step guidance on using ChatGPT responsibly.
    • Include educational elements to inform users about potential risks and the importance of ethical usage.
  4. User Education Hub:
    • Establish an educational hub containing articles, FAQs, and video content to help users understand the technology better.
    • Address common misconceptions and concerns to foster transparency.

Revenue Strategy: Instead of maximizing engagement without bounds, we propose a revenue strategy that prioritizes user well-being and safeguards against overuse:

  1. Usage Tiers:
    • Introduce tiered pricing plans with usage limits to encourage responsible usage.
    • Offer educational resources to users who approach or exceed their usage limits, promoting awareness of potential negative consequences.
  2. Ethical AI Certification:
    • Create a certification program for businesses and developers, emphasizing responsible and ethical AI usage.
    • Generate revenue through certification fees, with a portion allocated to ongoing user education initiatives.
  3. Premium Support for Ethical Use:
    • Offer premium support plans specifically tailored to guide users in ensuring ethical and responsible AI practices.
    • Revenue generated from premium support can be reinvested into user education and awareness programs.

Outcome: The redesigned ChatGPT website, focusing on user education and responsible usage, aims to mitigate potential risks associated with overuse and overreliance on AI technology. By implementing a revenue strategy that aligns with ethical principles, ChatGPT can foster a community of users who prioritize responsible AI practices, creating a positive impact on both user well-being and the product’s long-term sustainability. Continuous user feedback and iterative improvements will be crucial in refining the approach over time.

###

This is a marked improvement, which makes me think that much of the time when someone finds ChatGPT to be creating poor boilerplate writing, it could very well be due to their choice to accept the first thing the machine spits out. Better prompts create better responses, and a two-step process with a call for specific improvements to a first response yielded much better results for this meta-UX case study outline.

The altered revenue section is intriguing and maybe even impressive because a tiered revenue/access model could be an actionable, measurable strategy for the company to pursue. This could, at least in theory, keep the users’ best interests in mind while still providing a revenue stream that OpenAI could adjust with its own pricing. Maybe we shouldn’t be surprised that when we work a little harder on our own instructions and communication with the machine, giving it opportunities to improve like we would any student, we end up with better and better iterations of the original response. Maybe this is what fascinates me most about generative AI: the ability I have to treat it both as a tutor and as a student.