Week 1: Case Studies
My introduction to case studies as a form of research came long ago, during my high school years, when I was developing an interest in technology-based fields as potential avenues for employment: in my AP Psychology class, and in a very interesting visit with my Anatomy class to a local cadaver lab to witness how medical students use individual bodies for their educational development. While the cases I witnessed in my Psychology and Anatomy classes perhaps aren’t the most applicable to user experience and design (except in some extremely niche scenarios), these early experiences instilled in me the confidence that case studies can be a worthwhile way to accelerate the development of a product, a body of knowledge, or a research trajectory.
For the burgeoning researcher or UI/UX professional, understanding these case studies is a necessary component of their education. Thankfully, there is a long history of publicly available UX case studies, newer AI models that consolidate the plethora of advice found on the internet, and additional case studies that, while publicly available, have struggled to reach a wide audience. In this post, you will find the methodologies that AI and other case study models recommend for creating your own case studies, along with a brief analysis of a case study I found that shows what this advice looks like in practice.
Practical and AI Case Study Models
- Background/Problem Statement/Explore
- Identify the problems you’re experiencing with the current model and the needs of the organization. It’s important to take a moment at the beginning of the process to determine your organization’s needs and the purpose of your end product. From there, you might identify very clear problems with your product: those glaring issues that you don’t need user testing to understand, but that user testing might help you find solutions for. Afterwards, you can begin exploring options for next steps. Do you need to test new research tools? Are there additional needs that users might have that you’re currently unaware of? How can you improve upon the user experience?
- In your report on user testing, this may be a point where you begin to define your target audiences, unintended audiences, roles of personnel and users, etc.
- Brainstorming/Ideation
- What I would consider the second most important step of your user testing, the brainstorming stage, is intended to provide additional time to expand your user profiles and testing methods, complete preliminary research, and build low- to high-fidelity models for users to engage with (wireframes, mock-ups, drafts of websites, or interactive models).
- No matter how much brainstorming you do, you can’t plan for everything. You should plan on preparing surveys, interviews, observations, and personas for a variety of different circumstances, but if additional circumstances present themselves, you may need to complete a second case study.
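The bookkeeping described above, which circumstances you brainstormed and which research methods cover them, can be kept in simple records. A minimal sketch in Python (all persona names, goals, and methods here are hypothetical, not from the original post):

```python
# Sketch: track brainstormed personas and the research methods planned
# for each, so uncovered circumstances are easy to spot.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    goal: str
    context: str  # the circumstance in which this user meets the product

@dataclass
class ResearchPlan:
    # persona name -> list of planned methods (survey, interview, etc.)
    methods: dict = field(default_factory=dict)

    def assign(self, persona: Persona, *methods: str) -> None:
        self.methods[persona.name] = list(methods)

    def uncovered(self, personas: list) -> list:
        """Personas you brainstormed but have no planned method for."""
        return [p.name for p in personas if not self.methods.get(p.name)]

personas = [
    Persona("first-time visitor", "find a product quickly", "mobile, in a hurry"),
    Persona("returning customer", "reorder a past purchase", "desktop, at leisure"),
]
plan = ResearchPlan()
plan.assign(personas[0], "survey", "observation")
print(plan.uncovered(personas))  # the returning customer still needs a method
```

A gap in this list is exactly the signal, per the advice above, that you either extend the plan or schedule a second case study for the uncovered circumstance.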
- Design
- The design of your case study, like the brainstorming stage, should take into account a variety of scenarios. In particular, you need to consider what your users are going to be using your product for, and the variety of uses it could potentially have. Using the wireframes, mock-ups, and personas developed during the brainstorming phase, administer these tools to research participants and have them complete tasks that their personas would want to use your product for.
- You might have users complete a variety of tests, both to determine the efficacy of your design and its usability. Before doing your testing and research, identify ways to generate findings about your product’s problems, strengths, questionable areas, and variety of uses, so you get a holistic picture of how users perceive the product at this stage of development.
- User Research and Methods
- Once you have completed your brainstorming, drafts, mock-ups, wireframes, personas, and testing design, you can move on to the actual usability testing phase. At this stage, follow the research agenda set out in the design phase; if alterations need to be made to the research plan, you can complete additional tests at a later time.
- Evaluate Findings
- After accumulating data, potentially both quantitative and qualitative, determine how that data might inform later stages of product development. Rather than following intuition about how to continue improving your product, let the data speak for itself. Your users have given you what you need to implement clear solutions, or you may have asked them directly how they would solve the problems they encountered; either way, you can now create solutions that improve user engagement, satisfaction, conversion rates, and so on.
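Letting quantitative data "speak for itself" often starts with a couple of simple summary numbers. A minimal sketch in Python, using hypothetical task results and the standard System Usability Scale (SUS) scoring formula (odd items contribute response − 1, even items 5 − response, summed and scaled by 2.5):

```python
# Sketch: summarize usability-test data before drawing conclusions.

def task_success_rate(outcomes):
    """Fraction of attempts in which participants completed the task
    (1 = completed, 0 = did not)."""
    return sum(outcomes) / len(outcomes)

def sus_score(responses):
    """Standard SUS scoring for one participant's ten 1-5 Likert items:
    odd items score (r - 1), even items (5 - r); sum times 2.5 gives 0-100."""
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical results from eight participants attempting one task.
checkout_task = [1, 1, 0, 1, 1, 1, 0, 1]
# One participant's hypothetical answers to the ten SUS items.
survey = [4, 2, 5, 1, 4, 2, 4, 2, 5, 1]

print(f"Task success: {task_success_rate(checkout_task):.0%}")  # 75%
print(f"SUS score: {sus_score(survey)}")  # 85.0
```

Numbers like these won't tell you *why* users struggled (that's what the qualitative data is for), but they give you a defensible baseline to compare against after you implement changes.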
- Reflection
- Once all is said and done, you need to reflect on your testing process. Review each step of the process and decide whether that step was successful, what made it successful, what you might change in future case studies, or what that step failed to communicate to you or your users. This might look like realizing that wireframes simply weren’t enough to understand how users would engage with the final product, so in future testing you might work from a functional draft of the final product.
- For example, say you start creating a new website and want to conduct user testing before you get too far into the process and waste important labor. You create a card sorting activity and a wireframe of how you intend to build the website. This testing might yield information about the organization users would prefer, but it can’t tell you about the functionality of the end product. In that case, you would need to later create a high-fidelity model with some base-level functionality to conduct further tests.
Sources:
Dr. Pullman’s conversations with ChatGPT and Copilot (https://www.gpullman.com/8122/cases.php?cases)
https://uxdesign.cc/airbnb-redesigning-for-the-new-normal-66fb273de769
https://www.behance.net/gallery/126901637/HAVEN-UXUI-Case-Study
Examining an Independent Case Study
To provide some context for this case study, Dan Jenrette discusses the user experience testing conducted on what has widely been considered one of the biggest failed launches of a World of Warcraft expansion in the last 20 years. World of Warcraft: Battle for Azeroth was the 8th installment of the franchise, so one might assume that Blizzard Entertainment already had a clear conception of how they “should” do user experience testing. From this, they identified several key areas and metrics that they considered to be of the utmost importance:
Key Areas:
- The “First Hour” experience, wherein players would first be introduced to the story of the expansion, basic gameplay mechanics, and introduction to new areas of the world.
- Dungeon Content – The core of the leveling and end-game experience, which has been part of the game since its launch in 2004.
- Island Expedition Content – New content that pits players against AI or against other players.
- Warfronts – Reminiscent of the original Warcraft games, where players would build a base and defend or attack against an enemy. Designed to try to emulate the feeling of a larger battle or ongoing war.
Within these areas, quality assurance personnel were focused on player retention, how “fun” something was, pain points that players might experience during leveling and endgame content, what killed the most players on their journey, what were some common player successes, how players felt about artificial intelligence in island expeditions, etc.
To test these things, they used multistage testing: first usability testing, then playtesting, then a post-test survey to understand user reactions. From this testing, they were able to measure how users felt about various dungeon and raid bosses (raids being long-form dungeons at a higher difficulty with more players), how they felt about AI in general and the particular AI they encountered during Island Expeditions, whether they found Warfronts to be engaging, fun content that emulated large-scale battles, and more. However, as Jenrette notes early on, user testing during development was often limited to in-house staff in departments that weren’t working on the game: accounting, human resources, staff on other games, and so on. In the eyes of many of the players who would later play the expansion, this caused the expansion’s downfall, and Jenrette corroborates this, recognizing in their reflection that even when crucial gameplay issues were identified during beta testing (testing done closer to launch to make sure that servers could meet demand and that the game was, at its baseline, functional), it was already too late to go back and make changes to improve the player experience.
The result was heavy criticism from players, notably longtime players. They complained that the new expansion stripped away too many of the systems from the previous expansion that had created engaging gameplay; that players lacked freedom in choosing to pursue different types of gear (the Azerite armor system was undertested and underwent numerous changes over the course of the expansion); that Warfronts and Island Expeditions had their difficulty toned down after internal testing, leading many players to find them unrewarding; and that endless endgame “grinds” (open-ended content that provided meaningful rewards) were installed at multiple stages to drive up user engagement. From this case study and the ensuing response, it seems clear that it’s not enough to do your user research during the development stages of a project; research is an ongoing process that needs to consider how every change might impact users’ experiences.
The Reddit thread below consolidates many of these criticisms.
(Content warning for language, mentions of genocide, misogyny, etc.)
Why is/was BfA one of the most hated expansions? — posted by u/HerrMatthew in r/wownoob