Dylan Maroney


Apr 13

Moving Forward from Last Week

Last week I wrote about the struggle of putting together a survey and receiving limited response while I was at CCCC. This caused some stress as we moved toward the final weeks of the semester, and I realized that having an alternative plan in place to jump over that hurdle was necessary – and it may still be a component of the final report for my project. However, through the grace of the chair of the Literacy Studies Interest Group at CCCC, my survey was sent out to a larger listserv, garnering more attention from graduate students and researchers at various institutions. This increased my respondents to 7, with most indicating an interest in a follow-up within the next week. With 7 responses, I should be able to make a more focused determination of stakeholder needs when I bring the data over to the report at the end of the semester.

What to Do With the Survey Data

The surveys have indicated clear preferences around certain planned enhancements to the sites: more consistent keyword identification, enhanced search and sorting functionality, a potential glossary and working bibliography of recent research, and, surprisingly, a lack of interest in collections of curated materials. That last result is somewhat surprising, because when I engage in archival research, curated collections tend to enhance my research process – but the same effect could be achieved with more consistent keywords across existing narratives. This will be a key area of interest in the upcoming interviews as I talk to respondents about their needs. Beyond that, the more consistent responses suggest personas for future user experience research too. Respondents all indicated that they are either pursuing a Ph.D. or already have one. Either our audience is made up of highly educated researchers far more than professional stakeholders, or networking caused the pool of respondents to disproportionately represent researcher and grad student stakeholders. As such, it will be pertinent for future user experience research to refocus on professional stakeholders, but this should be enough information to make decisions about what we focus on as we pursue the NEH's DHA grant.
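As a rough illustration of how I can turn responses like these into a priority ranking for the report (the selections below are invented stand-ins, not the actual survey data), counting how many respondents mention each planned enhancement is enough to surface the most-requested needs:

```python
from collections import Counter

# Hypothetical selections -- illustrative stand-ins, not the
# actual survey responses discussed above.
responses = [
    ["consistent keywords", "better search/sorting"],
    ["better search/sorting", "glossary"],
    ["consistent keywords", "better search/sorting", "working bibliography"],
    ["consistent keywords", "glossary"],
]

# Count how many respondents mentioned each enhancement.
tally = Counter(need for selections in responses for need in selections)

# most_common() yields a simple priority ranking for the report.
for need, count in tally.most_common():
    print(f"{need}: {count}")
```

Even with only 7 respondents, a tally like this makes it easier to justify which enhancements to foreground in the grant planning.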

The Importance of Networking

Getting to this point would have been impossible without networking. Talking to people who have a vested interest in the project's success is vital to ongoing user research, and finding places where such stakeholders congregate may become a component of my user experience methods moving forward. By positioning myself around my stakeholders, by talking to them about upcoming plans to enhance the websites the DALN is hosted on, and by demonstrating my own passion for the project, I likely influenced those stakeholders both to take the survey and to distribute it to a wider audience. Thus, I will be prioritizing networking in future user experience research, as it seems to be the most effective way of both borrowing the ethos of those close to my stakeholder population and reducing the need to send individual survey links to people I've never talked to before. With the ongoing increase in cybersecurity issues facing universities, clicking a suspicious link in a random email from a grad student you've never heard of may be too much of an ask for most people, and there can be no expectation that they take the time to search "Dylan Maroney GSU" to see if I actually exist. Therefore, I can't overstate how vital networking has been and will be moving forward.

Apr 08

CCCC: Context for following sections

This past week I’ve been attending CCCC, which has impeded my progress on this project in some ways and advanced it in others. The biggest hurdle CCCC presented was that I delayed sending out my survey: I knew there would be people at the conference who could provide key stakeholder positions on the site’s development, and I wanted to avoid over-scheduling interviews in a way that would inhibit my ability to reach those primary stakeholders and get their detailed feedback on the platform.

Revising Participants List

In addition to the previous stakeholders I sent the survey to, I connected with the Literacy Studies Special Interest Group at CCCC and sent the survey to an additional 5 potential users of the archive (more likely to use it than some of the previous participants). While I wait for responses from these stakeholders and others, I am going to plan out interview questions based on the survey responses I’ve already received (only two, and they are at odds with each other). It’s worth noting, though, that CCCC just ended. It will pay to be patient and not stress about a lack of responses just 24 hours after sending the survey to potential new participants.

Waiting for Responses and Alternative Plans if Needed

Since our last class meeting is scheduled for the 22nd, I intend to also analyze existing scholarship, the researchers behind it, external contributors to the blog (including their current positions and research interests), and the user data we have from site analytics to develop personas. A multifaceted approach like this should give us an opportunity to develop the site in a way that responds to stakeholder needs even when stakeholders don’t have the time to respond to a survey about those needs. The concern is that by developing such personas I will be leaving out graduate students; however, graduate students seem to be the only ones currently responding to the request I sent out! As such, it seems pertinent to develop personas primarily for the participants I wished for but haven’t been able to get ahold of.

I have already identified researchers like Deborah Kuzawa, Moira Connelly, Kara Poe Alexander, Alison Turner, and Jessica Pauszek, all of whom serve in different roles at their respective universities with literacy studies as a connecting thread. These personas may help inform blog content development, archival content curation, the metadata that should be included in each narrative (drawn from keywords used in publications and search terms used), and future conference and publication planning around interdisciplinary crossovers such as writing center, WAC, FYC, rhetoric, and professional/technical writing.

Mar 31

Purpose and Goals of This Week

This week, I wanted to develop a survey to send out to potential participants. While I’m waiting to hear back from some of the people I reached out to, I should be able to get a decent idea of how different stakeholders would engage with the project even if not every potential participant I listed last week is able to contribute to this case study. So, this survey acts as a way of getting a baseline level of information about participants’ preferences for databases and archives, whether or not they see value in the blog component of the site, and whether they would be open to an interview later down the line. This post includes a link to the Google Form sent to potential participants and a rationale for each of the questions included.

Link to Survey

https://forms.gle/kfW182MVQUpXWZjS9

Survey Rationale

What is your highest received level of education? If still in school, what degree are you currently pursuing?

While it makes little sense to record demographic information here in regard to gender, age, ethnicity, sexuality, and the other categories that surveys often collect, it’s crucial to this case study to organize participants around what they could potentially use the site for. With that consideration in mind, it’s pertinent to record participants’ education level to understand the context in which the site will likely be used. Undergraduates, for example, would likely use the database for professional or assignment-related purposes; MA students would do so under similar circumstances, with the addition of pursuing conference and publication opportunities; and those who have attained a Ph.D. would most likely use it solely for research, with the added context of either scholarly or professional research.

What are your research interests?

Similar to the first question, this question is asked with the intention of understanding whether participants are interested in literacy studies or not, and how that might affect their perceptions of the site.

Do you have any experience with archival research?

Because the site is primarily designed as an archive, it’s crucial to record whether or not participants have experience with this type of research. This will also influence future questions that pertain to the sorts of content that participants would want on the site in addition to the content currently available. One such type of content is a glossary of common concepts and terms (similar to Dr. Pullman’s website for this class), which would make it easier for such potential users to engage with the archive and blog in a meaningful way. 

Are you familiar with literacy studies as a discipline, including common terms and concepts?

This question seeks to determine users’ current understanding of the discipline that the content of the DALN site is committed to. In the case that the user isn’t familiar with the topic, the follow-up questions are designed to better understand how they might begin engaging with content unfamiliar to them. This is particularly important to understand how users would be engaging with content off-site and how we can then cater blog and source details depending on user understanding. For example, if a narrative includes an example of literacy sponsorship, should we assume that users have already read Deborah Brandt’s foundational article, “Sponsors of Literacy”? 

How do you approach new research topics and interests? 

This is a generalized question with the aim of better understanding answers to the previous question and potential user behavior as participants approach an unfamiliar topic. It is aimed mostly at the non-expert audience and their approach to what will be an unfamiliar discipline. Keeping it open-ended for those familiar with literacy studies also adds extra perspective on how participants more generally approach new topics, which could improve the experiences of non-expert users.

Would having something akin to a glossary of common terms and concepts be useful when engaging with something like literacy studies, or would you prefer a bibliography of key texts? Select N/A if this is already an area of interest.

This final follow-up to the questions on expert vs. non-expert users simply asks whether a new addition to the available content, in the form of a glossary, would be beneficial. N/A and Not Necessary are included as options for expert users and uninterested non-expert users respectively, but this question will likely indicate a type of content that would benefit those new to the discipline and make the site more compelling to those unfamiliar with literacy studies.

When using an archive or database, what is most important to you?

I’ve gone back and forth on this question, not because of what it reveals about the participants but over whether it should be answered through checkboxes or multiple choice. I went with checkboxes due to the sheer number of available choices. The choices cover common aspects of archives and databases, such as how easy it is to find the sources you’re looking for, search functionality, and identifying characteristics for each source. Answers to this question should help determine whether there needs to be further development of the site, or more concerted effort on the backend to add details to each source.
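One practical note for when the checkbox responses come back: Google Forms exports each respondent’s checkbox selections as a single comma-separated cell in the response spreadsheet, so tallying them takes a small amount of parsing (and breaks if an option itself contains a comma). A minimal sketch, with invented column contents rather than real response data:

```python
import csv
from collections import Counter
from io import StringIO

# Invented stand-in for the "Download responses (.csv)" export;
# multi-select cells arrive quoted, with choices separated by commas.
csv_text = '''"When using an archive or database, what is most important to you?"
"Search functionality, Identifying characteristics for each source"
"Search functionality"
"Ease of finding sources, Search functionality"
'''

rows = csv.reader(StringIO(csv_text))
next(rows)  # skip the header row

tally = Counter()
for row in rows:
    # Split the single quoted cell back into individual selections.
    tally.update(choice.strip() for choice in row[0].split(","))

print(tally.most_common())
```

This keeps the checkbox format’s advantage (many options per respondent) without making the analysis any harder than a multiple-choice question would be.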

A major component of the DALN is our blog where we post updates on how the project is being used pedagogically, in research, and any updates regarding the project’s development. Is this of interest to you? 

A simple question that asks users if the blog is of interest to them. Due to the variety of potential users, a description has been added to indicate the types of content regularly posted.

How would you prefer the blog be designed?

While traditionally blogs are set up to be experienced in chronological order, the variety of content posted makes it difficult to simply say “Let’s keep it in chronological order and hope users can find what they’re looking for.” Instead, the question offers a couple different options for users to emphasize their preferences.

If you don’t see yourself using the blog in its current form, is there any content you’d like to see added to it?

As an extension of the previous question: if there are types of content that participants might be interested in that weren’t listed in the prior question, what would they like to see instead? The goal, ideally, is to see how users in different disciplines might engage with the blog if content more related to their interests were included. This is mostly aimed at the participants whose answers I’m already anticipating – discussions of AI, neuroscience, and psychology may be of interest to those users whose interests deviate from the core of literacy studies.

Are you open to an interview in the timeframe of 4/1-4/14? If so, please include potential availability.

Basic, open-ended question regarding interviewing and scheduling said interview.

Mar 24

Planning for the Week

This week has been entirely dedicated to the final preparations I felt I would need to be successful in the upcoming weeks. A key consideration from past assignments is that I will need to carefully consider who will be valuable as a participant, so here I’ve determined, from prior feedback and my own reflections on those assignments, who I will be contacting to potentially participate in this study.

 

From Feedback

Dr. Pullman has directed a good bit of feedback in our earlier assignments toward helping me narrow down potential participants for this project. In those early tasks, I enlisted the assistance of my roommate (a recent M.A. graduate in Neuroscience) and a fellow graduate student in the English department (a 3rd-year Ph.D. student). While both of these audiences have potential, they don’t create a holistic image of the DALN’s needs. So, as I’ve been grappling with determining a solid direction for the early stages of this project, what I’ve gathered from some of my reading is that I would like multiple participants as representatives of the same overarching groups, with more specific identities characterized within the personas. As a result, I’ve concluded that I’d like three to four participants in each of three categories: graduate students interested in learning more about literacy (broken down into those unfamiliar with the literacy studies discipline, those familiar with archives and archival research, and those with some familiarity with the discipline); scholars in English studies (broken down into those who have published in the discipline, those familiar with modern social science research methods, and experts in archival research and development); and finally professionals outside of academic research roles who have a stake in the literacy development of others (university administrators, tutors/tutoring service providers, teachers, etc.). By breaking my participants down into these three groups, I can focus on discovering information that can lead to improvements for each group independently, and provide guidance on where stakeholder needs overlap. For example, improved search functionality may benefit all groups, while consolidating the blog and archive sites might only have substantial benefits for teachers and administrators.

 

Reviewing 10 Minute Test and Interview to Determine Necessary Participants

What I saw during the 10 Minute Test and Interview was a need to produce additional documentation for participants, especially when they have limited understanding of what the DALN is or what it’s used for. To this end, I will prepare some basic documentation that will help the aforementioned participants complete the tasks associated with my user testing. I will need to create at least three handouts detailing tasks to be completed during each testing session for the different user groups, a survey that tracks current understandings of what the DALN is and what its intended purposes are, and a post-test survey that accounts for novice users and frames questions with that perspective in mind (instead of “What would you change?”, a question that acknowledges this is a new experience, such as “What were some of the hurdles you came across trying to complete X task?”). While it might be good to abandon the desire for a completely novice participant, I think it will still be valuable to keep that perspective in mind, so that other brand-new users aren’t left in the dark during development.

Who I am Planning on Reaching Out to

Graduate Students

In this category, I plan on recruiting the same participants as in the 10 Minute Test and Interview as well as one new participant. Eimhear Davis and Fikko Soenanta fit the not-familiar and somewhat-familiar categories well, and I will consult with Dr. Ben McCorkle and Dr. Katie Comer to see if they know anyone who would be interested in participating at Ohio State University or Portland State University. In the meantime, I will also be reaching out to one of my peers, Rachel Woods, whose experience in archival research should provide insights into the archival make-up of and approach to the project.

 

Scholars in English Studies

For scholars in English studies, I plan on consulting similar identities to those in the graduate students group. To this end, I plan on enlisting the help of Dr. Ashley Holmes, for her prior work using the DALN in a class of hers on research methods; Dr. Lynée Gaillet, for her expertise in archival research methods and development; and a third participant drawn from those actively using the DALN for their own research. For that slot, I am still choosing among Deborah Kuzawa (Ohio State University), Moira Connelly (Pellissippi State Community College), Kara Poe Alexander (Baylor University), Alison Turner (University of Denver), and Jessica Pauszek (Boston College). I’ve included the institutional affiliations because that’s primarily where my struggle in choosing an individual has come from. Kuzawa being at Ohio State may make it easier to leverage existing relationships to get in contact with her; Connelly offers the unique perspective of someone at a community college; Alexander directs Baylor University’s writing center, adding to her unique perspective; and both Turner and Pauszek have no affiliation with me or those I’m affiliated with and can still give the perspective of a researcher at private universities. All in all, since this project has limited time to be completed, I think I will choose to reach out to Kuzawa (who also has overlapping research interests in queer composition), but I wanted some feedback and perspective before going through with it.

Professionals

Professionals have been the hardest group to figure out how to recruit for this project. Because I’ve gone straight through my education, I have limited access to the formal roles in which literacy narratives might be valuable to others, but I have some relationships that might be beneficial for this work. In the end, I’ve identified two key individuals whom I will be contacting: John Medlock, the current assistant dean for enrollment services at GSU, whom I worked with last summer in GSU’s accelerator academy; and Leslie Quigless, the director of a private tutoring service that I work for. For the third, I have been weighing the inclusion of my aunt, Jill Zimbler, who could offer a public school teacher’s perspective, but I’m not sure if this would be the best inclusion for this project either.

 

Nonetheless, this planning week has been fruitful and given me a solid trajectory to start implementing the project in a more effective capacity!

Mar 18

As I’ve been trying to figure out exactly what I want to do for this project, I’ve realized that I want it to overlap with some of the other things I’ve been working on that I wasn’t planning on doing user experience research for. To this end, I plan on expanding the work I did for the early 10 minute testing activity and the interviewing activity to fully examine the Digital Archive of Literacy Narratives, or at the very least to conduct research activities that will help us learn which audiences to target with the site changes that would have the greatest impact.

Object to Analyze, Hypothesis, and Research Questions

As we work toward solidifying the plans for our archive revitalization, the focus will be on the archive site. Testing will need to determine who would want to use the site, why each participant would want to use the archive, how they might go about uploading and searching for narratives, what sorts of content they would be looking for, and where they find themselves when directed to a single destination. The core hypothesis is that if participants are given limited direction with a set of tasks to complete, then under the current iteration of the archive they will have difficulty doing what is asked of them. To steer the study in a more generative direction, I will also be using the following research questions to guide further development of study materials.

  1. What are the differences among the different personas?
  2. How do different users use the website and content there differently?
  3. Why might users leave the site and what are they able to find solely on the website?

These questions will help focus my research more on the success/failure rate of tasks among different participants rather than trying to cater testing to any one group. 

Personas and Population

This study will aim to work with consistent study populations at each stage, meaning that I will be limited by the categories that naturally have fewer participants assigned to them. To this end, my current aim is to have 3 participants in each persona group, with at least 12 participants total. The main personas are detailed below.

Graduate students in English: This group is made up of students and Ph.D. candidates within Georgia State University’s English Department who may have some familiarity with literacy studies, but would be limited depending on their concentration and expertise. 

Graduate students outside of English: This group would be made up of students outside of English studies with a focus on graduate students in intersecting expertise such as critical studies, philosophy, political science, education, communications, and/or psychology.

Researchers/Instructors: These two roles may cross over, so participants might satisfy both, with different survey and interview questions for each role and its own uses for the site. The main difference will be whether they would primarily use the DALN to supplement their research with additional primary sources, or whether they would implement narratives from the archive, or associated activities, in class.

 

Data Needed and Methods for Acquiring It

I will primarily acquire data through surveys and interviews, as opposed to observations through screen recorders and journey mapping. This is mainly because of the stage of development the DALN’s revitalization is currently at. However, if it proves valuable to do testing on the current iteration of the site (beyond the earlier 10 minute testing we did), then observations could be an efficient use of time – this is something I will need to stay aware of as research continues. Despite some questions about specific methods, the data I will be looking for includes more expansive demographic data, insights into user motivations and goals when using the platform, details about prior exposure to literacy studies, user feelings about trying to find specific narratives or functions, user expectations when first accessing the site, etc.

 

Outline for Study

Review 10 Minute Testing assignment and Interviewing Practice assignment to get early perceptions of what will need to be tested – 3/18-3/24

Reach out to potential participants through personal relationships and GSU English department listservs with basic questionnaire to determine fitness for case study – 3/18-3/24

Create and distribute survey to participants, including a question regarding follow-up interviews – 3/25-3/31

Finish refining interview questions using data from surveys and prior assignments to conduct interviews – 4/1-4/7

Compile and refine data from surveys and interviews – 4/8-4/14

Draft report on testing with acknowledgements on general findings, commenting on outliers and potential causes, planning for continued testing, and recommendations on how to move forward with production – 4/8-4/22

 

Mar 10

Overview of the Book

User Experience as Innovative Academic Practice is an edited collection that seeks to demonstrate the value that user experience principles and research methods have in designing course curricula and degree program parameters. To facilitate that approach, Kate Crane and Kelli Cargile Cook enlisted support from instructor-researchers at universities around the US to better understand how they are utilizing user experience research to generate insights into their own practices. As a result, the book is primarily constructed as a collection of case studies that demonstrate user experience research and design in praxis, as well as guidance for readers who want to try implementing user experience approaches to program/course assessment. To that end, each contribution not only provides detailed accounts of its unique circumstances and considerations, but also materials that readers can adjust for use at their own institutions (such as surveys, personas, profiles, infographics, tabulated data, research questions, and more). Naturally, this means there are three key questions we need to consider to assess the value of a text like this: Who is the book written for? What sort of content makes up the bulk of the text? And what is the value of the information and materials the book provides?

Who the Book is for

This edited collection is primarily for other instructor-researchers or instructor-practitioners in technical and professional communication (TPC) and similar fields such as communication, computer science, and rhetoric and composition. The goal is to give these audiences an opportunity to witness how UX principles are applied in a variety of institutional contexts. In addition, we might consider university administrators and program representatives (graduate studies directors, department chairs, advisors, writing center directors, etc.) as further audiences for what the book can offer. This variety of potential audiences creates opportunities to bridge the gaps that crop up between any individual’s understanding of the university ecosystem and the reality we often face.

While these are the audiences Crane and Cook sought, there are other stakeholders in the university who should be considered when it comes to the advice the collection gives. Graduate teaching assistants (GTAs), who often exist in a liminal space while learning how to be effective pedagogues, usually wouldn’t have the resources available to commit the time and labor to extensive user experience testing to improve their students’ classroom experiences. As such, finding avenues for such integral members of any English or Communication department to implement user experience testing is an imperative placed in the hands of the universities they attend and work for. As I discuss in the next section, some of the collection’s contributors offer insights into how GTAs could be given the privilege of participating in departmental assessments.

Content of the Book

The collection is split into thirteen chapters, each offering a user experience case study at the authors’ respective institutions. The purpose of each chapter differs, and they vary in how much emphasis is placed on research methodologies and decisions versus the non-generalizable information that helps readers understand the unique context of each university. The result is an amalgamation of content that varies in usefulness. Here I provide a brief synopsis of each chapter, focusing on the type of study conducted, in hopes that it helps guide you to what you would find most valuable for your own circumstances.

Chapter 1

Chapter 1 acts as an introduction to user experience research and design for teacher-researchers and places the responsibility of information architect on the instructor. As such, instructors must seek to understand the complexity of student experiences and the university systems that both support and impede a student’s progress toward their degree. The instructor, then, needs to implement UX methodologies to assess their own part in supporting students. This chapter acts as a rationale for making user experience principles the basis of assessing and critiquing our own practices.

Chapter 2

Chapter 2 extends this conversation and positions students not just as users of our curricula (products), but as potential participants in the research as student-users or co-creators. Having students participate closely in the research process enables you to create dynamic journey maps to understand their path through your class or through a university process such as enrolling in classes or setting an appointment with their advisors. Beyond keeping tabs on students’ progress, having this methodology in mind also allows you to carefully consider the goals of the project, the scope of the research, and the contexts the research is conducted in. From there, the journey map could take stops at affinity clustering, observations, interviews, focus groups, prototyping, operative imaging, etc.

Chapter 3

In chapter 3, Martin discusses constructing student profiles as an iterative process. To create effective profiles, Martin implemented surveys, observations, and end-of-semester student evaluations to best understand who their students were, and from there constructed profiles that effectively captured the likeness of the students in the class.

Chapter 4

In chapter 4, Gonzales and Walwema discuss how transliteracies and cultural competencies are invaluable tools for user experience research. It’s not enough to assume diversity and unique perspectives; instead, the researcher must listen and consider how to successfully create intercultural products that meet the needs of the user. In the case of students, they note that creating UX-inspired assignments that place value on student interests and identity culminates in more engaging class activities.

Chapter 5

Walker, in chapter 5, demonstrates the value of user experience maps that track the needs, expectations, and wants of the user, along with their potential route to a goal. Similar to how journey maps were used in chapter 2, Walker utilizes user experience maps as a way of reflecting on the scaffolding of prior user research – triangulating data from surveys, interviews, and observations to create a persona, and then seeing how a user who falls under that persona might engage with the product. Most importantly, though, allowing users to participate in the research by including them in mapping exercises leads to improved experiences.

Chapter 6

In chapter 6, Pihlaja reiterates the role of students as participatory users in the research process, and goes further by naming them expert users of the products associated with their classes. Because of their experience, and because they are much closer to the student experience than instructors often are, conceding expert status to students becomes an important asset in assessing products like syllabi, course schedules, assignment sheets, and rubrics. We might extend this even further to include university websites, course catalogues, university payment systems, program requirements, etc. This user-centered design approach takes that expertise and attempts to improve user experiences based on the recommendations of these de facto experts.

Chapter 7

Chapter 7 steps out of the classroom and instead focuses on the implementation of an oral communication lab for students to record and practice public speaking. Clark and Austin conduct their research by focusing on multiple metrics to determine the efficacy of the lab: statistics on student usage, measures of student success on assignments after using the lab, student behaviors when using the lab (such as moving furniture, how they dressed, and how they adjusted lighting), and differences between voluntary and compulsory usage based on assignment guidelines. The result was that they could focus less on how they intended the lab to be used, and more on how students wanted to use it, allowing them to better adapt the space to accommodate student learning.

Chapter 8

In chapter 8, Thominet offers the first foray into more structured user experience research, maintaining their position of authority as a faculty member and using surveys and workshops to generate insights for program outcome assessments. With other faculty members, they assigned students to one of two workshops where they would generate a list of things they believed were most important. The workshop questions were customized for students, faculty, and practitioners, allowing participants to answer any prompt they wished. This process, Thominet notes, is recursive. They offer a heuristic to help guide other researchers, focusing on listening, problem setting, ideating, and iterating so that potential problems can be identified and rectified as needed.

Chapter 9

While most of these chapters have focused on studies implemented over a year or a single semester, Cargile Cook instead offers an example of how longitudinal studies can generate greater insights over multiple phases of research. Though the study was not yet complete, Cargile Cook was already noticing more substantial findings from initial surveys about the program’s efficacy that otherwise would have gone unnoticed under normal program assessment initiatives.

Chapter 10

While Kastman Breuch et al.’s chapter was less user-experience heavy, it discussed at length the value of placing students with practitioners in the field. Rather than attaching user experience research to the program from the outset, they started by assessing student needs and building a course assignment that allowed students to work with clients outside the university, then assessed the assignment afterwards through surveys and interviews. As with many pedagogical practices, the approach was implementation followed by assessment, and in some ways it seems the program could have been more effective with more extensive user profiling and data collection activities.

Chapter 11

In chapter 11, Zachry describes something other contributors didn’t: the very real issue of double binds. Zachry defines double binds as conflicting policies, feedback, and data that make it exceptionally difficult to decide how to implement feedback gained during user research. In the context of their university and classroom, they note most explicitly how students didn’t understand university policies that could affect how assignments were graded, or whether attendance was necessary. In the wider context of user experience research, they also acknowledge implementation costs as a major concern in other user experience praxis.

Chapter 12

In Bay et al.’s study, rather than conducting user experience research on student experiences with class activities, they placed students in researcher positions as they sought to understand student perceptions of the technical and professional communication major at their university. So, to teach about user experience, they had students participate both as users and researchers of the “product” of the university. This yielded valuable skills, as students had to discover alternative methods of finding alumni from the school (the school wasn’t keeping track), producing an alumni list that enables future iterations of the research.

Chapter 13

Masters-Wheeler and Fillenwarth tackle the user experience topic from the administrator angle, arguing for program redesign based on how current and graduated students interact with program faculty, staff, resources, and requirements. The goal was to make the process of moving through the university easier, and their findings reflected that. They acknowledge near the end of the chapter that there was a pressing need to update resources and make it easier for students to understand what was expected of them as they moved toward graduation. Like other contributors, they note that this is an iterative process that needs to reassess user needs as context changes; they give the example of Covid-19 and the shift away from print resources in favor of digital copies.

Efficacy of the Book’s Content

Kate Crane and Kelli Cargile Cook have done something tremendous here, as have all of the contributors to the collection. When we ask ourselves how we can improve students’ experiences in our courses, or try to make sense of why something was seemingly ignored by half of them, they offer potential avenues to better understand how we can fix those issues. I can’t overstate how much this book has made me reconsider my own pedagogical practices, especially in allowing students to take a participatory role in constructing the courses that I teach.

Mar 02

Introduction

For the interview process this week, I sought out another graduate student at GSU to try to understand one potential persona for my user experience research on the DALN later in the semester. For this purpose, I interviewed Fikko, a 3rd year Ph.D. student in rhetoric and composition whose research is somewhat relevant to literacy studies but is more in line with pedagogical applications of AI writing. While the DALN doesn’t yet seem to have anything related specifically to AI and its intersections with literacy, it does offer narratives related to digital literacy and writing with technology. The user testing I did earlier in the semester emphasized that while these users may have some use for what the DALN offers, it may be a difficult resource to engage with for those outside the discipline. So, this interview set out to accomplish two major goals: to come to an understanding of the graduate student researcher persona, and to glean some perspectives on what those types of users want and need from their research tools.

Interview

D: So, I’m gonna ask you a couple questions. It’s only like six/seven questions that you need to answer, so it’s not gonna take too terribly long. So, starting with demographic details, can you tell me about what you do at GSU, your research process, that sort of thing?

F: Yeah, so I guess my name is Fikko Soenanta. I’m a, third, coming towards fourth year Ph.D. student slash candidate. I am in the Rhet/Comp concentration in the English department and I am done with my course work, so what I mostly do is teach lower division English classes. 

D: What would you say your research interests are? 

F: So my research interests are AI pedagogy, kind of just the uses, or lack thereof, of AI in a composition classroom. I think along the lines of both how we can integrate it into class and conversely how we might be able to regulate that as needed.

D: So as you’re going through your research and acquiring, like, different sources, primary and secondary, what would you say are some of your most common research methods and what sort of databases do you use, or if you use archives what sort of archives do you use? 

F: The nature of my research makes it very difficult to really use a lot of traditional sources that means a lot of academic paper repositories, like Jstor or Ebscohost, have never really worked very well by virtue of my topic being so new that a lot of people haven’t been properly publishing yet, and that means a lot of my research, at least source-wise, ends up coming from a lot of primary sources, a lot of non-academic, non-peer reviewed sources. It’s much more like firsthand accounts with my students, a lot of social media, a lot of just popular articles that are more on the pulse. Some number of that is a good handful of white papers and papers that have already been published on AI.  

D: You mentioned firsthand accounts. When you think about the firsthand accounts that you’re engaging with, I’m guessing that you’re mostly engaging in things like surveys, interviews, questionnaires, observations?

F: A lot of that, yeah.  

D: So, one of the things that the Digital Archive of Literacy Narratives does is collect narratives, so not traditionally scholarly sources, they’re not peer reviewed, but we go through that and verify that they meet the requirements for being uploaded to the archive. So, one of the main purposes of the archive is to collect those firsthand accounts. What are some of the goals that you have when you engage with something like a narrative, something like an interview? What are you looking to gain from those sources, if you can give me a brief rundown?  

F: Yeah, so I think the part here that I can speak of that’s most relevant is that I’m currently doing like a test run of using AI in my classroom, and there’s a heavy reflection component basically built into every single submission, because part of my course design is that reflective writing is an opportunity to push back or kind of like regulate AI a little bit so we can delineate what is acceptable or not. So, reflection is a kind of like primary artifact from students. I can’t use the ones that I’m doing currently (for research) because I haven’t quite gotten IRB approval and such, but in a future version of this class, I’m hoping to actually be able to use that as a primary source. I think there’s a bit of concessions between what I’m looking for as a pedagogue and what I’m looking for as a researcher, and I think that basically involves things such as trying to really see how their writing might have been transformed with the use of these new tools. Especially what parts of writing they struggle with, how AI might have affected that, or whether it helped them a lot or not at all. Along those lines. Does that answer the question?

D: You answered the questions and a bit of the next one, actually. I was going to ask you about some of the challenges that you encounter with your research process as well. You mentioned needing IRB approval in some cases, in other cases you’re using somewhat untraditional sources, so you’ve hit on that in a couple of your answers to other questions and it seems like those are some of your basic challenges.  

F: Yeah, yeah. Something I can add to that is that, research-wise, there is a human aspect of it that is a bit of a double-edged sword. I think it really helps create much more novel, much more interesting, much more revealing takeaways and conclusions from the research, but on the other hand, it does mean that I, in some way, place the burden of research not entirely on myself, but also onto students. That has some degree of control from me. I guess in a sense that a lot of times that involves relying on there being enough students to cooperate, students who are willing to participate in the research, getting the right class schedules so that I’m able to conduct the research in the first place. There are a lot more moving parts that might prevent or extend the research.

D: So, if there was something that was available to you, that would give you the opportunity to engage with similar reflections in some way, like the thought processes that students have as they move through their writing process, would that be something that would be valuable to you? 

F: I think it might be one of those areas where my research can be refined further on my end. My reaction is that I would think that it’s helpful, but maybe not completely pertinent by virtue of I’m really looking for reflections of students who are working with AI specifically. But on the other hand, there is merit to seeing how students without the tools kind of think about like, the writing, of course. That’s a bit that has its own kind of pitfalls and fallacies to kind of compare them one to one, which is why I’ve been shying away from really having like a comparative approach to my research. A little bit of that might go a long way to seeing how students might work with or deal with certain aspects like drafting, researching, when they’re using AI compared to when they’re not.

D: That is helpful! And I appreciate the distinction that you’re making there as well, that the focal point of the reflection or narrative is extremely crucial to your research. I think we’re probably a bit of a ways off of people actually looking at AI as part of their literacy… They’ve written about digital literacy but it’s in a limited capacity that they’ve discussed how AI is actually implemented in literacy practices; that and whether or not we’re ever going to view the use of AI as active literacy. Moving on…. 

D: If you did have something available to you that was similar to Jstor or EBSCO, what are some of the positive and negative aspects of those services that immediately come to mind? 

F: I’ve always been a fan of those platforms. I think, like, they’ve always been helpful. I like the robust search features, between just, you know, filtering the dates, filtering logical operators, that sort of thing. On the other hand, I think there is kind of like an academic language to it that I think like, makes it like, not the most intuitive to use for people who are not yet part of this community. In a sense, I think a common frustration I experience is that I want to do research about a topic that I’m not very well versed in, but that is adjacent to what I’m trying to work with. Which is why I’m trying to learn more about it, but as someone who is not a scholar of that field, I am not really familiar with the proper terminologies or keywords, relevant figures in the field, that makes it a lot more difficult to really find what I’m looking for. I know what I’m looking for is out there, because I’ll be shocked if no one has written about this before, but I can’t seem to find it by virtue of I’m not really sure what keywords they are gonna be using. For an example of this, my research is kind of using a lot of examples of classroom-based research on writing processes, which I understand there aren’t that many present. There should be those out there, but I’m struggling to even find things along those lines by virtue of not really knowing what the right terms are. 

D: So if something like Jstor or similar databases had a section of their website specifically dedicated to this: here are the names you need to know and some seminal works that have been produced.

F: It will be valuable, but maybe more of a complement that’s independent of the database proper. I don’t really have an answer to that. I don’t think it’s one that I can quickly point to and go, this is how we solve this problem, because I think it is complex, right?

D: Yeah, I just ask because it’s an interesting crossover with an earlier user testing assignment for this class. I had one of my roommates, a neuroscience M.A. who’s currently researching dyslexia, look at the DALN, and they struggled with bridging the gap between that foundation and the terminology of literacy studies. But that’s pretty much it for the questions I had prepared; do you have any questions or anything you’d like to add?

F: No not really, I think we’ve had this sort of conversation in the past a little bit before, so I’m reasonably familiar with the DALN. No questions at the moment, yeah.  

D: Well, thanks so much for your time! See you around. 

F: Good luck with stuff.  

 

Reflecting on the Interview

This interview gave me a lot more valuable information than I had originally planned on. It completely outdid my expectations with respect to the goals I had for it, giving me additional information to cultivate a more accurate persona of graduate student researchers, some of the challenges they face in approaching new disciplines, a common source of primary sources for pedagogical research among graduate student researchers, how the DALN could enhance users’ understanding of literacy studies terminology and core theories, and most importantly some basic wants and needs of those in this persona in relation to their preferred databases. I’ve done some interviewing in the past as part of other research projects, but most of those experiences were focused on gaining an understanding of the interviewee’s specialized knowledge or experiences. The difference I noticed here is that while the interview provided all those great outcomes, it was still remarkably opinion based: the responses to questions were open ended and left room for interpretation and analysis after the fact, to say nothing of the additional considerations those outcomes raise for ongoing development and research. Suffice it to say that a single interview is not nearly enough to gain a full understanding of what I need to do effective user testing, or even to effectively design a user experience case study. It’s much more likely that I’ll need to interview someone from each persona, record overlaps, and implement testing methods that reinforce overlapping goals and motivations for using the site.

In terms of what I learned about interviewing itself, the biggest takeaway was that building rapport with your interviewee and allowing natural conversation to happen can be vital to a successful interview. Many of the key takeaways from the interview came not from the direct answers offered by Fikko, but from the conversation that evolved around them. While the conversations were brief (I didn’t necessarily want to spend hours transcribing; I’ve had to do that for a past end-of-semester project and it was grueling), they were the primary method of gaining clarification, bridging gaps, providing explanation and rationale for certain questions, and more. Without building positive rapport early on (it helps that I’m already friends with Fikko), I’m sure effective conversation would be nigh impossible, similar to talk show interviews where the host ruins their rapport at the start and it’s just 20 minutes of awkward conversation. Regardless, this was an exceptionally valuable experience and provided amazing takeaways for not only my interview skills, but also the project I’ll be working on for the remainder of the semester.

Feb 24

Background on Future Case Study

The case study that I’ve been working out over the past few weeks has centered around understanding varying user experiences in the video game World of Warcraft. Similar to how the CDC used the game to better understand human behavior during a pandemic (see the Corrupted Blood incident), this case study aims not at improving gameplay for a particular group but at extrapolating data to inform how future user experience research might address diverse audiences in terms of their ability to engage with content at varying levels of difficulty. The hope is that, extrapolated in this way, the data can be used to improve user experience research methodologies across other types of applications.

Known Personas

Since World of Warcraft is an open-world game with a non-linear storytelling design (you are not required to engage with any specific piece of content, essentially letting you craft your own narrative beyond the traditional one created by the game’s developers), there are a number of different ways that people play the game that are obvious at first glance, and even more if you spend time getting to know the community. For this case study, I’ve brainstormed a few personas from my own experiences playing the game, with the following section focusing on a survey that I may employ to better understand the nuances within these persona categories and to identify any personas I might have missed. It’s worth noting that these personas may overlap: players engage with multiple forms of content, and as a result they’re likely to occupy multiple personas. This is especially true when a player commits more time to the game and thus needs to engage with additional content to fill that time.

“Casual” Player

This is more of a distinction than an independent persona category, and the same goes for the “hardcore” player. A casual player is often a player who either doesn’t have a lot of time to play the game or doesn’t want to put most of their gaming time into this one in particular. These players are usually seen as engaging with content at a surface level (playing easier difficulty content, or limiting their engagement to a single type of content) and are generally perceived as worse at the game in comparison to more invested players. Despite this, these players make up the vast majority of the community.

Because these players engage with content at a lesser level than more invested players, they also engage with information about the game in different ways. There are a variety of fan sites dedicated to sharing information about World of Warcraft, such as wowhead, warcraft logs, raidbots, Youtube and Twitch pages, raider.io, and more, depending on the specific forms of content you’re engaged in. The casual player is assumed to engage with these sites less than more dedicated players do; however, survey data would be needed to see how a casual player engages with these sites in ways that might differ from “hardcore” players.

“Hardcore” Player

The “hardcore” player, as alluded to, differs from the “casual” player through the time dedicated to the game. These players tend to engage more with information found outside of the game, tend to engage in content that increases player power regardless of where that power comes from (making them more likely to engage in multiple content types), and likely engage in one content type at a higher level than the average player. In other cases, player power is less of a concern, and instead these players seek to demonstrate their ability at higher levels, either by completing difficult limited-time content or by competing with other players. Websites such as raider.io and warcraftlogs maintain dynamic leaderboards measuring player performance in different types of content, and these are often the outlets where these players compete rather than within the game itself.

A different type of player within this category is the “professional” player: a player who completes content at the highest level or competes with others in player-versus-player (pvp) content at the highest level. These offshoots of the “hardcore” player are usually sponsored by an esports organization and generate content that “hardcore” players use to improve their own play. These players are unique within the casual > hardcore > professional dynamic in that they are usually limited to three activities rather than the full scope of what the game offers: raiding, dungeoneering, and pvp.

New Player

New players fall outside the casual – hardcore dynamic in that they are usually working their way up to max level, a prerequisite for engaging with the other forms of content the game has to offer. For my user experience research, this is a category I’m interested in specifically because of the data they can offer regarding their ability to acclimate to new experiences. While the average new player experience is oriented around a slow introduction to game concepts through leveling, user testing for this group will look different for me due to time constraints. Instead of long-term testing, supplemental programs and interviews will be used to gauge what their experiences with the game might be like.

Raider

The following classifications are based around the types of content that users engage with. While these categorizations are more focused on the primary form of content the user engages with, there are circumstances where certain users will cross over into other areas of the game for various reasons.

Raiders are players who tend to focus on late-game content that tasks players with building a substantial group (10-30 players) to complete extended boss fights. These players most often join or create guilds (large rosters of players from which to form groups) in order to complete such content. Raids are semi-repeatable content: each boss can only be killed once per week, per difficulty level, making this a player type that engages with its primary form of content less often than with other forms of content that increase player power. A common crossover is between raiders and dungeoneers: raiders will complete dungeon content in order to obtain items that help them in raids, but usually at a limited level of difficulty compared to how dungeoneers engage with the same content.

Dungeoneer

Dungeoneers engage most often with Mythic+ dungeons: dungeons with scaling difficulty that increase in level when completed within a certain time limit. They exhibit similar behaviors to raiders in that they will complete raid content to get items that help them in dungeons, but this shouldn’t be seen as their primary form of engagement with the game. The most important aspect of this player type is that they engage in repeatable content; they can complete one dungeon and repeat it immediately afterward without any penalty. Because of this, they may invest more time into the game, but further surveying or interviews would be needed to know for sure.

Collector

Collectors focus less on completing difficult content and more on collecting various things within the game: mounts, toys, pets, armor appearances, achievements, titles, etc. These players have varied engagement with others, since most collectables can be obtained independent of content that involves other players, but more experienced collectors will cross over into every other player category due to unique items obtainable from raids, high-level dungeons, and pvp, or items that require extremely high amounts of in-game currency. Despite there being a finite number of collectables, the game’s expansive backlog of content means there’s almost always something these players can pursue. While dungeoneering and raiding offer some control over what can be completed if you have a consistent group, collecting items is often at the mercy of the game’s internal random number generator, with some collectors spending multiple years trying to get one item.

PVPer

PVP in World of Warcraft takes the form of 4 main game types: normal battlegrounds (team-based game modes such as capture the flag and king of the hill, without the need for a premade group), solo-shuffle arenas (3v3 arena matches without needing a premade team), rated arenas (2v2 or 3v3 arena battles with premade groups where users compete for leaderboard positions), and rated battlegrounds (10v10 team-based game modes where users compete for leaderboard positions). Despite the variety of game modes to choose from, most pvpers will choose one type to pursue. While other user categories will engage with other content types, pvpers have a separate progression of player power that exists solely within pvp, meaning they’re the one player group that often will not engage with those other types of content, or will do so only in a limited capacity.

Gold Farmer

Gold farmers have the sole goal of accumulating wealth within the game. These players will flip items on the market, sell carries through mid-to-high-level content, and in some cases provide loans to professional players (most often professional raiders). While pvpers are isolated due to their choice of content engagement, gold farmers are isolated due to the lack of need for other players to complete content with. As a result, these users will be more difficult to test for, and their behavior and play habits might need to be observed through means other than traditional user testing, surveying, or interviewing.

Role-player

The last common player type is the role player. These users engage primarily with the story crafted by the game’s developers through quests and other content types, in many cases creating their own original characters that they place within the larger scope of the game’s universe. This play type prioritizes player interaction on a social level, rather than as an attempt to complete content. As such, these users should be observed based on their in-game behavior far more than on how they perceive and overcome challenges: documenting where, when, with whom, why, and how they engage with other players, and possibly conducting interviews about their play habits, since each will likely be unique.

 

Feb 17

Why Surveys and Questionnaires? 

As UX researchers and designers, there are a number of reasons why we might employ surveys and questionnaires (I will default to “surveys” throughout this post, but in many cases this discussion is interchangeable with questionnaires, aside from the implications of calling it one or the other). These include, but aren’t limited to: gaining a better understanding of how and why audiences might use our end product, their current knowledge of the product or similar products, how users feel about specific design elements, why users made certain decisions about how to use a product, and what sorts of audiences are using or might use the product. Regardless of the questions asked, the end result is ideally an abundance of both qualitative and quantitative data that can inform next steps in development, dissemination, or approaches to similar projects in the future. Therein lies the purpose: these are yet another set of tools in the UX researcher’s and designer’s arsenal for better understanding how users interact with what we’ve created. While these are valuable tools, they come with their own sets of considerations, shortcomings, and idiosyncrasies that we have to be aware of to use them effectively.

Creating Surveys

The method of creating surveys is remarkably similar to how one might fashion interview questions or research questions in an academic setting. Depending on what you’re seeking to uncover through your survey (qualitative or quantitative data, simple or complex responses, unique or prewritten answers, etc.), your questions will be vastly different. For example, if you’re looking to aggregate quantitative data, you might focus on demographic questions with prewritten responses; if you want to demonstrate differences among your participants, you might leave each question open ended with a place for their own responses; or you might check whether participants are paying attention to a longer survey by including questions that could potentially contradict each other. In each of these circumstances, your questions will shift and change based on your intentions as the creator of the survey. Once you’ve figured out exactly what you’re looking for with your survey, you can move on to drafting questions and choosing a platform or medium.

Your questions are the backbone of your survey, and as such they need to be carefully considered throughout each step of the survey development process. As you draft your questions, consider organizing them around specific aspects of design or testing, which will garner more specific, actionable responses based on the users' experiences. You might set up range-based responses that measure how much a user agrees or disagrees with a set of statements, ask whether they had a positive or negative experience with a specific design decision the team is concerned about, or provide text boxes where they can offer additional insight into their choices. These questions help establish quantitative information (how many users agreed with a statement vs. disagreed with it) as well as qualitative information ("I agree because…"). However, in setting up these questions, there are a number of considerations to make to ensure that you're not damaging your data set while adding to it. 
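As a rough sketch of what tallying those range-based responses could look like, here is a minimal Python example. The statement, ratings, and field names are invented for illustration; the point is just that a single question can yield both a quantitative count and qualitative comments to code later.

```python
from collections import Counter

# Hypothetical Likert-scale responses (1 = strongly disagree, 5 = strongly agree)
# to the statement "The search feature helped me find what I needed."
responses = [
    {"rating": 4, "comment": "Filters were easy to spot."},
    {"rating": 2, "comment": "Keywords didn't match my terms."},
    {"rating": 5, "comment": ""},
    {"rating": 2, "comment": "Too many irrelevant results."},
]

# Quantitative: how many users agreed (4-5) vs. disagreed (1-2)?
ratings = Counter(r["rating"] for r in responses)
agreed = ratings[4] + ratings[5]
disagreed = ratings[1] + ratings[2]

# Qualitative: pull the free-text explanations for follow-up coding.
comments = [r["comment"] for r in responses if r["comment"]]
```

Here `agreed` and `disagreed` would each be 2, with three comments left to read through by hand.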

Regardless of the data you're looking to collect, your goal as the researcher or designer is to ask questions that are free of potential biases, preconceived notions about your product, unclear directions, leading phrasing, and superfluous words. By removing these characteristics from your questions, you should be able to keep your survey data untainted. For this reason, it can be fruitful to test your survey on others working on the same project, or to go through multiple passes of revision: verify that your questions are free of potential issues, identify users' prior experiences with your product or similar products, prune your questions until they are as concise as possible, and offer questions that your participants can only give viable responses to. The more you reduce these potential issues, the more valuable your data will be in making future decisions. 

Pre/Post-test Surveys?

Once you've planned out your questions, you must consider whether your study will benefit from both pre-testing and post-testing, or just one or the other. Both have their benefits and constraints, but the most pressing concern is whether participants will become fatigued and stop answering. For this reason, if you plan on using both, the pre-test survey should focus only on the information you absolutely need before starting your user testing. You might collect users' perceptions of their ability to use the product you're about to test with them, demographic data that can help inform the testing process, and information about why they might use the product; however, all of this pales in comparison to what you can gain from post-test surveys. With post-test surveys, you have the opportunity, as I've mentioned, to focus your questions on the specific design elements of the product, which can then be used to gain a deeper understanding of what needs to be changed or left alone. Finally, after all of these considerations, you're ready to disseminate your survey and collect your data. 

Understanding and Communicating Survey Data

After collecting responses, you have to communicate the newfound data to your stakeholders in a meaningful way, often through tables, graphs, and quotes. However, there are instances where the collected data itself warrants additional scrutiny before you can judge whether it's valuable. It's tempting to say, almost immediately, that once a survey is complete you have valuable data, but there are ways of checking it against the tests you've completed as well. Pay closest attention to: correlation and causation relationships between data points; questions with massively skewed responses and no associated causal relationship (which may indicate a bad question); an overabundance of positive or negative responses with no outliers (and if there is an outlier, interview them to understand their perspective); automated graphs that may make differences between data points seem larger or smaller than they actually are; and your math, to verify that any statistics are properly calculated. If any of these issues show up in your data, it's worth taking the time to survey a second set of users to corroborate or contradict the original responses, though you should limit changes between surveys and testing periods to individual variables where possible, so you can isolate where the issue arose. 
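One of those checks, flagging questions with heavily one-sided responses, is simple enough to sketch in code. The threshold, questions, and ratings below are all invented; a skew flag like this only marks a question for human review, it doesn't prove the question was bad.

```python
def flag_skewed(question_ratings, threshold=0.9):
    """Flag a Likert question if nearly all responses fall on one side.

    A heavy one-sided skew with no causal explanation may indicate a
    badly worded or leading question rather than a real user consensus.
    """
    n = len(question_ratings)
    if n == 0:
        return False
    positive = sum(1 for r in question_ratings if r >= 4) / n
    negative = sum(1 for r in question_ratings if r <= 2) / n
    return positive >= threshold or negative >= threshold

survey = {
    "Q1: search was easy to use": [5, 5, 4, 5, 4, 5],  # suspiciously uniform
    "Q2: the layout was clear":   [2, 4, 3, 5, 1, 4],  # healthy spread
}

suspect = [q for q, ratings in survey.items() if flag_skewed(ratings)]
```

With this fake data, only Q1 would be flagged for a second look.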

Once you're done testing, verifying your data, and corroborating your data, you can start thinking about how best to communicate it. Most importantly, consider how your data will look in different formats. A line graph works well when the data supports a clear positive or negative trajectory (and somewhat for stagnating data that shows no change over time); a comparative bar graph works well to show differences in raw sums; a pie chart works well to display percentages; and if you have a lot of qualitative data and want to quantify recurring words and phrases, you might build a table from data aggregated in a program like NVivo. Needless to say, just as there are a multitude of ways to create surveys, there are a multitude of ways to take the ensuing data and communicate it to your stakeholders. Whichever method you choose, consider the benefits and constraints of each option and ensure clarity above all else. 
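For the last case, quantifying recurring words, a rough stand-in for the kind of frequency table NVivo produces can be built with a few lines of Python. The answers and stopword list here are invented for illustration, and real qualitative coding involves far more judgment than a word count.

```python
from collections import Counter
import re

# Hypothetical open-ended responses from a post-test survey.
answers = [
    "The search was confusing and the keywords were inconsistent.",
    "Search results felt random; better keywords would help.",
    "I liked the layout but the search needs work.",
]

# Tokenize, drop filler words, and tally recurring terms.
stopwords = {"the", "and", "was", "but", "i", "a", "would", "felt"}
words = Counter(
    w for answer in answers
    for w in re.findall(r"[a-z']+", answer.lower())
    if w not in stopwords
)
```

Here "search" (3 mentions) and "keywords" (2) would top the table, a hint at where the design conversation should start.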

 

Feb 10

The Digital Archive of Literacy Narratives

For this week's assignment I wanted to do a bit of user testing on the Digital Archive of Literacy Narratives (DALN) archive and blog to determine how they might be changed to improve user experience. The sites let users investigate phenomena associated with literacy development and practice through user-uploaded narratives of their experiences with literacy, and also showcase developments, events, and how others are using the archive in a variety of contexts. The DALN is currently undergoing a new cycle of development to enhance the user experience and infrastructure and ensure the longevity of the project. So, to reconfirm what has been discussed by personnel working on the project, I wanted to see if a user new to the archive could make good use of the site based on its current functionality. To this end, I enlisted one of my roommates, a graduate of the neuroscience program at GSU who engages in lab-based research and isn't familiar with archives, to see how they would approach two common tasks that the sites are used for. 

Conducting the Test

I split this 10-minute foray into user testing into two parts: a test to determine how a user would find a literacy narrative with elementary school experiences as its topic, and another asking them to find examples or resources for integrating the archive into a class on literacy. I gave them these basic instructions without much additional guidance: no explanation of what a literacy narrative was, how to navigate the site, or a specific end goal for the testing. The goal of reducing frontloaded information was to see how a new user would navigate the site independent of guidance. They were eventually able to find what they were looking for, but they struggled to identify which narratives perfectly satisfied the goal of finding a narrative about elementary school. This seemed to be mostly a limitation of their own expectations (a number of narratives related to the topic appeared during their search, but none specifically titled with "Literacy Narrative"). A similar issue arose when they were trying to find a blog post related to classroom utilization: they searched the titles of the posts for specific phrasing that would make it abundantly clear the post fell within the parameters of the test. After we finished testing, I asked them a few questions to see what they thought of the site and whether they had any recommendations:

  1. What was a challenge in finding content related to what I asked you to find?
  2. Was there anything you would have liked clarified before testing started?
  3. How could the search functionality better support your end goal?

These questions made it easier to understand their thought processes during the test and explained some of the difficulties I saw when I watched back over the screen recording. 

 

Identifying Issues

In this section I want to discuss some of the issues that arose from the websites’ design, or in some cases issues with the testing. While the test overall felt successful in confirming some of the issues that the DALN development team has already identified as being key areas of future development, it also elucidated some other considerations that we will likely need to make as we move forward. 

With the Site

The main issue this test revealed about both sites is a lack of organization, in the archive and in the blog alike. While the user was able to find examples of what they were tasked to look for, they struggled to identify what would have been the perfect fit for what I had told them to find. In the follow-up questions, they described difficulties stemming from search functionality that was limited compared to the databases they had used in the past. The current search is entirely keyword based, yet each narrative lacks distinct keywords associating it with known phenomena such as literacy practice, literacy development, literacy sponsorship, TESOL, etc. This caused issues for the participant as they tried to identify which narrative was the best example for the test, similar to the issues a student or researcher might have in trying to find a narrative for their own projects. 
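To make the idea concrete, here is a minimal sketch of how a controlled keyword vocabulary could support that kind of search. The narrative titles, tags, and function are entirely invented for illustration and do not reflect how the DALN's search is actually implemented.

```python
# Hypothetical narrative records tagged with a controlled keyword vocabulary.
narratives = [
    {"title": "Learning to read with my grandmother",
     "keywords": {"literacy development", "family"}},
    {"title": "Teaching English abroad",
     "keywords": {"TESOL", "literacy sponsorship"}},
    {"title": "My first library card",
     "keywords": {"literacy development", "elementary school"}},
]

def search_by_keyword(records, keyword):
    """Return every narrative tagged with the given controlled keyword."""
    return [r["title"] for r in records if keyword in r["keywords"]]
```

With consistent tags, a search for "literacy development" would surface both relevant narratives even though neither title contains the phrase, which is exactly what my participant's title-scanning strategy missed.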

As we moved on to the blog, similar challenges arose because the blog is organized chronologically. The user navigated the blog searching for a post related to classroom application, which appeared explicitly in the title of a post on the second page, but it leaves me wondering whether there could be a better method of organizing the blog posts in the future. If posts were organized by overarching topic, it might be possible to make certain examples easier to find while also creating a more organized history of the archive's development. Currently, blog posts cover examples of classroom application, introductions of new personnel, updates on web development, and announcements about conferences and other events the DALN will attend. If the blog were organized along these lines, we could continue testing to determine what a more frequent user looks for in their experience and cater to those needs more closely. 

With the Test

The test went mostly well; however, it became clear to me how different participants will need different levels of instruction. If my roommate were more familiar with archival research, it might have been possible to complete this test without any guidance, but I didn't consider what additional information my participant might need before starting. This was especially true where I used terminology they weren't familiar with. When I mentioned literacy narratives, they didn't ask for clarification, but when I asked post-test whether they would have liked any, they acknowledged that a bit of explanation of what the DALN is for and what literacy narratives are could have helped them in their search. In the future, I think I will need to provide at least some guidance or background information for users, rather than seeing how they approach the test based on instructions alone. 

Reflection

This initial testing experience was remarkably helpful. I found that my conception of user testing needs to meet users in the middle, accounting for their own experiences with similar applications and media. In cases where they are completely new to something, I believe some background information may be needed for more complex tests, or where the app is designed around specialized knowledge (such as an archive on a specific research topic). To this end, I'll likely spend more time reconsidering my participants for future tests, focusing on how much they might already know about using an application and then developing tests suitable to their skill levels. While this might make testing easier for participants, I'm left wondering how much information you can realistically give participants without invalidating the results of user testing…