The Digital Archive of Literacy Narratives
For this week’s assignment I wanted to do a bit of user testing on the Digital Archive of Literacy Narratives (DALN) archive and blog to determine how they might be changed to improve user experience. The sites allow users to investigate phenomena associated with literacy development and practice through user-uploaded narratives of their experiences with literacy, or to see developments, events, and how others are using the archive in a variety of contexts. The DALN is currently undergoing a new cycle of development to enhance the user experience and strengthen its infrastructure to ensure the longevity of the project. So, to reconfirm what has been discussed by personnel working on the project, I wanted to see whether a user new to the archive could make good use of the site based on its current functionality. To this end, I enlisted one of my roommates, a graduate of the neuroscience program at GSU who conducts lab-based research and isn’t familiar with archives, to see how they would approach two common tasks that the sites are used for.
Conducting the Test
I divided this 10-minute foray into user testing into two parts: a test to determine how a user would find a literacy narrative with experiences in elementary school as its topic, and another asking them to find examples or resources showing how to integrate the archive into a class on literacy. I gave them these basic instructions without much additional guidance or explanation of what a literacy narrative was, how to navigate the site, or a specific end goal for the testing. The goal of reducing frontloaded information was to see how a new user would navigate the site independent of guidance. They were eventually able to find what they were looking for, but they struggled to identify which narratives fully satisfied the goal of finding a narrative about elementary school. This seemed to be mostly a self-imposed limitation on what they thought would count (a number of narratives related to the topic appeared during their search, but none that were specifically titled “Literacy Narrative”). A similar issue arose when they tried to find a blog post related to classroom utilization: they searched for specific phrasing in post titles that would make it abundantly clear the post fell within the parameters of what I had asked them to find. After we finished testing, I asked them a few questions to see what they thought of the site and whether they had any recommendations:
- What was a challenge in finding content related to what I asked you to find?
- Was there anything you would have liked clarified before testing started?
- How could the search functionality better support your end goal?
These questions made it easier to understand their thought processes during the test and explained some of the difficulties I saw when I watched back over the screen recording.
Identifying Issues
In this section I want to discuss some of the issues that arose from the websites’ design, and in some cases from the testing itself. While the test overall felt successful in confirming some of the issues that the DALN development team has already identified as key areas of future development, it also surfaced some other considerations that we will likely need to address as we move forward.
With the Site
The main issue that this test revealed is a lack of organization in both the archive and the blog. While the user was able to find examples of what they were tasked to look for, they struggled to identify what would have been the perfect fit for what I had told them to find. When I asked the follow-up questions, they explained that the search functionality felt limited compared to the databases they had used in the past. The current search is entirely keyword-based, but individual narratives lack distinct keywords associating them with already known phenomena such as literacy practice, literacy development, literacy sponsorship, TESOL, etc. This seemed to cause issues for the participant as they tried to identify which narrative was the best example for the test, similar to the issues a student or researcher might have in trying to find a narrative for their own projects.
As we moved on to the blog, similar challenges arose because the blog is organized chronologically. The user navigated the blog searching for a post related to classroom application, which appeared more explicitly in the title of a post on the second page, but this leaves me wondering whether there could be a better way of organizing the blog posts in the future. If posts were organized by overarching topics, it might be possible to make certain examples easier to find while also creating a more organized history of the archive and the project’s development. Currently, blog posts cover examples of classroom application, introductions of new personnel, updates on web development, and announcements about conferences and other events the DALN will attend. If the posts were organized along these lines, we could continue testing to determine what a more frequent user would be looking for and cater to those needs more closely.
With the Test
The test went mostly well, but it became clear to me that different participants will need different levels of instruction. If my roommate were more familiar with archival research, it might have been possible to complete this test without any guidance; however, I didn’t consider what additional information my participant might need before starting. This was especially true where I used terminology they weren’t familiar with. When I mentioned literacy narratives, they didn’t ask for clarification, but when I asked post-test whether they would have liked any, they acknowledged that a bit of explanation of what the DALN is for and what literacy narratives are could have helped their search. In the future, I think I will need to provide at least some guidance or background information for users, rather than seeing how they approach the test based on instructions alone.
Reflection
This initial testing experience was remarkably helpful. I found that my idea of user testing needs to meet in the middle with users’ own experiences with similar applications and media. In cases where participants are completely new to something, some background information may be needed for more complex tests, or where the application is designed around specialized knowledge (such as an archive on a specific research topic). To this end, I’ll likely spend more time reconsidering my participants for future tests, focusing on how much they might already know about using an application and then developing tests suited to their skill levels. While this might make testing easier for participants, I’m left wondering how much information you can realistically give participants without invalidating the results of user testing….