Time For Some Updates

I have been a member of the SIF family now for a little more than two months, so I guess it’s about time to go through all of my projects for this semester and give you updates on each.

1) Hybrid Pedagogy Promotional Videos

This project has been straightforward from the beginning. The general idea is to record a series of interviews with faculty members who are experienced with teaching that blends online and offline (i.e., in-class) activities, in other words, hybrid teaching. For this, we came up with a set of focused interview questions, and over the course of the first six weeks we conducted various interviews with faculty members, compiling a great amount of material to work with. We are currently in the post-production phase of this project, or should I say of its first stage, since we believe that promoting hybrid teaching should also be considered from the students' perspective. To that end, we are planning to conduct more interviews with students next semester in order to balance the information we have so far received from faculty. I am currently learning Adobe Premiere Pro so that I can also help out with the post-production process.

2) Outreach and Documenting

Similar to the project above, this one is also clearly situated in the world of promotion. The basic premise is to promote places at GSU where students have the opportunity to access, use, or check out technology devices. Find out more by reading this great post from my colleague, Amber. Granted, the GSU website already provides a great amount of information about these places, but we came to the realization that it didn't really showcase these spaces "in action." So, during our early group meetings we noted all of the technology sites currently available at GSU and quickly homed in on the Digital Aquarium, the Aderhold Learning Lab, and the Exchange. For each space, we had planned to shoot short, one-minute videos highlighting not so much what these spaces offer as how students might use the available devices. Unfortunately, as this idea began to take shape, we learned that each of these spaces is going to undergo major design changes, so any video we recorded would have had a pretty short life span, since it would have featured the space in its current state. That sent us back to the drawing board. We will now focus solely on the new CURVE space in the library, in order to give GSU faculty incentives to assign class activities that would bring their students to CURVE.

3) 3D World and Gaming Environment

This project is quite unique. The basic idea is to virtually re-create a city block in Atlanta, the one where Classroom South is located to be precise, and show how this block might have looked in the 1930s. Check out this great post by my colleague, Robert, to learn more about the virtual environment we are creating. In addition, we are planning to populate that space with objects and characters that students can interact with to learn more about the history of Atlanta. Furthermore, we also hope to have writing students create narratives and stories that further help shape this virtual environment. During the first couple of weeks of this project, I was mostly involved with consulting archival sources such as photographs and newspaper articles to help our production team design the space in a game engine called Unity. We have now, however, reached out to teams at Emory that have been working on a similar mapping project, in order to combine our resources and see how we can help one another. My responsibility now is to facilitate that discussion and to help add content to the virtual environment.

4) Deliberation Mapping Tool

For this project, we are currently in the conceptual design stage. To give you a general idea of what this project is about, I want to refer to this great post by my SIF colleague Nathan: "Deliberation Mapping – Shaping Online Discussion." Over the course of the last two weeks, we had some great meeting sessions, about which Siva and Ram have written engaging blog posts: "Integration and Finalization!!!" and "Where is the big picture?" The question we are presently dealing with is how to visually represent the different ways a user participates in a deliberation. Below are some impressions from today's meeting:

[Photo: Justin giving directions.]

[Photo: Figuring out participation parameters.]

At this stage, our main goal is two-fold: we need to find ways of facilitating ease of use for both students and their instructors, and we need to come up with ideas for keeping asynchronous deliberations from becoming messy from a visual perspective.

5) Data Visualization Workshop for Research Purposes

This is a project that emerged in the course of October. At the beginning of the semester, I had been tasked with creating a software tool that would help a researcher visualize vocal parameters such as volume, pitch, and timbre. Fortunately, I was able to guide the researcher to various audio production programs and tools that already offer those kinds of visualizations. So, once that project was completed, I created the "data visualization workshop" project together with Justin and Joe. The basic premise is to offer innovative ways for students and researchers to evaluate the results they retrieve from academic databases. Oftentimes, when we access the GSU library to search for sources, we type in keywords and receive long lists of results. What if we had a way to bring those results into a visual environment and easily see how the search term relates to, let's say, publication venues, its use in research studies over time, or the kinds of disciplines that do the most research on it, especially when it's a topic that is often studied in interdisciplinary ways? Translating my findings into a workshop was the logical conclusion. However, in order to determine the kinds of programs necessary to visualize database search results, I first need to identify how best to export those results from the database. In the course of the next week, I am planning to meet with database experts at the GSU library regarding this issue. Once I know what's possible, I can move further with this project.
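To give a sense of the kind of summarizing that would feed such a visualization, here is a minimal sketch in Python. Everything in it is hypothetical: the record layout, field names, and sample entries are invented stand-ins for whatever export format the library databases actually provide, which is exactly what I still need to find out.

```python
from collections import Counter

# Hypothetical export: each record is (title, venue, year, discipline).
# Real exports from library databases (CSV, RIS, etc.) will look different;
# this only illustrates the idea of summarizing results for visualization.
records = [
    ("Paper A", "Journal X", 2008, "Linguistics"),
    ("Paper B", "Journal Y", 2008, "Computer Science"),
    ("Paper C", "Journal X", 2011, "Psychology"),
    ("Paper D", "Journal Z", 2013, "Linguistics"),
]

def hits_per_year(recs):
    """Count how many search results fall in each publication year."""
    return Counter(year for _, _, year, _ in recs)

def hits_per_discipline(recs):
    """Count results by discipline to see which fields study the term most."""
    return Counter(disc for _, _, _, disc in recs)

# A simple text "bar chart" of the term's use over time; an actual
# workshop would hand these counts to a proper plotting tool.
counts = hits_per_year(records)
for year in sorted(counts):
    print(f"{year}: {'#' * counts[year]}")
```

The same counting idea extends to publication venues or any other field the export happens to include.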


I feel very fortunate to be part of the SIF team. I have already learned a lot and I am eager to see how all of these projects will turn out. That’s all for now.

Cheers,

Thomas

Let Me Visualize Your Words, and I’ll Tell You Who You Are

Consider the following everyday situation: you've bought a defective item, and now you are discussing return policies with a customer service agent over the phone... or maybe you are not discussing return policies at all, but placing an order over the phone. In any event, ask yourself: have you ever wondered what the agent on the line might actually look like? I am sure we've all done that at one point or another, and I would also speculate that nine times out of ten our intuition would fail us, and the agent on the other end doesn't look anything like the image we crafted of him or her in our minds. Now, why might this be worth noting? The inference we can draw is that there are certain cues embedded in the human voice which, when all we have is sound, motivate us to craft an idea of the speaker in our heads. Moreover, not only do we imagine physical attributes, we also equip the voice with certain characteristics that the speaker presumably has. Succinctly put, when all we have is the sound of a voice, we are often tempted to fill in the blanks of the speaker's personality. And this leads us to the well-grounded assumption that the human voice is shaped by the relative relationship of various parameters such as

pitch, tone, timbre, rhythm, inflection, and emphasis among others,

which—during the act of listening—leave us with impressions about the speaker that go beyond content and context as well as the mere level of sound. The only problem is: the human voice is a fleeting thing, which makes it virtually impossible to measure and analyze the interplay between all of those parameters in real-time. Not so with pre-recorded speech, of course, which provides researchers with a potent avenue to capture and visualize the dynamic interplay of parameters that shape the human voice, and the way it is perceived.

To be honest, however, up until a week ago, that topic had never really crossed my mind. But then I had my first couple of SIF meetings, and I am now working on a project designed to find (better), more innovative ways of visualizing prerecorded voices along a set of specific parameters, enabling researchers to export parameter-related data for quantitative as well as qualitative analysis. How fascinating is that?!

I have put "better" in parentheses for a reason: there are already quite a number of programs available that do just that.

[Screenshot: the word "Rosebud" visualized in Praat.]

What you see above is a screenshot I took with the free software Praat (click the link to download). With this little tool, one can not only record mono and stereo sounds but also load pre-recorded sounds in order to visualize a couple of the parameters I've listed above. The sound I have chosen for this example is a single word, and probably one of the most enigmatic utterances in cinematic history: "Rosebud" from the movie Citizen Kane (1941). The main character, Charles Foster Kane, utters this word with his last breath, and throughout the movie, audiences ponder not only the meaning of the word but also Kane's relationship to it. I don't want to give away any spoilers because it's such a great movie, but what I can reveal is that Kane is fond of what "Rosebud" refers to. Now, moving back to the image: what if there were a way to visualize vocal parameters in such a way as to draw relatively accurate inferences about the emotional quality of different types of utterances?

What we can visualize with Praat, first and foremost, is a waveform in the upper half, whose amplitudes tell us something about the volume and emphasis of the utterance. Where things become more interesting, however, is in the lower half of the image. Here, we have access to visualizations of certain sound-related parameters such as pitch (marked in blue) and intensity (marked in yellow). Applications like Praat are commonly used in the field of speech therapy.
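To make these two quantities a little more concrete, here is a toy sketch in Python of how intensity and pitch can be estimated for one short analysis window of sound. This is not Praat's actual algorithm, just a minimal illustration under simplifying assumptions: the "voice" is a synthesized 220 Hz tone, intensity is taken as root-mean-square amplitude, and pitch is found with a crude autocorrelation search.

```python
import math

# Synthesize one short frame of a 220 Hz tone at an 8000 Hz sample rate.
# This stands in for one analysis window of a recorded voice.
RATE = 8000
FREQ = 220.0
frame = [math.sin(2 * math.pi * FREQ * n / RATE) for n in range(800)]

def rms_intensity(samples):
    """Root-mean-square amplitude, the basis of an intensity curve."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def estimate_pitch(samples, rate, fmin=75.0, fmax=500.0):
    """Crude pitch estimate: find the autocorrelation peak within a
    plausible voice range and convert its lag back to a frequency."""
    best_lag, best_corr = 0, 0.0
    for lag in range(int(rate / fmax), int(rate / fmin) + 1):
        corr = sum(samples[i] * samples[i - lag]
                   for i in range(lag, len(samples)))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return rate / best_lag if best_lag else 0.0
```

Running these over successive frames of a recording is, in spirit, how the yellow intensity curve and the blue pitch contour in the screenshot come about, though real voice analysis has to cope with noise, unvoiced segments, and octave errors that this sketch ignores.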

Current research on sound visualization is trying to capture vocal expression on a quantitative as well as qualitative basis. Here is just one interesting paper on the topic, in which the authors are "particularly interested in the paralingual (pitch, rate, volume, quality, etc.), phonetic (sound content), and prosodic (rhythm, emphasis, and intonation) qualities of voice" (157): "Sonic Shapes: Visualizing Vocal Expression."

And this is one of the projects I'm going to work on this semester. Again, up until a week ago, those questions really hadn't crossed my mind. But this is what it means to be a SIF fellow at Georgia State, I guess. Not only do you get to work with a group of very smart people, you are also confronted with new things, new questions, and new ways of seeing. And if all goes well, you eventually start looking for even newer and more exciting things yourself that you're eager to share with others.

Speaking of sharing, I want to make it a habit of always ending a post with something worth checking out. So, if you haven't seen Citizen Kane yet, then by all means, do so!