Post-Fieldwork: Back in the UCLA saddle
- Mari
- Apr 8, 2016
- 3 min read
It has been great to be back at UCLA, now with a full quarter of teaching, data analysis, and dissertation writing (disertando, por favor: "dissertating, if you please") under my belt.

I spent January writing up my research methods chapter and have turned it in to my readers for some feedback; I look forward to seeing what they have to say! I figured that since that material was very fresh in my mind (in that chapter, I discuss how I selected my neighborhoods, describe them, and explain how I recruited participants and what we actually did during data elicitation, etc.), it would behoove me to get it down on paper ASAP.
In February I transitioned into a bit of data analysis, analyzing the results of the control perception task. I was feeling a little rusty in Praat (as I was segmenting the 90 speech stimuli to run through VoiceSauce, to obtain acoustic measures of breathiness such as H1-H2 and H1-A1), and suuuuper rusty in R, so there was a bit of a learning curve. Still, I was able to present the results of my analysis at the UCLA phonetics seminar on Leap Day and get some great feedback from the members of the audience (Thanks!). I hope to incorporate their suggestions into the analysis of my bigger perception task (more on that below).
In March I transitioned again to preparing for a coauthored presentation with my friend and colleague Franny Brogan at the 8th International Workshop on Spanish Sociolinguistics, in San Juan, Puerto Rico (which is next week, already)! For this presentation, we segmented and measured /s/ in one of the production tasks that we’d both done during our respective fieldwork (mine in Santiago, and Franny’s in El Salvador). Again, there was a bit of a learning curve, but both of us will be doing this for our dissertation data analysis when we analyze /s/ production in the sociolinguistic interviews, so it’s been good practice! I look forward to feedback on this presentation that we can then include in the write-up.
Some wins:
~ One of the things that had been vexing me is how to decide which participants' results to include in my main perception task analysis. A total of 63 people participated in this experiment, but not everyone completed the task correctly, judging by their responses to items that everyone should have gotten 100% correct. Scores on those items ranged from 100% down to about 30% correct, and I wasn't sure where to draw the cutoff. Thanks so much to Kie Zuraw for her comments and suggestions. What I eventually did was use both the percentage correct on these 'screener' items and the participants' d' (dee-prime) scores to find the 'sweet spot': the number of participants I could include to keep my numbers up, without letting the data get too noisy. As it happens, there was a significant difference in d' scores (according to unpaired t-tests) between the participants who performed at >78% on the screeners and those who performed below that cutoff, leaving me a total N of 38 participants! Huzzah!
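For anyone curious what that kind of check looks like in practice, here is a minimal sketch (in Python rather than R, and with made-up toy numbers, not my actual data) of computing d' from hit and false-alarm counts and then running an unpaired t-test between the two groups of participants:

```python
# Hedged sketch: d' (dee-prime) per participant, then an unpaired t-test
# between participants above vs. below a screener cutoff.
# All counts below are hypothetical, for illustration only.
from scipy.stats import norm, ttest_ind

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (adding 0.5 to each cell) keeps rates away
    from 0 and 1, where the z-transform would be infinite.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical d' scores for participants above and below the cutoff
above_cutoff = [d_prime(18, 2, 3, 17), d_prime(19, 1, 2, 18), d_prime(17, 3, 4, 16)]
below_cutoff = [d_prime(12, 8, 9, 11), d_prime(11, 9, 10, 10), d_prime(13, 7, 8, 12)]

# Unpaired (independent-samples) t-test between the two groups
t_stat, p_value = ttest_ind(above_cutoff, below_cutoff)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The same computation is straightforward in R (e.g. `qnorm()` for the z-transform and `t.test()` with `paired = FALSE`), which is what my actual analysis uses.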
~ As I mentioned above, I felt pretty rusty in R in February, so I'm thankful to Megha Sundara for lighting a fire under me to get cranking on becoming more proficient in R. I'm also thankful for the dplyr and ggplot2 packages and RStudio!
Thanks for reading! I hope to post more regular updates as I wade through my data, and I hope they'll be helpful to others in similar situations!