The Effect of Distributed Mentoring on Fanfiction


Figure (by Diana Zhang): Lexical diversity (MTLD) by the number of reviews received by the author. The size of each circle indicates the amount of data in each group.

Human Centered Data Science Lab, January 2016 - Present

Collaborators: Dr. Cecilia Aragon, Dr. Sarah Evans, Jihyun Lee, Diana Zhang, Ruby Davis.

Synopsis: Online fanfiction repositories attract millions of writers and readers worldwide. The largest of these repositories has accumulated a rich corpus of about 61.5 billion words of fiction over twenty years, rivaling even the Google Books fiction corpus. The site is an important learning environment for young writers, as it affords the networked giving and receiving of feedback, termed distributed mentoring. One way to measure author learning is the Measure of Textual Lexical Diversity (MTLD), which captures an author's range of vocabulary use. In a massive longitudinal study of texts by 1.5 million authors, I implemented MTLD in Python and performed statistical analyses in R, finding a correlative link between the cumulative number of reviews an author has received and the lexical diversity of their writing.
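The MTLD computation mentioned above can be sketched as follows. This is a minimal illustration of the standard algorithm (McCarthy & Jarvis, 2010), not the study's actual implementation; the function names are my own, and 0.72 is the conventional TTR threshold from the original MTLD paper. The measure counts "factors": stretches of text over which the type-token ratio stays above the threshold, with partial credit for the leftover stretch.

```python
def mtld_one_direction(tokens, ttr_threshold=0.72):
    """One directional pass of MTLD.

    Counts factors: stretches of text whose type-token ratio (TTR)
    stays above the threshold. A partial factor is added for the
    leftover stretch at the end of the text.
    """
    factors = 0.0
    types = set()
    count = 0
    for token in tokens:
        count += 1
        types.add(token)
        if len(types) / count <= ttr_threshold:
            factors += 1.0           # TTR dropped: close this factor
            types, count = set(), 0  # and start a fresh stretch
    if count > 0:                    # partial credit for the remainder
        ttr = len(types) / count
        factors += (1.0 - ttr) / (1.0 - ttr_threshold)
    # a fully diverse text never completes a factor; fall back to length
    return len(tokens) / factors if factors > 0 else float(len(tokens))


def mtld(tokens, ttr_threshold=0.72):
    """MTLD is the mean of the forward and backward passes."""
    forward = mtld_one_direction(tokens, ttr_threshold)
    backward = mtld_one_direction(tokens[::-1], ttr_threshold)
    return (forward + backward) / 2.0
```

A text with more varied vocabulary sustains a high TTR for longer stretches and so scores higher; tokenization and lowercasing would happen upstream of this function.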

Click here to download the full paper!

A Criteria-Based Approach to Feedback in Social Q&A


Screenshot: An answer rating from one of the experimental conditions.

Prosocial Computing Lab, September 2016 - Present

Collaborators: Dr. Gary Hsieh, Dr. Erin Walker

Synopsis: Social Question and Answer (Q&A) websites offer users a place to post questions that are then answered by other users. Encouraging high-quality contributions to these sites can benefit askers, answerers, and others who use the answer archive. In this study, I examined the utility of providing crowdsourced feedback to answerers on Brainly, a student-centered Q&A website. In an experiment with 55 Brainly answerers, I compared perceptions of the current 5-star rating system against two feedback designs that used three explicit criteria (Appropriate, Understandable, and Generalizable). Contrary to the experimental hypotheses, the criteria-based designs resulted in lower perceived utility. I investigated these results through interviews and derived a set of implications for the design of answerer feedback in online Q&A.

Click here to download the full paper!

The Construction of Distributed Mentoring Circles


Figure (by Niharika Sharma): A connected mentorship network. The circles at each epicenter represent authors, while the connected circles clustered around them are reviewers. The color of each circle represents each user's top fandom.

Human Centered Data Science Lab, September 2017 - Present

Collaborators: Dr. Cecilia Aragon, Dr. Sarah Evans, Ruby Davis, Niharika Sharma.

The concept of Dunbar's number -- a cognitive limit on the number of social relationships an individual can comfortably maintain -- has been well studied in both online and offline contexts. Analyses of Twitter and Facebook show distinct levels of relationship investment, differing between a person's smallest circle of close friends and increasingly larger groups of friends and acquaintances. Do sites of distributed mentoring show a similar structure? To find out, we replicated Dunbar's methods on fanfiction review metadata. By examining relationships from reviewers to authors, we consider Dunbar's number in a digital context. Using clustering techniques implemented in Python, we discovered two to three relationship layers in the fanfiction dataset, shedding light on the structure of the distributed mentoring community. The results characterize the relationships between writers and readers of fanfiction and may inform more effective platforms for distributed mentoring.

From Do-It-Yourself to Do-It-Together: How People Share Knowledge in a Student-Run Bike Shop


Photo (by Kai Lukoff): The front desk at the ASUW Bike Shop

ASUW Bike Shop, March - June 2017

Collaborators: Sarah Inman, Kai Lukoff

The Associated Students of the University of Washington (ASUW) Bike Shop is a hub for students and community members at the University of Washington (UW) who ride bikes. This student-run, nonprofit repair shop offers students and faculty bike servicing, classes, and opportunities to work and volunteer. Setting it apart from most other repair shops, the ASUW Bike Shop encourages students to work on their own bikes by letting them use the space and tools for free. Help and advice from the mechanics is also free -- as a result, the shop is a space where beginner and expert bike mechanics collaborate and share knowledge. A previous study of collaborative bike repair showed that the loss of co-presence when collaboration occurs over a video call drastically reduces the efficacy of cooperation [6]. We expand that scope by showing how the site of repair serves functions beyond providing co-presence during a single collaboration and beyond the experience of a single beginner-expert pair. Understanding the practices of knowledge sharing in this informal learning environment requires a deep examination that looks outside a single controlled interaction. In the present study, we examine knowledge sharing at the ASUW Bike Shop using ethnographic methods to address our research question: how do people share knowledge in the ASUW Bike Shop? Our results show how the values and interests of students and mechanics at the shop drive a culture of knowledge sharing. By examining the different kinds of knowledge present at the shop, we reveal how the nature of knowledge shapes the interactions that occur in the process of sharing. We also arrive at a new understanding of the self-sufficiency culture of do-it-yourself (DIY), which we call do-it-together (DIT). Finally, we suggest research and design implications for communities of practice such as a bike repair shop.