Social Media TestDrive: Social Media for Education

Social Media TestDrive is an interactive educational program developed by the SML, in partnership with Common Sense Education, that offers modules on key digital citizenship topics, such as managing privacy settings, smart self-presentation, upstanding to cyberbullying, and news literacy. Each Social Media TestDrive module teaches a set of digital citizenship concepts and allows youth to practice and reflect upon what they have learned using a simulated social media experience within a safe and protected platform.

In August 2019, Social Media TestDrive underwent a nationwide launch alongside Common Sense Education’s new Digital Citizenship curriculum. Currently, 9 Social Media TestDrive modules are linked as extension activities to the corresponding Common Sense Education Digital Citizenship lessons and are freely available to the public. As of September 2022, over 630,000 people have used the tool.

In Spring 2020, the team was awarded a $300,000 NSF Early-Concept Grant for Exploratory Research (EAGER) from the Secure and Trustworthy Cyberspace program, titled Addressing Social Media-Related Cybersecurity and Privacy Risks with Experiential Learning Interventions. This grant supports research to identify and create targeted interventions for subgroups of middle school youth who face higher privacy risks online. Another goal of this research is to extend the reach of Social Media TestDrive to a population it was not originally designed for: older adults. To do this, we will use an inclusive privacy-based approach to identify needs and concerns specific to older adults, and then develop and evaluate an intervention suited to those needs.

In Fall 2020, the team will begin a multisite outcome evaluation of the Social Media TestDrive platform, using surveys, focus groups, and one-on-one interviews. This outcome evaluation is a critical next step for understanding and improving how Social Media TestDrive affects digital literacy knowledge, attitudes, skills, and behaviors. This research will be conducted in collaboration with our community partners, New York State 4-H and Cornell PRYDE.

For more information and access to the Social Media TestDrive modules, please visit https://socialmediatestdrive.org/.


DiFranzo, D., Choi, Y.H., Purington, A., Taft, J.G., Whitlock, J., Bazarova, N.N. (2019). Social Media TestDrive: Real-World Social Media Education for the Next Generation. In Proceedings of the 2019 ACM Conference on Human Factors in Computing Systems (CHI’19). Glasgow, UK.

Making Social Media a Positive Experience for Young Users (March 26, 2020), Psychology Today.

Test-Drive Social Media for Digital Citizenship (August 19, 2019), Psychology Today.

Deterring Objectionable Behavior in Social Media

This project is an interdisciplinary collaboration that grew out of a Cornell Center for Social Sciences project co-led by Prof. Natalie Bazarova and Prof. Drew Margolin. It brings together faculty members from Communication, Information Science, and Organizational Behavior, as well as several members of the Social Media Lab, including Aspen Russell, Pengfei Zhao, and Ashley Shea. This is a 4-year project with a multi-tier methodology that involves big data observational studies, mock-up and Truman experiments, simulations, and development of new training modules on TestDrive.

This work seeks to develop a theoretical model for understanding the emergence and maintenance of norms that deter objectionable behavior in self-organized social media spaces where rules are not set by any authority. Objectionable speech, such as misinformation, hate speech, and harassment, is prevalent in these online environments, which raises the question of how individuals can foster norms to discourage it. While researchers have noted the influence of social norms within social media and online communities, existing theoretical work on the mechanisms through which such norms emerge focuses on norms promoting cooperation rather than norms that deter unwanted contributions. This project will benefit public discourse in online spaces, as well as research and educational outcomes, by: (1) developing interventions that help citizens become effective objectors to the misinformation, hate speech, and harassment they are likely to encounter on social media; (2) developing a novel research tool for bridging individual and collective experimentation; (3) providing and disseminating theoretical models of how individual and collective audiences respond to objections to problematic content in different domains; and (4) raising awareness of the potential for objections, even well-intentioned ones, to backfire under particular audience conditions. The result of this research will be a theoretical advancement in the understanding of emergent norms for deterring unwanted behaviors, as well as an internally and externally validated multilevel model recommending concrete strategies to be deployed in the real world.


Asylum Seekers and Digital Health Tools

Asylum seekers are a vulnerable population that faces many challenges, such as accessing resources and navigating information precarity. While many resources are available, such as non-profit community organizations, public healthcare benefits, and other programs, their use may be complicated by asylum seekers’ privacy concerns, family and health situations, and technology experience. Studying how asylum seekers access resources, find information online, and use technology can provide an understanding of their unique needs and important design considerations for digital tools or interventions that could help bridge access to resources.

These issues are exacerbated by the current political climate in the U.S., widespread misinformation online, and the COVID-19 pandemic; helping this population access resources, such as legal information and public healthcare benefits, is therefore a particularly timely need.

To help asylum seekers and refugees find relevant information, we created a digital resource, the RightsforHealth website, which contains information about public benefits available to immigrants based on their immigration status.

This project is part of a wide collaboration between teams in the Cornell Social Media Lab, Cornell Law School, and Weill Cornell Medicine.

To learn more about this project, please see papers and presentations below.

Relevant publications/presentations:

Bhandari, A., Freed, D., Pilato, T., Taki, F., Kaur, G., Yale-Loehr, S., Powers, J., Long, T., & Bazarova, N.N. (Forthcoming). Multi-stakeholder Perspectives on Digital Tools for U.S. Asylum Applicants Seeking Healthcare and Legal Information. Proceedings of the ACM on Human-Computer Interaction (PACM HCI), CSCW 2022.

Freed, D., Bhandari, A., Yale-Loehr, S., & Bazarova, N.N. (2022). Using Contextual Integrity to Evaluate Digital Health Tools for Asylum Applicants. 4th Annual Symposium on Applications of Contextual Integrity.

Migrations project helps refugees claim health care rights (March 29, 2022), Cornell Chronicle.

Intervening to Stop Cyberbullying

To date, we have completed a series of studies investigating cyberbullying and bystander interventions. The goal of this research is to understand how bystanders intervene online and how such interventions can be encouraged in cyberbullying situations.

One study, led by former SML post-doc Dominic DiFranzo, explored the effects of design on bystander intervention using a total social media simulation (Truman). Depending on the experimental condition, participants were given varying information about audience size and viewing notifications, intended to increase bystanders’ sense of personal responsibility. Results from this study indicate that design changes that increased participants’ feelings of accountability prompted them to accept personal responsibility in instances of cyberbullying.

Another study, led by recent PhD graduate Sam Taylor, examined the roles of empathy and accountability in bystander intervention. In this study, design interventions were developed to increase accountability and empathy among bystanders. The results indicate that both accountability and empathy predicted bystander intervention, but the types of bystander actions promoted by each mechanism differed.

A different study, led by former PhD student Francesca Kazerooni, investigated how different forms of cyberbullying repetition influenced the appraisal of cyberbullying incidents and bystanders’ willingness to intervene. This study found that increasing the number of aggressors on Twitter does increase the likelihood of each stage of the bystander intervention model, but only under certain conditions.


Taylor, S., DiFranzo, D., Choi, Y. H., Sannon, S., & Bazarova, N. (2019). Accountability and Empathy by Design: Encouraging Bystander Intervention to Cyberbullying on Social Media. Proceedings of the ACM on Human-Computer Interaction (PACM HCI), 3, 1-26.

DiFranzo, D., Taylor, S. H., Kazerooni, F., Wherry, O. D., & Bazarova, N. N. (2018). Upstanding by Design: Bystander intervention in cyberbullying. In Proceedings of the 2018 ACM Conference on Human Factors in Computing Systems (CHI’18).

Kazerooni, F., Taylor, S. H., Bazarova, N. N., & Whitlock, J. L. (2018). Cyberbullying bystander intervention: The number of offenders and retweeting predict likelihood of helping a cyberbullying victim. Journal of Computer-Mediated Communication, 23(3), 146-162.

Youth Tech Safety

While technology use has created opportunities for learning and social connection, it can also be used as a tool of abuse in teenagers’ interpersonal relationships. Our research focuses on youth digital safety, privacy, and security. We investigate how different stakeholders, such as parents, caregivers, educators, and youth, respond to digital risks and harms within the wider context in which they occur. We develop tools and technologies for prevention and intervention, and engage in policy efforts to improve digital privacy and safety for youth. This research examines the suitability of existing resources, identifies how they might be improved, and aims to create a publicly available guide that provides stakeholders with resources for privacy education and mitigation strategies to protect youth.

Prosocial Behaviors in the Digital Age

The capacity of digital technologies to facilitate both anti-social and prosocial behaviors has become a pressing challenge with significant science and policy implications.


Critical Social Media Literacy

Informed by the critical literacy framework (Freire, 1970; Lewison et al., 2008), this systematic literature review investigates how the competences necessary to empower social media users map onto an integrated conceptualization of social media that accounts for their unique features and broad range of opportunities and risks (Vanwynsberghe, 2014), as well as for the outcomes that emerge from users’ situated interaction with these technologies (Costa, 2018).


References

Costa, E. (2018). Affordances-in-practice: An ethnographic critique of social media logic and context collapse. New Media & Society, 20(10), 3641–3656. https://doi.org/10.1177/1461444818756290

Freire, P. (1972). Education: Domestication or liberation? Prospects, 2(2), 173–181.

Lewison, M., Leland, C., & Harste, J. C. (2008). Creating critical classrooms: Reading and writing with an edge. Lawrence Erlbaum.

Vanwynsberghe, H. (2014). How users balance opportunity and risk. A conceptual exploration of social media literacy and measurement [Doctoral dissertation]. Ghent University.

Empowering Parents in Tweens’ Social Media Privacy Education

Parents and caretakers play integral roles in tweens’ lives, especially in teaching tweens how to responsibly and safely navigate an increasingly digitally integrated world. As social media usage grows among younger age groups, there is a pressing need to support youth in safely building healthy and productive habits and practicing the skills needed to navigate risks online. Despite the growth in available resources and educational interventions (e.g., digital literacy and citizenship curricula and online guides), many of these resources are not tailored to accommodate parental perspectives and resource constraints arising from socioeconomic factors, family structure, cultural alignment, digital competencies, and self-efficacy. For instance, parents may hold varying attitudes toward technology use and have varying access to resources and mitigation strategies, which ultimately shape how they support their children in learning, developing, and practicing these skills. This project aims to empower parents in supporting tweens’ social media privacy education.

The team plans to apply these insights to Social Media TestDrive, which was developed by our team in partnership with Common Sense Education and 4-H for use by educators, families, and individuals in a variety of settings, from the classroom to the home.

In Fall 2020, the team received a Facebook Research Award for People’s Expectations and Experiences with Digital Privacy to support this project.

Distress and Disclosure on Social Media

After experiencing negative life events or emotions, individuals often feel the need to disclose their experiences and feelings to others. Although people may not routinely use social networking sites (SNSs) to disclose distressing personal information, due to concerns about self-presentation and privacy, recent research has shown that people sometimes use SNSs to discuss negative experiences and that these sites can be a critical coping resource. Because SNSs offer opportunities for seeking social support and a place to vent negative emotions, which can help people in distress, this project will provide an important understanding of how, when, and why people disclose distressing information on SNSs.

Pengfei Zhao is working on a project that aims to investigate the associations between SNS affordances, the imagined audience, and distress disclosure on SNSs.