Chatbots for cyberbullying bystander education

IS grad student Haesoo Kim and PIs Natalie Bazarova and Qian Yang study the use of chatbots in cyberbullying bystander education.

This study focuses on developing a chatbot system designed to support learners in becoming effective upstanders in cyberbullying situations. We explore the barriers bystanders often face in the intervention process, and build on these findings to provide effective persuasion strategies and practical advice that train bystanders to stand up against online harassment. By leveraging the flexibility and utility of large language models in such educational tools and programs, we aim to enhance users' confidence and ability to address cyberbullying, contributing to a safer and more supportive online environment.

 Collaborators: Haesoo Kim, Natalie Bazarova, Qian Yang.

Tech, health, and policy

As technology increasingly permeates our daily lives, its impact on health and healthcare grows as well. In this project, we explore the complexities of various emerging digital threats, as well as how healthcare tools and policy can respond to them. We also examine how novel technological tools and practices, such as telehealth and digital healthcare systems, are introduced to address healthcare needs, and how they are used in practice. We are currently conducting a study on youth healthcare practitioners' knowledge and perspectives regarding digital risk factors and digital abuse, with a particular focus on screening processes. We are interviewing youth practitioners about their experiences addressing digital abuse and related risks, the extent to which they have received practical training to prepare for such cases, and the barriers they face in providing care for affected youth.

Collaborators: Haesoo Kim, Diana Freed, Marianne Sharko, Ermira Uldedaj.

Self-Control in Manipulative Algorithmic Environments

The SML tests the effects of two cognitive interventions to support emerging adults' digital self-control.

Many features of social media platforms and websites – such as infinite scroll and autoplay – are intentionally designed to bypass users' reflective decision-making system to deliver effortless, short-term gratifications. While these features feel gratifying in the moment, they are often counterproductive because they undermine meaningful reward and well-being over the long term. Efforts to promote digital self-control have typically focused on technological fixes to override impulsive tendencies. This study takes a different approach. Leveraging insights from cognitive science, it presents a constructive dual-pathway approach to promote digital self-control. Moving beyond deficit-based models that frame self-control failure as a sign of resource depletion, it embraces a process model based on attentional shifting to bring the self-control dilemma into focus, and motivational anchoring to align valuation of choice with long-term rewards. Through a two-wave longitudinal experiment, we test the effects of two cognitive interventions to support emerging adults' digital self-control.

Social Norm Project

The Social Norm Project is a collaborative team studying harassment and objections to harassment online.

This study investigates the question: When two competing behaviors coexist, which is perceived as normative? We focus on harassment and objection to harassment in online communities, exploring how their relative frequencies shape perceptions of social norms. Specifically, we examine how exposure to these behaviors influences users' perceptions of what is acceptable, and how these norms affect individuals' intentions to act and their likelihood of objecting to harassment. We conducted experimental studies using The Truman Platform, a simulated social media site, to identify mechanisms behind social norm formation and how others' behaviors shape newcomers' understanding and actions. We are currently expanding to field studies, with the goal of highlighting the importance of proactive responses and developing an intervention that promotes user-driven upstanding.

Collaborators: Inhwan Bae, Natalie Bazarova, Dominic DiFranzo, Winice Hui, Rene Kizilcec, Han Li, Drew Margolin, Pengfei Zhao.

Narratives in Counterspeech: A Field Experiment on Social Media

This project investigates the feasibility and effectiveness of narrative-based counterspeech in mitigating objectionable content in real-world online settings, specifically targeting vaccine misinformation and anti-immigrant rhetoric on social media platforms such as Reddit. The study also examines whether exposure to such narratives motivates prosocial interventions from bystanders, empowering them to effectively counteract problematic online content.

Collaborators: Han Li, Natalie Bazarova, Dominic DiFranzo, Rene F. Kizilcec, Drew Margolin.

AI-Counter Speaker

This study investigates how online communities respond to AI agents that engage in counterspeech against problematic content. Counterspeech, or responses that challenge toxic or hateful messages, has shown promise in promoting prosocial norms, but delivering it effectively remains difficult. Recently, generative AI and large language models have been explored as potential tools for producing counterspeech at scale. However, the use of AI in this role introduces new questions. Can AI agents be seen as legitimate and authentic voices in community discourse? Will people accept moral or empathetic messages from nonhuman sources? And should AI agents present themselves as fellow community members or simply as technical tools? This study aims to address these questions by examining the effectiveness and reception of AI-generated counterspeech in online communities.

Collaborators: Pengfei Zhao, Natalie Bazarova, Drew Margolin.

Deterring Objectionable Behavior in Social Media

This project is an interdisciplinary collaboration that grew out of the Cornell Center for Social Sciences project co-led by Prof. Natalie Bazarova and Prof. Drew Margolin. This project brings together faculty members from Communication, Information Science, and Organizational Behavior, as well as several members of the Social Media Lab, including Aspen Russell, Pengfei Zhao, and Ashley Shea. This is a 4-year project with a multi-tier methodology that involves big data observational studies, mock-up and Truman experiments, simulations, and development of new training modules on TestDrive.

This work seeks to develop a theoretical model for understanding the emergence and maintenance of norms to deter objectionable behavior in self-organized social media spaces where rules are not set by any authority. Objectionable speech, such as misinformation, hate speech, and harassment, is prevalent in these online environments, which raises the question of how individuals can foster norms to discourage objectionable speech. Yet while researchers note the influence of social norms within social media and online communities, existing theoretical work on the mechanisms through which such norms emerge focuses on norms promoting cooperation, as opposed to norms that deter unwanted contributions. This project will benefit public discourse in online spaces, as well as research and educational outcomes, by: (1) developing interventions that help citizens become effective objectors to the misinformation, hate speech, and harassment they are likely to encounter on social media; (2) developing a novel research tool for bridging individual and collective experimentation; (3) providing and disseminating theoretical models of how individual and collective audiences respond to objections to problematic content in different domains; and (4) raising awareness of the potential for objections, even if well-intentioned, to backfire in particular audience conditions. The result of this research will be a theoretical advancement in the understanding of emergent norms for the deterrence of unwanted behaviors, as well as an internally and externally validated multilevel model recommending concrete strategies to be deployed in the real world.

Asylum Seekers and Digital Health Tools

Asylum seekers are a vulnerable population that faces many challenges, such as accessing resources and navigating information precarity. While many resources are available, such as non-profit communities, public healthcare benefits, and other programs, their use may be complicated by asylum seekers' privacy concerns, family and health situations, and technology experience. Studying how asylum seekers access resources, find information online, and use technology can provide an understanding of their unique needs and important design considerations for digital tools or interventions that could help bridge access to resources.

These issues are exacerbated in the current state of the world with the political climate in the U.S., widespread misinformation online, and the COVID-19 pandemic; therefore, helping this population access resources, such as legal information and public healthcare benefits, is a particularly timely need.

To help asylum seekers and refugees find relevant information, we created a digital resource, the RightsforHealth website, which contains information about public benefits available to immigrants based on their immigration status.

This project is part of a wide collaboration between teams in the Cornell Social Media Lab, Cornell Law School, and Weill Cornell Medicine.

To learn more about this project, please see papers and presentations below.

Relevant publications/presentations:

Bhandari, A., Freed, D., Pilato, T., Taki, F., Kaur, G., Yale-Loehr, S., Powers, J., Long, T., & Bazarova, N.N. (Forthcoming). Multi-stakeholder Perspectives on Digital Tools for U.S. Asylum Applicants Seeking Healthcare and Legal Information. PACM HCI (CSCW 2022).

Freed, D., Bhandari, A., Yale-Loehr, S., & Bazarova, N.N. (2022). Using Contextual Integrity to Evaluate Digital Health Tools for Asylum Applicants. 4th Annual Symposium on Applications of Contextual Integrity.

Migrations project helps refugees claim health care rights (March 29, 2022), Cornell Chronicle

Intervening to Stop Cyberbullying

To date, we have completed a series of studies investigating cyberbullying and bystander interventions. The goal of this research is to investigate bystander interventions online, and the ways in which interventions can be encouraged in cyberbullying situations. 

One study, led by former SML post-doc Dominic DiFranzo, explored the effects of design on bystander intervention using a total social media simulation (Truman). Depending on the experimental condition, participants were given varying information regarding audience size and viewing notifications, intended to increase the sense of personal responsibility in bystanders. Results from this study indicate that design changes that increased the participants' feelings of accountability prompted them to accept personal responsibility in instances of cyberbullying.

Another study, led by recent PhD Sam Taylor, examined the role of empathy and accountability in bystander intervention. In this study, design interventions were developed that aimed to increase accountability and empathy among bystanders. The results indicate that both accountability and empathy predicted bystander intervention, but the types of bystander actions promoted by each mechanism differed.

A different study, led by former PhD student Franccesca Kazerooni, investigated how different forms of cyberbullying repetition influenced the appraisal of instances of cyberbullying and bystanders' willingness to intervene. This study found that increasing the number of aggressors on Twitter increases the likelihood of each stage of the bystander intervention model, but only under certain conditions.

Taylor, S., DiFranzo, D., Choi, Y. H., Sannon, S., & Bazarova, N. (2019). Accountability and Empathy by Design: Encouraging Bystander Intervention to Cyberbullying on Social Media. Proceedings of the ACM on Human-Computer Interaction (PACM HCI), 3, 1-26.

DiFranzo, D., Taylor, S. H., Kazerooni, F., Wherry, O. D., & Bazarova, N. N. (2018). Upstanding by Design: Bystander intervention in cyberbullying. In Proceedings of the 2018 ACM Conference on Human Factors in Computing Systems (CHI’18).

Kazerooni, F., Taylor, S. H., Bazarova, N. N., & Whitlock, J. L. (2018). Cyberbullying bystander intervention: The number of offenders and retweeting predict likelihood of helping a cyberbullying victim. Journal of Computer-Mediated Communication, 23(3), 146-162.