— with Zeerak Talat (University of Edinburgh) and Flor Miriam Plaza del Arco (Bocconi University)
When: April 3–4, 2025
Where: Weizenbaum Institute, Berlin (Kassenhalle)
Level: Beginner
Category: Theory, data collection, data analysis
Seats: 20 (in-person); no limit (online)
Call for Abstracts & Registration
Abstract: In recent years, language models have shown improved performance on tasks such as translation, sorting, and text generation, which has led to their integration into a variety of fields, including medicine, software engineering, and the social sciences. Parallel to this technological proliferation, the emerging field of Responsible AI research has revealed various socio-technical biases in language models that result in discrimination based on attributes such as ethnicity and gender. These findings force both social scientists and computer scientists who integrate these tools into their research to reflect on how they can detect and mitigate potentially biased outcomes. In doing so, they contribute to an expanding body of literature that critiques how discrimination is conceptualized, how bias measurements are operationalized, and how existing bias benchmarks are constructed. Many of these issues stem from a lack of genuine interdisciplinary collaboration between NLP researchers and researchers from the various social science disciplines. This hybrid workshop is meant to provide a space for interdisciplinary exchange toward responsible research on and with language models.
Submission
The submission deadline is February 23, 2025. We encourage participants to submit research that applies theories, concepts, and research methods from economics or the social sciences to the evaluation and analysis of language models and to the mitigation of socio-technical biases in language technologies. We also call for research on problems that may arise from using language models as tools in the social sciences, as well as on best practices for social scientists to use language technologies responsibly in their research.
Submissions should fit into one (or both) of the following categories:
Addressing Bias Measurement and Mitigation:
- approaches to evaluating and mitigating socio-technical bias
- language model auditing, risk management, and alignment
- personalisation and discrimination
NLP as a Tool in Social Science and Its Implications:
- NLP as a research tool, potential risks, and/or potential solutions
- implications of using NLP tools for measurement
- validation and best practices
Abstracts are limited to a maximum of 500 words.
You can find the Call for Abstracts and the registration form here.
The workshop is co-organized with Jan Batzner from the WI research group “Digital Economy, Internet Ecosystem, and Internet Policy” and Fatma Elsafoury from the WI research group “Data, Algorithmic Systems and Ethics”.