The fourth edition of our Introduction to Programming and Data Analysis with R workshop took place on March 25 and 26, 2026, continuing the tradition of hands-on, beginner-friendly training in R, a powerful tool for data science and statistical analysis. For those who attended previous editions, the structure and content remained familiar and effective: a two-day immersive experience covering the fundamentals of R syntax, Markdown/Quarto, data wrangling, analysis, visualization, and reproducible research practices. If you are new to R or looking to refresh your skills, this workshop remains a great starting point.
Roland Toth provides an overview of the workshop goals
We’re proud to see a consistent number of participants attending each year. The workshop’s format has been shaped by feedback from past attendees, and we have kept the core curriculum intact to ensure a smooth learning curve. If you missed this year’s session, you can still explore the material through our previous recaps:
These posts offer summaries and key takeaways—perfect for catching up or preparing for the next edition. Stay tuned for updates on the 2027 workshop, and keep coding with R! 📊💻
Join us for the workshop Walking Through and Scrolling Back: Digital Ethnographic Methods for Platform Research, organized by the Methods Lab. On May 12–13, 2026, Dr. Daniela Jaramillo-Dent will introduce participants to innovative ethnographic approaches for studying visual and interactive social media platforms.
This hands-on workshop focuses on two complementary methods: the walkthrough method and the scroll back method. Participants will learn how to engage directly with platform interfaces to better understand how design features, technological mechanisms, and cultural references shape user experiences. In addition, the scroll back method will be explored as an interview-based adaptation, inviting participants to revisit their own platform histories and reflect on their interactions and meaning-making processes. Through practical exercises and examples from digital communities, the workshop offers valuable insights into how discourses emerge and evolve on visual platforms such as Instagram and TikTok.
The workshop is designed for beginner to intermediate researchers who are interested in expanding their qualitative methodological toolkit.
Seats are limited. To learn more, please visit our program page. We look forward to welcoming you!
On March 19th, PhD researchers and postdocs from the WI, together with colleagues from partner institutions including DeZIM, FU Berlin, and WZB, participated in a hands-on workshop on qualitative interviews as a method of data collection. With participants at different stages of their academic careers, the workshop offered a lively and collaborative space to reflect on both the practical challenges and methodological nuances of interview-based research.
The session began with short inputs from Zozan Baran (FU Berlin), Samuel Zewdie Hagos (DeZIM), and Georg von Richthofen (HIIG). Drawing on their own research experiences, they shared insights into what makes qualitative interviews both rewarding and demanding. Meaningful interviews are not only about asking the “right” questions, but also about building trust, remaining reflexive, and approaching the research process with care and attentiveness.
A recurring theme throughout the workshop was positionality: how researchers are perceived, how they position themselves, and how this shapes the interview situation. While shared language or similar backgrounds can help establish rapport, the speakers emphasized that these factors do not erase existing asymmetries. Instead, they highlighted the importance of continuously reflecting on expectations, power dynamics, and vulnerabilities on all sides of the interaction.
This kind of reflection, participants noted, starts well before entering the field. Engaging deeply with theory and existing literature was framed as essential preparation – captured in the idea of approaching interviews with “an open mind, but not an empty head.”
Beyond interview design, the workshop also explored the broader conditions under which interviews take place. Discussions addressed practical considerations such as the choice of setting (online vs. in person) and how each shapes the interaction. Ethical questions also played a central role, particularly when working with sensitive or potentially traumatic topics.
In the moderated discussion that followed, participants connected these themes to their own research projects. Conversations around locality, navigating difficult situations, and managing the emotional demands of working with vulnerable groups led to a rich exchange of perspectives and strategies.
Rather than offering a fixed set of rules, the workshop highlighted the iterative nature of qualitative interviewing: analyze, adapt, and refine.
Join us for the workshop Qualitative Interviews in Practice, organized by the Methods Lab at the Weizenbaum Institute. On March 19, 2026, three experienced researchers will share insights from their hands-on work with qualitative interviews.
The workshop focuses on practical experience, reflection, and methodological exchange. Each invited expert will give a short input based on their own research practice, addressing topics such as planning and preparing interviews, conducting interviews in different contexts, training interviewers, handling challenging situations, and reflecting on lessons learned. The inputs will be followed by an open discussion, where participants can bring their own projects, questions, and experiences.
The workshop is open to researchers at different stages of their careers—from those preparing their first interviews to those with extensive field experience who are interested in exchanging perspectives and best practices.
To learn more, please visit our program page. We hope to see you there!
The Methods Lab is happy to welcome back the fourth annual Programming and Data Analysis with R workshop, led by Roland Toth (WI). The two-day workshop will take place at the Weizenbaum Institute on Wednesday, March 25, and Thursday, March 26.
Aimed at participants with beginner to intermediate experience, the workshop offers a practical introduction to programming in R. On the first day, participants will learn the basics of coding, key data wrangling techniques, and how to work with Markdown. The second day builds on this foundation by focusing on data analysis through hands-on work with real datasets, allowing participants to explore a research topic with guided support.
Across both days, the workshop combines clear explanations with practical coding exercises, creating an interactive and supportive learning environment for developing core data analysis skills.
Seats are limited to 20 participants. For more information, check out the program page!
The third edition of the Introduction to Programming and Data Analysis with R workshop took place on March 12th and 13th, 2025. Roland Toth of the Methods Lab at the Weizenbaum Institute introduced almost 20 participants to essential methods of data analysis through comprehensive coverage of fundamental R programming concepts and techniques.
Roland asks participants about their former experience with programming
On the first day, Roland guided participants through the basics of R syntax and its integration with Markdown/Quarto in an interactive environment. This included the very basics of programming like functions, objects, and indexing, but also data-related practices like data wrangling, sanity checks, and simple statistical analyses. Participants also gained insight into managing the warnings and errors that can stall the coding process throughout a project.
On day two, after an introduction to data visualization techniques, participants put their learning into practice: They explored provided survey data and developed a research question, so they could prepare and statistically analyze the data accordingly in R. The result was a reproducible HTML report on the reasoning behind the research question, all data wrangling steps, an exploration of the data set, the analysis, and the results including an interpretation. Attendees also supported each other’s progress whenever possible, while Roland offered personalized guidance.
The workshop alternated between lecture-like and interactive formats
The workshop concluded with a thorough review of useful functions and packages in R. Throughout the event, participants were encouraged to ask questions freely and frequently, and they made ample use of the opportunity. The Methods Lab would like to extend warm thanks to all guests for their attendance and lively participation!
On November 26, 2024, Maximilian Heimstädt, Professor of Digital Governance & Service Design at the Helmut Schmidt University in Hamburg, shared his experiences and expertise in applying qualitative methods to studying algorithms in organizations. This workshop was co-organized by the Methods Lab and the Research in Practice – PhD Network for Qualitative Research, coordinated by Katharina Berr and Jana Heim.
The workshop focused on the complexities of studying algorithms from an interpretivist social science perspective; not only the potentials and risks people ascribe to them, but how they are made sense of, enacted, negotiated and integrated into everyday work settings. Drawing on joint research with Simon Egbert on predictive policing, Max shared how he gained access to public sector organizations, approached team-based multi-sited ethnographic fieldwork and learned to understand complex technologies developed and implemented across different empirical sites and over time.
Maximilian Heimstädt presents theoretical approaches to research algorithms in practice
Max introduced three central theoretical approaches from organization studies and critical data studies to research algorithms in practice: technology trajectories, biographies of algorithms, and data journeys, all of which afford different analytical lenses and offer more nuanced understandings of algorithmic systems. The approach of technology trajectories expands research on the design and use of technologies by integrating broader questions of power, ideology, and institutional change (Bailey & Barley, 2020). Approaching digitalization research from a biographies approach draws attention to the dynamic development of digital technologies, understood as ‘entangled, relational, emergent, and nested assemblages’ across different organizational contexts and time (Glaser, Pollock, & D’Adderio, 2021). Finally, the data journeys approach allows researchers to ‘focus attention on the life of data as they move through space and time, through different sites and cultures of data practice’, and offers a perspective that is attentive to frictions of such data journeys (Bates, Lin, & Goodale, 2016). Following the introduction of these approaches, the workshop participants explored how their own research has been (both implicitly and explicitly) informed by them, and discussed their practical and epistemic potentials and limits.
The Idea Behind the ‘Research in Practice’ Workshop Series
Qualitative research often feels polished in academic publications, but the reality is that the process can be quite complex at times, and full of twists and turns. We have created this workshop series to center the ‘backstage’ of qualitative research. The goal is to hear directly from scholars about how they conduct their work – the challenges, the unexpected discoveries and unplanned adaptations, the specific methods and digital tools used, and the strategies that help them arrive at interesting and valuable findings. With this workshop format and research network, we aim to create a space for qualitative researchers within and beyond the Weizenbaum Institute to connect, collaborate, and learn from one another.
What to Expect
Each workshop session in the series brings a new perspective on qualitative (digital) research. Invited scholars walk us through their research processes, focusing on how they have handled the challenges of their work. This includes designing studies, building rapport with research participants, analyzing different kinds of qualitative data, theorizing as method, and navigating ethical considerations. The sessions are interactive, offering opportunities to ask questions, share ideas, and discuss in depth. By opening up the processes behind qualitative research, we hope to demystify the work and facilitate conversations that help researchers at all levels.
If you would like to join our network and to be informed about upcoming events, reach out to Katharina Berr and Jana Heim.
On September 3, 2024, Tobias Dienlin from the University of Vienna held the workshop Open Research – Principles, Practices, and Implementation at WI. In this workshop, he gave an overview of Open Research and its motivations, relevance, and formal and technical implementation.
In the first part of the workshop, Tobias argued that certain problems and values in science are the main reasons why researchers should practice Open Research. The problems included the replication crisis (a lack of or low quality of replication studies, especially in the social sciences), questionable research practices (p-hacking, HARKing, errors), and publication bias (journals prefer exciting, expected, and significant results). The values in question included openness as a foundation of science itself and a dedication to scientific advancement rather than an emphasis on the individuals who achieve it.
Tobias welcomes the participants to the workshop
In the second part, the formal practices of Open Research were discussed. Tobias first clarified the differences between the terms Open Science, Open Research, and Open Scholarship. To achieve a culture of Open Research, he suggested aiming for open access, pre-/post-printing, open reviews, author contribution statements, open teaching, and citizen science. While these practices usually require additional work, the burden can be lowered by considering and preparing them in the initial stages of a research project, for instance by implementing two of the most important Open Research practices: preregistrations and registered reports.
In a preregistration, any details of a study that are already fixed (e.g., theoretical foundation, research questions, hypotheses, analysis methods, …) are published before conducting the study itself. After conducting the study, the preregistration is referred to in the manuscript, and possible deviations from it are explained. This procedure reduces the risk of p-hacking and HARKing, and under specific circumstances a preregistration can even take place after the data have already been collected.
A registered report is a more elaborate version of a preregistration. It consists of all parts of a submission that do not involve the analysis and the results. The submission can therefore be reviewed before the data and results even exist. This way, reviewers are not influenced by results and publication bias can be avoided. While a preregistration can be published anywhere, the registered report format needs to be offered by the journal itself.
Participation was enabled in person as well as online
In the last part of the workshop, the focus was on tools and software that help implement Open Research practices. For example, the free-to-use repository OSF can be used for pre-/post-prints, preregistrations, and online supplementary materials such as data, analysis code, or questionnaires. As an exercise, Tobias gave participants the opportunity to implement a basic preregistration or registered report on OSF for a research project they were already working on and to try out different features, such as linking it to a repository on GitHub. After summarizing the insights of the workshop, Tobias concluded by showing a fitting statement:
Open Science: Just Science Done Right.
During the workshop, participants had plenty of space to ask questions, discuss with everyone or in separate breakout rooms, and interact in various ways. We would like to thank Tobias for this insightful workshop and strongly encourage the implementation of Open Research.
We’re excited to announce our upcoming workshop Open Research – Principles, Practices, and Implementation, which will take place on Tuesday, September 3. This workshop will be conducted both at the Weizenbaum Institute and online, and is open to Weizenbaum Institute members as well as external participants (and the QPD).
Led by Tobias Dienlin, Assistant Professor of Interactive Communication at the University of Vienna, this workshop will equip participants with skills in open research by covering principles of transparency, reproducibility, the replication crisis, and practical sessions on sharing research materials, data, and analyses. It will also cover preregistrations, registered reports, preprints, postprints, the TOP Guidelines, and initiatives like DORA, CORA, and RESQUE. Participants will engage in drafting preregistration plans and discussing the incentives and challenges of open research, aiming to integrate these practices into their work for a more transparent and robust research community.
For further details, visit our program page. We are looking forward to your participation!
On May 6, 2024, Dr. Loris Bennett from FUB-IT at Freie Universität Berlin held the workshop Introduction to High-Performance Computing (HPC) at WI. In this workshop, he gave an overview of the mechanics of HPC and enabled participants to try it out themselves. While the workshop used the HPC cluster provided by FUB-IT as a practical example, most of the contents applied to HPC in general.
Dr. Bennett began with definitions of HPC and core concepts. He described HPC as a cluster of servers providing cores, memory, and storage with high-speed interconnections. These resources are shared between users and distributed by the system itself. Users send jobs consisting of one or more tasks to the HPC cluster. Each task will run on a single compute server, also called a node, and can make use of multiple cores up to the maximum available on a node. The number of tasks per node can be set for each job, but defaults to one. Lastly, an HPC cluster may provide different file systems for different purposes. For example, the file system /home is optimized for large numbers of small files used for programs, scripts, and results, while /scratch is optimized for temporary storage of small numbers of large files.
Dr. Bennett explains the difference between different directories
Next, Dr. Bennett proceeded with resource management. When launching a job, many parameters can be set, such as the number of CPU and GPU cores, the amount of memory, and the maximum run time. In order to determine the resources required for jobs, users need to run a few jobs and check what was actually used. This information can then be used to set the requirements for future jobs and thus ensure that the resources are used efficiently. The priority of a job dictates when the job is likely to start and depends mainly on the amount of resources consumed by the user in the last month. A Quality of Service (QoS) can be set per job, which will increase the priority of the job, but the jobs within a given QoS will be restricted in the total amount of resources they can use. In addition, it is possible to parallelize tasks by splitting them into subtasks that can be performed simultaneously. Likewise, many similar jobs can be planned efficiently using job arrays.
Finally, participants could log into the FUB-IT HPC cluster themselves either using the command line or graphical interface tools and request first sample jobs. They were shown how to write batch files defining job parameters, use commands to submit, show, or cancel jobs, and check the results and efficiency of a completed job.
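To make the batch-file idea concrete: clusters like this are typically managed with the Slurm workload manager (an assumption here; the specific QoS name and program name below are hypothetical placeholders, and the exact options available depend on the cluster's configuration). A minimal batch file defining the job parameters discussed above might look like this:

```shell
#!/bin/bash
# Minimal Slurm batch file (sketch; QoS and program names are hypothetical)
#SBATCH --job-name=example        # name shown in the job queue
#SBATCH --ntasks=1                # one task, i.e. one node (the default)
#SBATCH --cpus-per-task=4         # cores available to the task
#SBATCH --mem=8G                  # memory for the whole job
#SBATCH --time=01:00:00           # maximum run time (HH:MM:SS)
#SBATCH --qos=standard            # Quality of Service (name is cluster-specific)

# The actual work: replace with your own program or script
srun ./my_analysis
```

Such a file would then be submitted with `sbatch job.sh`, running jobs shown with `squeue --me`, and a job cancelled with `scancel <jobid>`; a job array over many similar runs would be requested by adding a line like `#SBATCH --array=1-100`, which runs the same script once per array index.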
The Methods Lab would like to thank Dr. Bennett for his concise but comprehensive introduction to HPC!