Call for Contributions: “Data, Archives, & Tool Demos” at the 2026 DGPuK Annual Conference

The Methods Lab is excited to announce that the “Data, Archives & Tool Demos” special session will return at the 2026 DGPuK Annual Conference in Dortmund. The event will be co-organized by Methods Lab lead Christian Strippel with Johannes Breuer, Silke Fürst, Erik Koenen, Dimitri Prandner, and Christian Schwarzenegger, in collaboration with the GESIS Methods Hub, a community-oriented network for computational tools and resources.

This year’s session continues the format introduced at the 2024 DGPuK Annual Conference in Erfurt, where it was well received. The aim is to keep the discussion going and potentially establish the format as a regular feature of the annual conference. Full details of the call for 2026 contributions are available here.

Researchers are invited to submit abstracts of 200–300 words (in German or English) by January 19, 2026. Eligible contributions must not have been presented at previous DGPuK sessions, must be available for scholarly reuse, and must not be managed commercially. Contributions that have been presented before may be resubmitted only if they have changed significantly, and this must be stated explicitly.

Spotlight: Analyzing Digital Aesthetics—AR Filters and the Commodification of Identity

The tendency to measure our worth against unrealistic beauty standards is hardly a new concern, yet it persists even as feminist theory challenges these ideals and media initiatives claim to represent diverse bodies. Platforms like TikTok, for example, claim that their core values prioritize diversity, with policies meant to uplift underrepresented communities. Still, the self-objectification of both women and men is largely driven by mainstream media (Rodgers, 2015) and significantly impacts users’ self-esteem. A growing body of research links self-objectification to reported declines in cognitive functioning and to mental health issues such as depression and eating disorders (Fredrickson & Roberts, 1997). But with such widespread awareness, why does self-objectification remain so prevalent?

Researchers Corinna Canali and Miriam Doh uncover the “truth” behind the algorithmic structures that construct identities and perpetuate this self-objectification online. Bodies and forms of self-expression deemed “true,” “normal,” or “obscene” are not objective categories. Rather, they are social constructs created by the institutions with the power to define them, such as governments, corporations, or tech platforms. One structure that reinforces self-objectification and distorts self-perception is the use of augmented reality (AR) filters. These filters have become popular across social media networks that an estimated 90 percent of American young adults visit daily, with 58 percent of teens using TikTok alone (Bhandari & Bimo, 2022). The ways in which these filters, incentivized by algorithmic systems, affect identity, favor uniformity, and marginalize diversity are explored in Filters of Identity: AR Beauty and the Algorithmic Politics of the Digital Body, co-authored by Corinna Canali and Miriam Doh.

Corinna is a research associate at the Weizenbaum Institute and the Design Research Lab, and her project partner, Miriam, is a PhD student with the AI for the Common Good Institute / Machine Learning Group at the Université Libre de Bruxelles. Corinna’s inspiration began with a study on nudity in digital and social media contexts. This led to a broader analysis of the “obscenifying” beliefs engrained in digital governance, where systems are built on the assumption that certain bodies, identities, and forms of expression are inherently obscene. From this, she has developed extensive experience across institutions in examining how platforms and policies act as regulatory infrastructures that shape who is seen, censored, or silenced in digital environments.

In an interview conducted for this article, Corinna discusses how AI-driven beauty filters do more than simply mirror existing beauty norms: they actively construct and reinforce them.

Although TikTok’s Effect House policy bans filters that promote unrealistic beauty standards, the platform continues to circulate effects that do exactly that. As an example, Corinna analyzes the Bold Glamour filter, which edits users’ faces in ways that are difficult to detect but tend to favor Eurocentric and cisgender features such as lighter skin, narrow noses, and larger eyes. It also frequently misidentifies or distorts the facial features of certain racial groups. The highest “misclassification rates” were found for Black women, where the filter failed to accurately detect a person’s face or altered their features in ways that do not reflect their real appearance. This systematic exclusion of non-conforming bodies is part of a larger trend in technological design, where digital systems tend to perpetuate dominant norms. Bodies that fall outside these norms, including those that are disabled, fat, Indigenous, or Black, are frequently rendered invisible or misrepresented by the algorithm.

Corinna argues that these marginalized identities also have the most underrepresented narratives when it comes to bias in content moderation:

“Also, if you (tech company) own all the means of knowledge, circulation and production it’s quite difficult to allow any other narrative to exist within these spaces.” 

She then identifies a serious problem in the subtle ways this discrimination persists unnoticed outside academic circles. She noted that while users may recognize individual instances of bias, they often lack the tools or social power to challenge them within digital spaces.

Notably, the marginalization of these groups has deep roots in the broader historical context of racial capitalism, a system in which racial hierarchies are used to justify and sustain economic exploitation (Ralph & Singhal, 2019). When asked whether these systems merely reflect the norms found in biased datasets or are actively structured to serve deeper capitalist goals, Corinna describes the influence of deliberate human intervention:

“The algorithm does not work on its own. It works within a system that is constantly being tweaked by real-time content moderation from humans.” 

She underlines that the active decisions made by the people moderating content follow policies that serve business and corporate models, ultimately benefitting the platform’s profitability.


Figure: The first row shows how the filter incorrectly applies a female-targeted transformation to a male face, while the second row shows a female face altered with a male-targeted filter.

Research shows that frequent use of appearance-enhancing filters is associated with increased body image concerns and higher levels of body dissatisfaction (Caravelli et al., 2025). This lowered self-esteem also carries significant economic consequences, with body dissatisfaction-related issues costing an estimated $226 billion to $507 billion globally in 2019 alone (ibid.). Such widespread dissatisfaction is not incidental but is embedded within the algorithmic logics of these platforms.

To elaborate, these filters produce idealized versions of the self that serve capitalist interests by making users more desirable and easier to manipulate with ads. A user’s face becomes a data point, and their emotions and desires become commodities. Additional findings emphasize how these automated categorization systems and targeted advertisements actively shape user identity. According to Bhandari and Bimo (2022), the system influences how people construct, perform, and manage their sense of self in ways that benefit the platform economically but may be harmful to the user.

Corinna addresses this when asked whether TikTok truly allows space for authentic self-expression, or if users are instead shaping their identities to align with platform incentives. She feels conflicted, but responds with insight into how the platform promotes self-governing systems, where the primary commodities are users, their data, and their attention:

“When there is this constant negotiation between your own self expression and agency against this corporate logic of profit, it is always difficult to understand to what extent you are actually free to be who you want to be.”

To promote more authentic self-expression, Corinna’s work suggests designing filters with inclusive options that allow users to choose their own aesthetic preferences. However, the persistence of algorithmic bias reveals the challenge of resisting systemic power: even when AR filters are redesigned to reject dominant beauty standards, the algorithm may continue to privilege a certain aesthetic.

Bearing this in mind, Corinna takes a transdisciplinary approach that combines design and theory. She describes the problem as multifaceted and interconnected, necessitating an equally dynamic solution that addresses the whole system. Design offers particular benefits here: her background in visual cultures allows her to use visual ethnographic methods to interpret the implications of images, providing a unique perspective on bias. She says,

“Images are a product of visual tradition that has its own biases outside of technology. Technology has inherited a lot of this bias, discrimination and representation from other traditions that live outside of it.” 

In this way, Corinna uses design as a tool to reveal and critique bias. This approach does not require technical expertise, but instead encourages people to recognize the significance of what they’re looking at, and in doing so, begin to see the discriminatory practices embedded within it.

Corinna’s next steps address both the internal complexity of these systems and the broader socio-political contexts in which they function. One possible solution she offers is creating opportunities to rethink and rebuild these platforms, but this requires space for alternative systems. Corinna highlights that it is currently very difficult to access any social network that does not come from a big tech company, because these companies have created conditions in which they hold a near-monopoly over platforms and infrastructures. The process will require time and multiple layers of reworking, starting with how users are educated about these technologies.

To address the lack of adequate understanding of these issues, she plans to develop tools that promote media literacy by helping users understand what these systems are, how they function, and what their greater societal impacts may be. This perspective stresses the reciprocal relationship between society and technology, emphasizing how each continuously shapes the other.

The Design Research Lab and the Berlin Open Lab host many projects that explore research through design, uniquely combining critical theory from the humanities with hands-on material production. The lab takes a different path from traditional product design by emphasizing critical reflection on both how products are made and the ways they shape society. Several research projects in the lab follow this approach, including one that addresses the theme of surveillance capitalism discussed here. For a deeper look into how the lab works, check out the spotlight article linked here!

New Publication: A Mixed Methods Study of Smartphone Use

The Methods Lab is excited to share a new publication in the Journal of Quantitative Description: Digital Media, authored by data scientist Roland Toth together with researchers Douglas Parry and Martin Emmer from the Weizenbaum Institute. The paper, titled From Screen Time to Daily Rhythms: A Mixed Methods Study of Smartphone Use Among German Adults, explores how much, when, how, and under which circumstances Germans use their smartphones throughout the day.

The study presents a detailed analysis of smartphone usage, showing engagement in aggregate, how it varies throughout the day, and how it is associated with users’ socio-demographic characteristics. To do so, the authors combined the Mobile Experience Sampling Method (MESM), Android event logs, and iOS data donations. They reveal distinct temporal patterns that help in understanding the broader contexts, motivations, and situational factors shaping different types of mobile interactions. The paper also outlines practical implications for researchers employing longitudinal and real-time measurement methods in interdisciplinary and social science research. This comprehensive analysis provides a strong basis for further exploration of the psychological, social, and behavioral dimensions of smartphone use.
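For readers curious how such daily rhythms can be described computationally, the sketch below aggregates session-level usage records into an hour-of-day profile. It is a minimal illustration with invented data and column names, not the code or data used in the paper.

```python
import pandas as pd

# Hypothetical session-level records: one row per usage session
# (start timestamp and duration in seconds). Invented for illustration.
sessions = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2],
    "start": pd.to_datetime([
        "2024-05-01 07:12", "2024-05-01 21:40",
        "2024-05-01 08:05", "2024-05-01 12:30", "2024-05-01 23:15",
    ]),
    "duration_s": [180, 1260, 90, 600, 420],
})

# Aggregate total usage per user and hour of day: a simple "daily rhythm"
sessions["hour"] = sessions["start"].dt.hour
rhythm = (
    sessions.groupby(["user_id", "hour"])["duration_s"]
    .sum()
    .div(60)  # convert seconds to minutes
    .rename("minutes_used")
    .reset_index()
)
print(rhythm)
```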

Tool presentation: ChatAI (Academic Cloud)

Most commercial, browser-based generative AI tools have usage or licensing restrictions that prevent users from exploring their full potential. All members of the Weizenbaum Institute now have access to a new chatbot providing unrestricted use of generative AI, thanks to an institution in Göttingen that locally hosts a range of large language models under the name ChatAI. The platform works much like ChatGPT, but it is hosted in Germany and can be used free of charge, without licensing costs. It also includes other useful tools, such as ImageAI, which generates images from text prompts, and VoiceAI, which can interpret human speech.

To get started, simply visit the Academic Cloud website and click on Login in the top right corner. On the login page, choose Federated Login on the right-hand side, then select Weizenbaum Institute from the list of institutions. From there, you can log in using your WI account credentials and follow the on-screen instructions.

After logging in, open the ChatAI tool from the list of services. Within ChatAI, the model selector at the top center of the interface allows you to choose from different language models depending on your needs. Click the ChatAI logo in the top left corner to switch between the different tools for generating text or images, or for working with audio input. With unlimited access to these tools, you can find efficient new ways to enhance your research and creative projects.

Research Stay at Dartmouth College

This spring, Methods Lab student assistant Diana Ignatovich spent four months at Dartmouth College researching octopus cognition and visual processing.

Hidden in the New England wilderness is an underground laboratory housing three male Octopus bimaculoides that were shipped from the Pacific Ocean for non-invasive study using underwater electroencephalography (EEG). One of the octopuses was named Joseph—after the Weizenbaum Institute’s namesake, computer scientist Joseph Weizenbaum.

Studying the octopus’s incredibly complex and decentralized nervous system has typically meant euthanizing the animal or holding its tentacles down; the underwater EEG apparatus used here is methodologically unique in ensuring that no octopuses were harmed. The method records the brain’s electrical activity by detecting signals from groups of neurons, which are amplified by the EEG machine and studied as brain waves. To record this data, the experimental tank consisted of a clear plastic cube with two printed circuit boards on the top and bottom, lined with tripolar concentric ring electrodes and submerged in saltwater. The tanks were also enriched with toys to encourage cognitive stimulation, and all experimental procedures were conducted in accordance with ethical guidelines approved by the Institutional Animal Care and Use Committee (IACUC).

The primary project for this stay was assessing the octopus’s neural critical flicker fusion frequency (CFF) via steady-state visual evoked potentials in the EEG power spectrum. The CFF threshold refers to the point at which a rapidly flickering light is perceived as steady, which indicates the speed of visual information processing in the brain. It offers insight into neural and visual processing efficiency as well as cognitive functions such as attentional control and overall responsiveness to changing environments. The CFF is commonly measured in psychophysics via behavioral paradigms, in which a participant reports when the flicker appears to fuse. Since Joseph could not tell us about this boundary, however, the threshold was instead marked by a drop in EEG signal amplitude as flicker perception diminished. These experiments, performed using LED light at different brightness levels, were then repeated with human participants for comparison, to determine whether octopuses are better adapted to low-light conditions due to their underwater habitat.
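To make the steady-state logic more concrete, here is a minimal sketch of how such a threshold might be estimated: compute the spectral power at each flicker rate and find where the entrained response drops toward the noise floor. The data are synthetic, and the sampling rate, response decay, and threshold fraction are assumptions for illustration, not the lab’s actual analysis pipeline.

```python
import numpy as np

FS = 1000  # sampling rate in Hz (assumed for illustration)

def power_at(signal, freq, fs=FS):
    """Spectral power of `signal` at `freq`, from the normalized FFT amplitude."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))] ** 2

# Synthetic "EEG": an entrained response that weakens at higher flicker rates,
# plus noise. In a real recording this would come from the ring electrodes.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
flicker_rates = [10, 20, 30, 40, 50, 60]  # Hz
responses = {
    f: np.exp(-f / 30) * np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(t.size)
    for f in flicker_rates
}

# Proxy for the fusion threshold: the lowest flicker rate whose power at the
# stimulation frequency falls below a fraction of the strongest response.
powers = {f: power_at(sig, f) for f, sig in responses.items()}
threshold = 0.1 * max(powers.values())
cff_proxy = min((f for f, p in powers.items() if p < threshold), default=None)
print(powers, "estimated fusion threshold near:", cff_proxy, "Hz")
```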

The octopuses demonstrated higher critical flicker fusion frequency thresholds compared to humans, likely due to their evolutionary history and environmental conditions, where quick visual responses are necessary to spot prey or avoid predators.

It’s extraordinary that octopuses, despite lacking the rod and cone photoreceptor cells found in vertebrate eyes, have adapted to match or even exceed the critical flicker fusion thresholds of creatures like humans. These results are an exciting glimpse into the diversity and complexity of how other species experience their own vastly different worlds in nature.

Reference: “When the Field Is Online” Newsletter

When the Field Is Online is a monthly newsletter by Janet Salmons, PhD, a qualitative methodologist and the author of 12 books on academic writing and research collaboration. Building on this extensive experience, Janet introduces thoughtful and creative strategies for overcoming methodological challenges and for connecting meaningfully with others in an increasingly digital research landscape.

Recent issues have covered virtual focus groups, ethics in remote research, reflexivity in data analysis, and tips for recruiting authentic voices. Each issue is unique, combining original essays, thoughtful analysis, instructional videos, open-access resources, and, as a bonus, her own hand-drawn illustrations. The blog also links to other relevant content and reflections on research obstacles, from designing effective studies to asking stronger research questions. For anyone interested in expanding their qualitative research skills, strengthening digital communication, or discovering new ways to connect virtually, When the Field Is Online offers thorough guidance and novel ideas to inspire your work.

Most of this newsletter is freely available, but to access the full content, please sign up here to become a paid subscriber!

Spotlight: Dein Feed, deine Wahl

Whether it’s unwinding after a long day or killing time in line for coffee, social media has become a major source of entertainment and connection. The immediate satisfaction it brings can boost our mood, but it’s not just entertainment we’re absorbing. Amidst the lighthearted content is exposure to a wide range of information, much of which we may not fully process. Research suggests a link between high social media usage and lower self-control, which may also perpetuate the processes of dissociation that many refer to as mindless scrolling. As we passively absorb media, from smiling koalas to political protests, how do we actually make sense of the more consequential information, and how does it shape our opinions?

Lion Wedel and Jakob Ohme consider these influences in their project Dein Feed, deine Wahl (Your Feed, Your Choice/Election), a collaboration with Bayerischer Rundfunk, Stuttgarter Zeitung, and the University of Zürich. In this ongoing initiative, TikTok users are encouraged to donate their data anonymously and in turn receive a direct analysis of the political content and parties appearing in their feeds.

Jakob and Lion shared their perspectives in an interview, offering insight backed by their expertise in political communication and digital media research.

To begin, Lion and Jakob discuss the broader role of media in shaping one’s ideological framework. Specifically, which types of content shape opinions the most, and how do they attract attention?

To this, Lion responds,

“The more time you spend preparing a video, the less popular it gets. Like quick and dirty typically works better.”

This was in reference to the podcast episode Was tun? Die Strategien hinter dem Comeback der Linkspartei (1/3): Wie Heidi Reichinnek die AfD auf TikTok überholte, in which Felix Schulz, social media manager in Heidi Reichinnek’s office, explores the strategic use of TikTok to engage young voters. From this, Lion highlights the processes through which opinions are likely to form online. Citing the manager’s technique, he notes that the videos did so well because they managed to make a compelling statement in the first one to three seconds of the clip. Even in regard to political news, Lion explains, “It does not matter if it’s true or false, or if it’s catchy or misleading, you just have to get that attention grabbing moment.” He concludes by suggesting that content that keeps its audience engaged to the end of a video is more likely to shape opinions.

But what are the implications for the broader democratic landscape in Germany if people form political opinions based on whatever content is most stimulating in their feed, regardless of its credibility? Can traditional media repertoires play a role in fact-checking political discourse?

Statistics from the 2025 Weizenbaum Panel report on political participation in Germany last year suggest otherwise. Since 2021, the use of traditional media sources such as newspapers and radio for news consumption has declined, while internet usage has remained stable. Moreover, over 60% of voters up to the age of 30 get their political information from social networks (Schöffel et al., 2025), suggesting that platforms such as TikTok are inevitably shaping how many people engage in political debate.

Given recent civic tensions in Germany, including shifts in numerous elections and widespread protests, a similar polarization appears in the spread of information on social media. Since many people are more likely to express their opinions than to change them, and online discussions often tilt toward one direction of opinion (Xiong & Liu, 2014), it becomes critical to raise awareness of how these influences affect not only our personal perspectives but also the broader political environment.

According to Jakob, algorithmic selection processes “affect political landscapes to an extent that we probably ten years ago did not think was possible.” He outlines how algorithmic platforms contribute to political shifts, attributing this to the often passive behavior of individuals online, whose feeds reflect the content they prefer. As previous research has shown, social media users may feel a reduced sense of self-awareness and volition (Baughan et al., 2022), often consuming the content that is most interesting to them and that aligns with their worldview. Jakob concurs, stating, “We can all function like this from time to time.”

He continues,

“There is content that works better with the algorithm and especially on TikTok. We can see that as soon as something works, it will capture a lot of attention, but as soon as something doesn’t work, it will completely drown. There are certain kinds of political content that work better and that is the emotional appeal, opinionated, negative and extreme information.”

Jakob asserts that certain political parties, especially those on the right wing, are more adept at taking advantage of this dynamic, as their content tends to perform better online. As a result, algorithms may appear to favor them, not because of any inherent political bias, but because these parties successfully leverage a system that operates on audience engagement patterns to maximize their impact.

Consequently, the Dein Feed, deine Wahl initiative establishes an objective foundation for identifying usage patterns within TikTok’s algorithmically curated feeds that ultimately contribute to the broader political climate. Jakob emphasizes the significance of these usage trajectories, aiming to explore their association with voting decisions and to provide a descriptive overview of the extent to which individuals encounter political party-related content. In addition, he expresses the ambition to reverse-engineer the algorithms in order to mitigate their effects. By examining how usage patterns influence algorithmic decisions and lead individuals to encounter more of the same content over time, researchers are better equipped to address these dynamics. This approach is especially valuable given that the analysis of video content and user interaction data has remained largely unexplored due to its methodological complexity.
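To illustrate the kind of descriptive overview such donated data could support, the sketch below counts party-related videos in a donated watch history. The JSON structure, field names, and keyword lists are hypothetical placeholders; the project’s actual data format and coding scheme are certainly more elaborate.

```python
import json
from collections import Counter

# Hypothetical party keywords; the project's actual coding scheme is more elaborate.
PARTY_KEYWORDS = {
    "CDU": ["cdu", "merz"],
    "SPD": ["spd", "scholz"],
    "Grüne": ["gruene", "grüne"],
    "AfD": ["afd"],
    "Linke": ["linke", "reichinnek"],
}

def party_counts(watch_history):
    """Count videos whose description mentions a party keyword.

    `watch_history` is a list of dicts with a "description" field; this
    structure is invented for illustration, not TikTok's export format.
    """
    counts = Counter()
    for video in watch_history:
        text = video.get("description", "").lower()
        for party, keywords in PARTY_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                counts[party] += 1
    return counts

# Toy example of a donated feed
history = json.loads(
    '[{"description": "Heidi Reichinnek im Bundestag"},'
    ' {"description": "Katzenvideo des Tages"},'
    ' {"description": "AfD Wahlkampf Clip"}]'
)
print(party_counts(history))  # Counter({'Linke': 1, 'AfD': 1})
```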

Overall, this project paves the way for regulating social media platforms in the long term, with the hope that the initiative remains available for future political debates and elections. To move forward, however, Lion identifies the collection of data donations as one of the first priorities: the quality of the analysis depends heavily on the number of contributions volunteered to the data donation lab.

Therefore, if you’d like to support the project or are curious about the political makeup of your own feed, please donate your usage data here!

New Publication: Ethics of Data Work

Machine learning is becoming increasingly central to academic research, yet it often depends on data workers laboring under exploitative conditions, whose contributions are largely overlooked in ethical guidelines and unacknowledged within the academic community.

Last year, the Methods Lab outlined the aims of a project to target this issue in a short blog post. We’re now excited to announce the resulting published discussion paper: “Ethics of Data Work: Principles for Academic Data Work Requesters.”

This paper builds on the insights of an interdisciplinary group of scholars, practitioners, and data workers, gathered in part through expert workshops held at the Weizenbaum Institute in 2024. It sets out practical principles for engaging more ethically with platform-based data workers, starting with a definition of data work that addresses important gaps in current ethical guidelines. On this basis, the paper offers concrete recommendations that respond to the most pressing challenges faced by these contributors. As the rapid development of AI continues to rely on the insight and labor of real people, it is crucial to reflect on how research is conducted so that those workers receive proper acknowledgment for their role. This discussion paper calls for a commitment to fair treatment, transparency, and meaningful support, making ethical data work a consistent part of the machine learning research process.

If you would like to learn more about the experiences and working conditions of these data workers, check out our blog post featuring creative projects from the Data Workers’ Inquiry!

New Publication: Extracting smartphone use from Android event log data

Back in October 2024, the Methods Lab shared a preprint of a study by Methods Lab member and data scientist Roland Toth and former research fellow Douglas Parry, exploring how to isolate meaningful measures of smartphone use from Android event log data. We’re now pleased to announce that this work has been peer-reviewed and published in the journal Computational Communication Research.

The article, titled “Extracting Meaningful Measures of Smartphone Usage from Android Event Log Data: A Methodological Primer,” outlines a practical and reproducible step-by-step guide for deriving objective indicators of smartphone use from raw mobile data, offering valuable insights for research in the social sciences and related disciplines. It details the extraction of key usage metrics through written explanations, visual aids, and pseudo-code. The paper is a vital resource for researchers seeking to understand patterns of mobile phone engagement and their implications in today’s rapidly evolving digital environment.
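To give a flavor of the kind of extraction the primer walks through, here is a minimal sketch that pairs screen-on and screen-off events into usage sessions. The event names and table format are simplified placeholders; the published article’s pseudo-code describes the actual procedure.

```python
import pandas as pd

# Simplified event log: timestamped screen events. Real Android event logs
# are richer; this format is a placeholder for illustration.
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-05-01 07:12:00", "2024-05-01 07:15:00",
        "2024-05-01 12:30:00", "2024-05-01 12:40:00",
    ]),
    "event": ["SCREEN_ON", "SCREEN_OFF", "SCREEN_ON", "SCREEN_OFF"],
}).sort_values("timestamp")

# Pair each SCREEN_ON with the next SCREEN_OFF to form a usage session.
sessions = []
start = None
for _, row in events.iterrows():
    if row["event"] == "SCREEN_ON":
        start = row["timestamp"]
    elif row["event"] == "SCREEN_OFF" and start is not None:
        sessions.append({
            "start": start,
            "end": row["timestamp"],
            "duration_s": (row["timestamp"] - start).total_seconds(),
        })
        start = None  # discard unmatched events

sessions = pd.DataFrame(sessions)
print(sessions)  # two sessions: 180 s and 600 s
```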

Workshop: Introduction to MAXQDA

Join us for the workshop Introduction to MAXQDA, designed for researchers, students, and professionals interested in qualitative data analysis. On May 28th, 2025, at the Weizenbaum Institute, certified MAXQDA trainer Dr. phil. Aikokul Maksutova will lead a basic yet comprehensive workshop introducing the software’s core features, aligned with the key stages of digital qualitative research.

This event will offer guidance on MAXQDA’s essential tools for documenting, coding, and analyzing qualitative data. Participants will become familiar with navigating the Code System and a range of additional features, such as functions for exporting data, linking memos, and generating visualizations. Each segment will include hands-on activities using various datasets, enabling participants to confidently apply the skills they’ve learned on their own.

To conclude, special guest and representative of MAXQDA, Ms. Tamara Pataki, will inform participants of the software’s latest innovations and host an open Q&A session.

To learn more, please visit our program page. We hope to see you there!