Welcome to the website of EXIST 2026, the sixth edition of the sEXism Identification in Social neTworks task at CLEF 2026.
EXIST is a series of scientific events and shared tasks on sexism identification in social networks. EXIST aims to foster the automatic detection of sexism in a broad sense, from explicit misogyny to other subtle expressions that involve implicit sexist behaviours (EXIST 2021, EXIST 2022, EXIST 2023, EXIST 2024, EXIST 2025). The sixth edition of the EXIST shared task will be held as a Lab in CLEF 2026, on September 21-24, 2026, at Friedrich-Schiller-Universität Jena, Germany.
Sexism remains a pervasive form of social discrimination, reflected across multiple dimensions such as sexual violence, economic inequality, and online harassment. Recent data show that women represent around 85%-90% of sexual violence victims in the USA, Europe, Spain, and Australia. The gender pay gap continues to disadvantage women, who earn on average between 8.7% and 21.8% less than men across these same regions. In the digital sphere, women also experience disproportionate levels of harassment and discrimination, with reported rates ranging from 16% in the USA to 41% in Australia, compared to 5-26% for men.

In this context, the development of AI systems capable of detecting sexism on social media is a particularly relevant challenge. The perception of what constitutes sexist behavior or expression involves a certain degree of subjectivity, as it may be influenced by cultural norms, personal experiences, and emotional reactions that cannot be fully captured through linguistic data alone. Despite significant advances in computational modeling, the mechanisms underlying human decision-making remain only partially understood. Empirical evidence suggests that human judgments are shaped not only by conscious factors (such as socio-demographic background, prior experiences, and explicit beliefs) but also by unconscious cues, including emotions, physiological states, and sensory responses that subtly guide perception and evaluation. Current AI models, largely trained on textual or visual data, lack access to these deeper layers of cognitive and affective information, which limits their ability to replicate or interpret complex social phenomena. To bridge this gap, it is essential to explore new training paradigms that integrate human-centered and sensor-based data, providing richer insights into how individuals consciously and unconsciously perceive sexist content.
In EXIST 2026, we take a significant step forward by integrating the principles of Human-Centered AI (HCAI) into the development of automatic tools for detecting sexism online. Recognizing that no single interpretation can fully capture the diversity of human perception, we go beyond traditional annotation paradigms by combining Learning With Disagreement (LeWiDi) with sensor-based data (EEG, heart rate, and eye-tracking signals) collected from subjects exposed to potentially sexist content, with the aim of capturing unconscious responses to sexism. This dual approach represents a breakthrough in dataset creation for sensitive and value-laden tasks: for the first time, datasets will include not only divergent judgments from annotators, but also the embodied traces of how this content affects those who view it. This richer, multidimensional annotation process will enable the development of more inclusive, equitable, and socially aware AI systems for detecting sexism in complex multimedia formats like memes and short videos, where ambiguity and affect play a critical role.
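Under the LeWiDi paradigm, a single gold label is replaced by the full distribution of annotator judgments, and systems are rewarded for reproducing that distribution rather than a majority vote. The following is a minimal sketch of the idea (the six-annotator setup and the cross-entropy scoring are illustrative assumptions, not the official EXIST evaluation protocol):

```python
import math
from collections import Counter

def soft_label(annotations):
    """Turn a list of annotator judgments into a probability
    distribution over labels (the 'soft label' used in LeWiDi)."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def cross_entropy(target, predicted):
    """Score a model's predicted distribution against the annotators'
    soft label; lower is better."""
    return -sum(p * math.log(predicted.get(label, 1e-12))
                for label, p in target.items() if p > 0)

# Hypothetical example: six annotators disagree on whether a meme is sexist.
votes = ["YES", "YES", "YES", "YES", "NO", "NO"]
target = soft_label(votes)  # roughly {"YES": 0.667, "NO": 0.333}
```

Under this kind of scoring, a system that predicts {"YES": 0.7, "NO": 0.3} for the meme above scores better than one that confidently predicts a hard {"YES": 1.0}, which is precisely the point of learning with disagreement.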
In past editions, teams from over 50 countries submitted more than 1,700 runs, achieving remarkable outcomes, especially in the sexism detection task. However, there is still room for improvement, particularly when the problem is addressed under the LeWiDi paradigm in a multimedia context.
Building upon the EXIST 2025 dataset, this edition focuses exclusively on multimedia formats, comprising six experimental subtasks applied to images (memes) and videos (TikToks). Participants are challenged to address three main objectives: sexism identification (x.1), source intention detection (x.2), and sexism categorization (x.3).
A groundbreaking feature of this lab is the integration of Human-Centered AI principles. In the new experimental framework introduced in EXIST 2026, selected subjects were exposed to the multimedia content while their physiological and behavioral responses were continuously recorded. These multimodal signals (including eye tracking, heart rate, and EEG) enrich the traditional annotation labels, providing a deeper window into how users unconsciously process and react to sexist content in English and Spanish.
See the next sections for details and examples on each subtask (numbering is consistent with EXIST 2025).
This is a binary classification subtask consisting of determining whether a meme is sexist (i.e., it describes a sexist situation or criticizes a sexist behaviour), and classifying it into two categories: YES and NO. The following figures are some examples of both types of memes, respectively.
Once a meme has been classified as sexist, the second subtask aims to categorize it according to the intention of the author, which provides insight into the role played by social networks in the emission and dissemination of sexist messages. Due to the characteristics of memes, systems should classify them using only the DIRECT and JUDGEMENTAL labels.
The following figures are some examples of them, respectively.
Many facets of a woman’s life may be the focus of sexist attitudes, including domestic and parenting roles, career opportunities, sexual image, and life expectations, to name a few. Automatically detecting which of these facets are most frequently attacked in social networks will facilitate the development of policies to fight against sexism. Accordingly, each sexist meme must be categorized into one or more of the following categories:
The following figures are some examples of categorized memes.
(a) Stereotyping
(e) Ideological
(c) Objectification
(d) Misogyny
(b) Sexual violence
This subtask is the same as subtask 2.1, but takes videos as input. The following are some examples of videos classified as YES or NO.
@cayleecresta #stitch with @goodbrobadbro easy should never be the word used to describe womanhood #fyp #foryou #foryoupage #womenempowerment #women #feminism ♬ original sound - Caylee Cresta
@dailyhealth2 #haha #kidnapped #bigredswifesarmy #oregon #victimcard #victimblaming #bodyguard #loved #smile #lagrandeoregon ♬ original sound - รⒶ︎я︎Ⓐ︎𝔥 ģⒶ︎เ︎ᒪ︎🫦
This subtask replicates subtask 2.2 for memes, but takes videos as input. The following examples show videos representing each category.
@yourgirlhaylie #duet with @michaelkoz #sexist #foryou #FitCheck #throwhimaway ♬ original sound - Mike Koz
@grandtheftangel remember it clearly #malegaze #feminism #objectification #womenempowerment #relatable ♬ original sound - 🖍
This subtask aims to classify sexist videos according to the categorization provided for subtask 2.3: (i) IDEOLOGICAL AND INEQUALITY, (ii) STEREOTYPING AND DOMINANCE, (iii) OBJECTIFICATION, (iv) SEXUAL VIOLENCE and (v) MISOGYNY AND NON-SEXUAL VIOLENCE. The following figures are some examples of categorized videos.
@streaminfreedom I’m an idiot! @streaminfreedom #truestory #menvswomen #relationshipcomedy ♬ original sound - leanne_lou
@laanaintw #ViolenciaMachista #misoginia #patriarcado #91ColoursPullandBear #parati #hazmeviral ♬ sonido original - LaAnain Tw
@zo3tv #duet with @lenatheplug #noJumper #dunked #in #theRight #goal #she #is #beautiful & #babygirl #isTo #swimsuit #never #gotTight #bodySnatched #congrats ♬ Aesthetic Girl - Yusei
@alt_acc393 IT'S A JOKEEEEE. #fyp #foryoupage #foryou ♬ original sound - alt acc
@janetmild #niunamenos #noeslaropa #ylaculpanoeramia #violenciadegenero #violenciamachista ♬ sonido original - Yami Safdie
If you want to participate in the EXIST 2026 shared task at CLEF 2026, please register for the lab at the CLEF 2026 Labs Registration site. Once you have filled out the form, you will receive an email with information on how to join the EXIST 2026 Discord Forum, where EXIST-Datasets, EXIST-Communications, EXIST-Questions/Answers, and EXIST-Guidelines will be made available to participants. This is a manual process, so it might take some time; please don’t worry :-).
Participants will be required to submit their runs and will have the option to provide a technical report that should include a brief description of their approach, focusing on the adopted algorithms, models, and resources, a summary of their experiments, and an analysis of the obtained results. Although we recommend participating in all subtasks and in both languages, participants are allowed to take part in just one subtask (e.g., subtask 2.1) and in one language (e.g., English).
Technical reports will be published in CLEF 2026 Proceedings at CEUR-WS.org.
Note: All deadlines are 11:59PM UTC-12:00 (“anywhere on Earth”).
Arriving shortly!
For any questions concerning the shared task, please write to Jorge Carrillo-de-Albornoz.