Why EXIST?

Welcome to the website of EXIST 2026, the sixth edition of the sEXism Identification in Social neTworks task at CLEF 2026.

EXIST is a series of scientific events and shared tasks on sexism identification in social networks. EXIST aims to foster the automatic detection of sexism in a broad sense, from explicit misogyny to more subtle expressions that involve implicit sexist behaviours (EXIST 2021, EXIST 2022, EXIST 2023, EXIST 2024, EXIST 2025). The sixth edition of the EXIST shared task will be held as a Lab at CLEF 2026, on September 21-24, 2026, at Friedrich-Schiller-Universität Jena, Germany.

Sexism remains a pervasive form of social discrimination, reflected across multiple dimensions such as sexual violence, economic inequality, and online harassment. Recent data show that women represent around 85%-90% of sexual violence victims in the USA, Europe, Spain, and Australia. The gender pay gap continues to disadvantage women, who earn on average between 8.7% and 21.8% less than men across these same regions. In the digital sphere, women also experience disproportionate levels of harassment and discrimination, with reported rates ranging from 16% in the USA to 41% in Australia, compared to 5-26% for men.

In this context, the development of AI systems capable of detecting sexism on social media presents a particularly relevant challenge. The perception of what constitutes sexist behavior or expression involves a certain degree of subjectivity, as it may be influenced by cultural norms, personal experiences, and emotional reactions that cannot be fully captured through linguistic data alone. Despite significant advances in computational modeling, the mechanisms underlying human decision-making remain only partially understood. Empirical evidence suggests that human judgments are shaped not only by conscious factors, such as socio-demographic background, prior experiences, and explicit beliefs, but also by unconscious cues, including emotions, physiological states, and sensory responses that subtly guide perception and evaluation. Current AI models, largely trained on textual or visual data, lack access to these deeper layers of cognitive and affective information, limiting their ability to replicate or interpret complex social phenomena. To bridge this gap, it becomes essential to explore new training paradigms that integrate human-centered and sensor-based data to provide richer insights into how individuals consciously and unconsciously perceive sexist content.

In EXIST 2026, we take a significant step forward by integrating the principles of Human-Centered AI (HCAI) into the development of automatic tools for detecting sexism online. Recognizing that no single interpretation can fully capture the diversity of human perception, we go beyond traditional annotation paradigms by combining Learning With Disagreement (LeWiDi) with sensor-based data (EEG, heart rate, and eye-tracking signals) collected from subjects exposed to potentially sexist content, with the aim of capturing unconscious responses to sexism. This dual approach represents a breakthrough in dataset creation for sensitive and value-laden tasks: for the first time, datasets will include not only divergent judgments from annotators, but also the embodied traces of how this content affects those exposed to it. This richer, multidimensional annotation process will enable the development of more inclusive, equitable, and socially aware AI systems for detecting sexism in complex multimedia formats like memes and short videos, where ambiguity and affect play a critical role.
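Under the LeWiDi paradigm, individual annotator judgments are preserved rather than collapsed into a single gold label; a common way to work with them is to turn the vote counts into a soft label, i.e., a probability distribution over classes. The sketch below illustrates the idea for a YES/NO item; the function name and data layout are illustrative, not the official EXIST data format.

```python
from collections import Counter

def soft_label(annotations, classes=("YES", "NO")):
    """Turn a list of per-annotator labels into a probability
    distribution over classes (a 'soft' label)."""
    counts = Counter(annotations)
    total = sum(counts[c] for c in classes)
    return {c: counts[c] / total for c in classes}

# Six annotators disagree on whether a meme is sexist:
votes = ["YES", "YES", "YES", "YES", "NO", "NO"]
print(soft_label(votes))  # YES gets 4/6, NO gets 2/6
```

Systems trained under LeWiDi can then be evaluated against these distributions (e.g., with cross-entropy) instead of a single hard label, rewarding models that capture the degree of human disagreement.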

In past editions, teams from over 50 countries submitted more than 1,700 runs, achieving remarkable outcomes, especially in the sexism detection task. However, there is still room for improvement, especially when the problem is addressed under the LeWiDi paradigm in a multimedia context.


Tasks

Building upon the EXIST 2025 dataset, this edition focuses exclusively on multimedia formats, comprising six experimental subtasks applied to images (memes) and videos (TikToks). Participants are challenged to address three main objectives: sexism identification (x.1), source intention detection (x.2), and sexism categorization (x.3).

A groundbreaking feature of this lab is the integration of Human-Centered AI principles. In the new experimental framework introduced in EXIST 2026, selected subjects were exposed to the multimedia content while their physiological and behavioral responses were continuously recorded. These multimodal signals (including eye tracking, heart rate, and EEG) enrich the traditional annotation labels, providing a deeper window into how users unconsciously process and react to sexist content in English and Spanish.

See the next sections for details and examples on each subtask (numbering is consistent with EXIST 2025).

Subtask 2.1: Sexism Identification in Memes

This is a binary classification subtask consisting of determining whether a meme is sexist (i.e., it describes a sexist situation or criticizes a sexist behaviour), and classifying it into two categories: YES and NO. The following figures are some examples of both types of memes, respectively.

Sexist
(a) YES
Not sexist
(b) NO

Subtask 2.2: Source Intention in Memes

Once a meme has been classified as sexist, the second subtask aims to categorize it according to the intention of the author, which provides insight into the role played by social networks in the emission and dissemination of sexist messages. Due to the characteristics of memes, systems should only classify memes with the DIRECT or JUDGEMENTAL labels.

  • DIRECT: the intention was to create a message that is sexist in itself or incites others to be sexist.
  • JUDGEMENTAL: the intention was to judge, since the meme describes sexist situations or behaviours with the aim of condemning them.

The following figures are some examples of them, respectively.

Direct
(a) Direct
Judgemental
(b) Judgemental

Subtask 2.3: Sexism Categorization in Memes

Many facets of a woman’s life may be the focus of sexist attitudes, including domestic and parenting roles, career opportunities, sexual image, and life expectations, to name a few. Automatically detecting which of these facets of women's lives are most frequently attacked in social networks will facilitate the development of policies to fight against sexism. Accordingly, each sexist meme must be categorized into one or more of the following categories:

  • IDEOLOGICAL AND INEQUALITY: The text discredits the feminist movement, rejects inequality between men and women, or presents men as victims of gender-based oppression.
  • STEREOTYPING AND DOMINANCE: The text expresses false ideas about women that suggest they are more suitable to fulfill certain roles (mother, wife, family caregiver, faithful, tender, loving, submissive, etc.), or inappropriate for certain tasks (driving, hard work, etc.), or claims that men are somehow superior to women.
  • OBJECTIFICATION: The text presents women as objects apart from their dignity and personal aspects, or assumes or describes certain physical qualities that women must have in order to fulfill traditional gender roles (compliance with beauty standards, hypersexualization of female attributes, women’s bodies at the disposal of men, etc.).
  • SEXUAL VIOLENCE: Sexual suggestions, requests for sexual favors or harassment of a sexual nature (rape or sexual assault) are made.
  • MISOGYNY AND NON-SEXUAL VIOLENCE: The text expresses hatred and violence towards women.
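Because a single meme can belong to several of these categories at once, systems typically treat this subtask as multi-label classification. One common representation is a binary indicator vector over the five categories, as sketched below; the category order and helper function are illustrative, not a prescribed submission format.

```python
# The five sexism categories of subtask 2.3 (order chosen for illustration).
CATEGORIES = [
    "IDEOLOGICAL AND INEQUALITY",
    "STEREOTYPING AND DOMINANCE",
    "OBJECTIFICATION",
    "SEXUAL VIOLENCE",
    "MISOGYNY AND NON-SEXUAL VIOLENCE",
]

def to_indicator(labels):
    """Encode a set of predicted categories as a 0/1 vector
    aligned with CATEGORIES."""
    return [1 if c in labels else 0 for c in CATEGORIES]

# A meme labelled with two categories at once:
vec = to_indicator({"OBJECTIFICATION", "SEXUAL VIOLENCE"})
print(vec)  # [0, 0, 1, 1, 0]
```

Indicator vectors like this plug directly into standard multi-label losses and metrics (e.g., per-category F1), which is why this encoding is a common starting point for the categorization subtasks.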

The following figures are some examples of categorized memes.

(a) Stereotyping

(b) Sexual violence

(c) Objectification

(d) Misogyny

(e) Ideological

Subtask 3.1: Sexism Identification in Videos

This subtask is the same as subtask 2.1, but applied to videos. The following figures are some examples of videos classified as YES or NO.

@cayleecresta #stitch with @goodbrobadbro easy should never be the word used to describe womanhood #fyp #foryou #foryoupage #womenempowerment #women #feminism ♬ original sound - Caylee Cresta
(a) YES
@dailyhealth2 #haha #kidnapped #bigredswifesarmy #oregon #victimcard #victimblaming #bodyguard #loved #smile #lagrandeoregon ♬ original sound - รⒶ︎я︎Ⓐ︎𝔥 ģⒶ︎เ︎ᒪ︎🫦
(b) NO

Subtask 3.2: Source Intention in Videos

This subtask replicates subtask 2.2 for memes, but uses videos as the source. The following examples show videos representing each category.

@yourgirlhaylie #duet with @michaelkoz #sexist #foryou #FitCheck #throwhimaway ♬ original sound - Mike Koz
(a) Direct
@grandtheftangel remember it clearly #malegaze #feminism #objectification #womenempowerment #relatable ♬ original sound - 🖍
(b) Judgemental

Subtask 3.3: Sexism Categorization in Videos

This subtask aims to classify sexist videos according to the categorization provided for subtask 2.3: (i) IDEOLOGICAL AND INEQUALITY, (ii) STEREOTYPING AND DOMINANCE, (iii) OBJECTIFICATION, (iv) SEXUAL VIOLENCE and (v) MISOGYNY AND NON-SEXUAL VIOLENCE. The following figures are some examples of categorized videos.

@streaminfreedom I’m an idiot! @streaminfreedom #truestory #menvswomen #relationshipcomedy ♬ original sound - leanne_lou
(a) Stereotyping
@laanaintw #ViolenciaMachista #misoginia #patriarcado #91ColoursPullandBear #parati #hazmeviral ♬ sonido original - LaAnain Tw
(b) Ideological and Inequality
@zo3tv #duet with @lenatheplug #noJumper #dunked #in #theRight #goal #she #is #beautiful & #babygirl #isTo #swimsuit #never #gotTight #bodySnatched #congrats ♬ Aesthetic Girl - Yusei
(c) Objectification
@alt_acc393 IT'S A JOKEEEEE. #fyp #foryoupage #foryou ♬ original sound - alt acc
(d) Misogyny
@janetmild #niunamenos #noeslaropa #ylaculpanoeramia #violenciadegenero #violenciamachista ♬ sonido original - Yami Safdie
(e) Sexual violence

How to participate

If you want to participate in the EXIST 2026 shared task at CLEF 2026, please register for the lab at the CLEF 2026 Labs Registration site. Once you have filled out the form, you will receive an email with information on how to join the EXIST 2026 Discord Forum, where EXIST-Datasets, EXIST-Communications, EXIST-Questions/Answers, and EXIST-Guidelines will be made available to participants. This is a manual process, so it might take some time. Please don’t worry :-).

Participants will be required to submit their runs and will have the possibility to provide a technical report that should include a brief description of their approach, focusing on the adopted algorithms, models and resources, a summary of their experiments, and an analysis of the obtained results. Although we recommend participating in all subtasks and in both languages, participants are allowed to take part in just one of them (e.g., subtask 2.1) and in one language (e.g., English).

Publications

Technical reports will be published in CLEF 2026 Proceedings at CEUR-WS.org.

Important dates

  • 17 November 2025: Registration opens.
  • 26 February 2026: Training set available.
  • 9 April 2026: Test set available.
  • 23 April 2026: Registration closes.
  • 7 May 2026: Runs submission due to organizers.
  • 28 May 2026: Results notification to participants.
  • 4 June 2026: Submission of Working Notes by participants.
  • 30 June 2026: Notification of acceptance (peer reviews).
  • 6 July 2026: Camera-ready participant papers due to organizers.
  • 21-24 September 2026: EXIST 2026 at CLEF Conference.

Note: All deadlines are 11:59PM UTC-12:00 (“anywhere on Earth”).

Dataset

Arriving shortly!

Organizers

Avatar

Damiano Spina

RMIT University

Senior Lecturer

Avatar

Iván Arcos

Universitat Politècnica de València

Researcher in Computational Linguistics

Avatar

Jorge Carrillo-de-Albornoz

UNED

RMIT University

Associate Professor

Avatar

Laura Plaza

UNED

RMIT University

Associate Professor

Avatar

Maria Aloy Mayo

UPV

Researcher in Computational Linguistics

Avatar

Paolo Rosso

Universitat Politècnica de València

Full Professor

Sponsors

Avatar

ANNOTATE Project

(PID2024-156022OB-C31, PID2024-156022OB-C32)

Spanish Ministry of Science, Innovation and Universities funded by MICIU/AEI/10.13039/501100011033 and the European Social Fund Plus (ESF+)

Avatar

ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) (CE200100005)

RMIT University

Avatar

Pattern Recognition and Human Language Technologies (PRHLT) Research Center

Universitat Politècnica de València

Contact

For any questions concerning the shared task, please write to Jorge Carrillo-de-Albornoz.

Related Work

Overviews of previous LeWiDi EXIST editions:

Extended Overviews of previous LeWiDi EXIST editions:

Working Notes of previous LeWiDi EXIST editions:

Video and Meme related work

Sensor Data and NLP related work