Why EXIST?

Welcome to the website of EXIST 2025, the fifth edition of the sEXism Identification in Social neTworks task at CLEF 2025.

EXIST is a series of scientific events and shared tasks on sexism identification in social networks. EXIST aims to capture sexism in a broad sense, from explicit misogyny to other subtle expressions that involve implicit sexist behaviours (EXIST 2021, EXIST 2022, EXIST 2023, EXIST 2024). The fifth edition of the EXIST shared task will be held as a Lab at CLEF 2025, on September 9-12, 2025, at UNED, Madrid, Spain.

Social networks are among the main platforms for social complaint and activism. Movements like #MeToo, #8M or #TimesUp have spread rapidly. Under the umbrella of social networks, many women all around the world have reported abuse, discrimination and other sexist experiences suffered in real life. Social networks are also contributing to the transmission of sexism and other disrespectful and hateful behaviours. In this context, automatic tools may not only help to detect and alert against sexist behaviours and discourses, but also to estimate how often sexist and abusive situations occur on social media platforms, which forms of sexism are most frequent, and how sexism is expressed in these media. This Lab will contribute to developing applications to detect sexism.

In 2024 the EXIST campaign included multimedia content in the form of memes, advancing research on more robust techniques to identify sexism in social networks. Following this line, this year the challenge focuses on TikTok videos, so that the dataset covers the three most important channels through which sexism spreads: text, images and videos. Sexism on TikTok is also a growing concern, as the platform’s algorithm often amplifies content that perpetuates gender stereotypes and internalized misogyny. Consequently, it is essential to develop automated multimodal tools capable of detecting sexism in text, images, and videos, to raise alarms or automatically remove such content from social networks. This lab will contribute to the creation of applications that identify sexist content in social media across all three formats.

Similar to the approach in the 2023 and 2024 editions, this edition will also embrace the Learning With Disagreement (LeWiDi) paradigm for both the development of the dataset and the evaluation of the systems. The LeWiDi paradigm does not rely on a single “correct” label for each example. Instead, models are trained to handle and learn from conflicting or diverse annotations. This enables systems to take into account the perspectives, biases and interpretations of different annotators, resulting in a fairer learning process.
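As an illustration of the LeWiDi idea, each example can be paired with the full distribution of annotator labels rather than a single majority vote, and a system can be scored on how well it matches that distribution. A minimal sketch in Python (the label names and annotator counts below are invented for illustration, not taken from the actual EXIST dataset):

```python
import math
from collections import Counter

def soft_label(annotations, classes=("YES", "NO")):
    """Probability distribution over classes built from raw annotator labels."""
    counts = Counter(annotations)
    return [counts[c] / len(annotations) for c in classes]

def soft_cross_entropy(pred, gold, eps=1e-12):
    """Cross-entropy of a predicted distribution against the annotator distribution."""
    return -sum(g * math.log(p + eps) for g, p in zip(gold, pred))

# Six hypothetical annotators disagree on whether a tweet is sexist:
gold = soft_label(["YES", "YES", "YES", "YES", "NO", "NO"])  # [0.667, 0.333]

# A system predicting the full distribution is rewarded for matching the
# disagreement itself, not just the majority class:
loss = soft_cross_entropy([0.7, 0.3], gold)
```

Under a hard-label setup the two dissenting annotators would simply be discarded; here their view survives in the target distribution, which is what makes the evaluation sensitive to disagreement.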

In previous editions, 223 teams from more than 50 countries submitted runs, achieving impressive results, especially in the sexism detection task. However, there is still room for improvement, especially when the problem is addressed under the LeWiDi paradigm in a multimedia context.


Tasks

Participants will be asked to identify and characterize sexism in social networks from different sources. This year the lab comprises nine subtasks in two languages, English and Spanish: the same three tasks (sexism identification, source intention detection, and sexism categorization) applied to three different types of data: text (tweets), images (memes) and videos (TikToks). This multimedia approach will help identify trends and patterns in sexism across media formats and user interactions, contributing to a deeper understanding of the social dynamics involved. In addition, approaches submitted to all tasks will be evaluated to analyze their capacity to detect sexism in multimodal sources.

A condensed schema of all the tasks included in this year's lab is presented in the following table:

For a more detailed description of each subtask, as well as some examples, check the next sections.

Subtask 1.1: Sexism Identification in Tweets

The first subtask is a binary classification task. Systems have to decide whether or not a given tweet contains sexist expressions or behaviours (i.e., it is sexist itself, describes a sexist situation or criticizes a sexist behaviour), and classify it according to two categories: YES and NO.

  • YES:
    • “Mujer al volante, tenga cuidado!”
    • “People really try to convince women with little to no ass that they should go out and buy a body. Like bih, I don’t need a fat ass to get a man. Never have.”
  • NO:
    • “Alguien me explica que zorra hace la gente en el cajero que se demora tanto.”
    • "@messyworldorder it’s honestly so embarrassing to watch and they’ll be like “not all white women are like that”"

Subtask 1.2: Source Intention in Tweets

Once a message has been classified as sexist, the second subtask aims to categorize the message according to the intention of the author, which provides insights into the role played by social networks in the emission and dissemination of sexist messages. In this subtask, we propose a ternary classification task:

  • DIRECT: the intention was to write a message that is sexist by itself or incites sexism, as in:

    • A woman needs love, to fill the fridge, if a man can give this to her in return for her services (housework, cooking, etc), I don’t see what else she needs.
    • Women shouldn’t code…perhaps be influencer/creator instead…it’s their natural strength.
  • REPORTED: the intention is to report and share a sexist situation suffered by a woman or women in first or third person, as in:

    • I doze in the subway, I open my eyes feeling something weird: the hand of the man sat next to me on my leg #SquealOnYourPig.
    • Today, one of my year 1 class pupils could not believe he’d lost a race against a girl.
  • JUDGEMENTAL: the intention was to judge, since the tweet describes sexist situations or behaviours with the aim of condemning them.

    • As usual, the woman was the one quitting her job for the family’s welfare…
    • 21st century and we are still earning 25% less than men #Idonotrenounce.

Subtask 1.3: Sexism Categorization in Tweets

Many facets of a woman’s life may be the focus of sexist attitudes, including domestic and parenting roles, career opportunities, sexual image, and life expectations, to name a few. Automatically detecting which of these facets of women are more frequently attacked in social networks will facilitate the development of policies to fight against sexism. Accordingly, each sexist tweet must be categorized into one or more of the following categories:

  • IDEOLOGICAL AND INEQUALITY: The text discredits the feminist movement, rejects inequality between men and women, or presents men as victims of gender-based oppression.

    • “Mi hermana y mi madre se burlan de mí por defender todo el tiempo los derechos de todos y me acaban de decir feminazi, la completaron”.
    • “I think the whole equality thing is getting out of hand. We are different, thats how were made!”.
  • STEREOTYPING AND DOMINANCE: The text expresses false ideas about women that suggest they are more suitable for certain roles (mother, wife, family caregiver, faithful, tender, loving, submissive, etc.), or inappropriate for certain tasks (driving, hard work, etc.), or claims that men are somehow superior to women.

    • “@Paula2R @faber_acuria A las mujeres hay que amarlas…solo eso… Nunca las entenderás.”.
    • “Most women no longer have the desire or the knowledge to develop a high quality character, even if they wanted to.”.
  • OBJECTIFICATION: The text presents women as objects apart from their dignity and personal aspects, or assumes or describes certain physical qualities that women must have in order to fulfill traditional gender roles (compliance with beauty standards, hypersexualization of female attributes, women’s bodies at the disposal of men, etc.).

    • ““Pareces una puta con ese pantalón” - Mi hermano de 13 cuando me vio con un pantalón de cuero”.
    • “Don’t get married than blame all woman for your poor investment. You should of got a hooker but instead you choose to go get a wedding ring.”.
  • SEXUAL VIOLENCE: Sexual suggestions, requests for sexual favors or harassment of a sexual nature (rape or sexual assault) are made.

    • “#MeToo Estas 4 no han conseguido su objetivo. El juez estima que se abrieron de patas https://t.co/GSHiiwqY6A ¡Cuánta lagartona hay en este \"metoo\"! 👇🏻👇🏻🤔🤔🤔 https://t.co/8t5VmFIUFn”
    • “fuck that cunt, I would with my fist”.
  • MISOGYNY AND NON-SEXUAL VIOLENCE: The text expresses hatred and violence towards women.

    • “Las mujeres de hoy en dia te enseñar a querer… estar soltero”
    • “Some woman are so toxic they don’t even know they are draining everyone around them in poison. If you lack self awareness you won’t even notice how toxic you really are”.
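Since a single tweet can attack several of these facets at once, subtask 1.3 is a multi-label problem: each sexist tweet maps to a subset of the five categories. A minimal sketch of the usual multi-hot encoding such a setup implies (the category strings are shorthand for this illustration, not necessarily the official dataset label names):

```python
CATEGORIES = [
    "IDEOLOGICAL AND INEQUALITY",
    "STEREOTYPING AND DOMINANCE",
    "OBJECTIFICATION",
    "SEXUAL VIOLENCE",
    "MISOGYNY AND NON-SEXUAL VIOLENCE",
]

def encode(assigned):
    """Multi-hot indicator vector: 1 if the tweet was tagged with the category."""
    return [1 if c in assigned else 0 for c in CATEGORIES]

# A tweet tagged with two categories at once:
vector = encode({"OBJECTIFICATION", "SEXUAL VIOLENCE"})  # [0, 0, 1, 1, 0]
```

Unlike the binary and ternary subtasks above, systems here output a vector per tweet rather than a single class.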

Subtask 2.1: Sexism Identification in Memes

This is a binary classification subtask consisting in determining whether a meme is sexist (i.e., it is sexist itself, describes a sexist situation or criticizes a sexist behaviour), and classifying it according to two categories: YES and NO. The following figures show some examples of both types of memes, respectively.

(a) YES (sexist meme)
(b) NO (non-sexist meme)

Subtask 2.2: Source Intention in Memes

As in subtask 1.2, this subtask aims to categorize the meme according to the intention of the author, which provides insights into the role played by social networks in the emission and dissemination of sexist messages. Due to the characteristics of memes, the REPORTED label is virtually absent, so in this task systems should only classify memes with the DIRECT or JUDGEMENTAL labels. The following figures show some examples of each, respectively.

(a) Direct
(b) Judgemental

Subtask 2.3: Sexism Categorization in Memes

This task aims to classify sexist memes according to the categorization provided for subtask 1.3: (i) IDEOLOGICAL AND INEQUALITY, (ii) STEREOTYPING AND DOMINANCE, (iii) OBJECTIFICATION, (iv) SEXUAL VIOLENCE and (v) MISOGYNY AND NON-SEXUAL VIOLENCE. The following figures show some examples of categorized memes.

(a) Stereotyping

(b) Sexual violence

(c) Objectification

(d) Misogyny

(e) Ideological

Subtask 3.1: Sexism Identification in Videos

This subtask is the same as subtasks 1.1 and 2.1, but applied to videos. The following are some examples of videos classified as YES or NO.

@cayleecresta #stitch with @goodbrobadbro easy should never be the word used to describe womanhood #fyp #foryou #foryoupage #womenempowerment #women #feminism ♬ original sound - Caylee Cresta
(a) YES
@dailyhealth2 #haha #kidnapped #bigredswifesarmy #oregon #victimcard #victimblaming #bodyguard #loved #smile #lagrandeoregon ♬ original sound - รⒶ︎я︎Ⓐ︎𝔥 ģⒶ︎เ︎ᒪ︎🫦
(b) NO

Subtask 3.2: Source Intention in Videos

This subtask replicates subtask 2.2 for memes, but takes videos as its source. The following examples show some videos representing each category.

@yourgirlhaylie #duet with @michaelkoz #sexist #foryou #FitCheck #throwhimaway ♬ original sound - Mike Koz
(a) Direct
@zantyoo #womenpower #humiliation #power #womencant #womencantoo #womencan ♬ original sound - Amizan Words
(b) Judgemental

Subtask 3.3: Sexism Categorization in Videos

This subtask aims to classify sexist videos according to the categorization provided for Task 1.3: (i) IDEOLOGICAL AND INEQUALITY, (ii) STEREOTYPING AND DOMINANCE, (iii) OBJECTIFICATION, (iv) SEXUAL VIOLENCE and (v) MISOGYNY AND NON-SEXUAL VIOLENCE. The following are some examples of categorized videos.

@streaminfreedom I’m an idiot! @streaminfreedom #truestory #menvswomen #relationshipcomedy ♬ original sound - leanne_lou
(a) Stereotyping
@itslindobaby I’m getting so use to this now 😒 can people just like me for my music? #golddigger #rapper #hiphop #golddiggerprank ♬ original sound - Lindo
(b) Ideological
@zo3tv #duet with @lenatheplug #noJumper #dunked #in #theRight #goal #she #is #beautiful & #babygirl #isTo #swimsuit #never #gotTight #bodySnatched #congrats ♬ Aesthetic Girl - Yusei
(c) Objectification
@alt_acc393 IT'S A JOKEEEEE. #fyp #foryoupage #foryou ♬ original sound - alt acc
(d) Misogyny
@caitlinnrowe_ proud of adelaide today 🤍#justicforwomen #saraheverard #notallmen #fyp #protest #adelaide #southaustralia #australia #foryoupage ♬ THISISNOTMYREMIX - Thewizardliz
(e) Sexual violence

How to participate

To be announced!

Important dates

  • 18 November 2024: Registration open.
  • 3 February 2025: Training and development sets available.
  • 7 April 2025: Test set available.
  • 25 April 2025: Registration closes.
  • 18 May 2025: Runs submission due to organizers.
  • 8 June 2025: Results notification to participants.
  • 15 June 2025: Submission of Working Notes by participants.
  • 29 June 2025: Notification of acceptance (peer reviews).
  • 7 July 2025: Camera-ready participant papers due to organizers.
  • 9-12 September 2025: EXIST 2025 at CLEF Conference.

Note: All deadlines are 11:59PM UTC-12:00 (“anywhere on Earth”).

Organizers


Damiano Spina

RMIT University

Senior Lecturer


Enrique Amigó

UNED

Associate Professor


Iván Arcos

Universitat Politècnica de València

Researcher in Computational Linguistics


Jorge Carrillo-de-Albornoz

UNED

RMIT University

Associate Professor


Julio Gonzalo

UNED

Full Professor


Laura Plaza

UNED

RMIT University

Associate Professor


Paolo Rosso

Universitat Politècnica de València

Full Professor


Roser Morante

UNED

Researcher in Computational Linguistics

Sponsors


ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) (CE200100005)

RMIT University


FairTransNLP Project

(PID2021-124361OB-C31 and PID2021-124361OB-C32)

Spanish Ministry of Science and Innovation


Pattern Recognition and Human Language Technologies (PRHLT) Research Center

Universitat Politècnica de València

Contact

For any question concerning the shared task, please write to Jorge Carrillo-de-Albornoz.