iCLEF is the interactive track of CLEF (Cross-Language Evaluation Forum), an annual evaluation exercise for Multilingual Information Access systems. In iCLEF, cross-language search capabilities are studied from a user-inclusive perspective; the question is how best to assist users when searching for information written in unknown languages, rather than how well an algorithm can find information written in languages different from the query language.
In the past, iCLEF has addressed applications such as information retrieval and question answering. However, for 2006 the focus has turned to text-based image retrieval from Flickr.
This year we want to explore user behaviour in a collection where the need for cross-language search arises more naturally for average users. We have chosen Flickr, a large-scale, web-based image database built around a large social network of WWW users, with the potential to offer both challenging and realistic multilingual search tasks for interactive experiments.
We want to use the iCLEF track to explore alternative evaluation methodologies for interactive IR; for this reason, we have decided to fix the search tasks but keep the evaluation methodology open, so that each participant can contribute their own ideas about how to study interactive issues in CLIR. In this experimental setting we thus provide a realistic cross-language search scenario and example tasks, and explore evaluation methodologies for a better understanding of the search processes involved.
We welcome research teams with interests in Cross-Language IR, Human-Computer Interaction, Interactive Information Access, Image Retrieval, and Machine Translation. The organizers will foster synergies between interested parties; expertise in all of these fields is not required to participate.
Users will be allowed 20 minutes per task, for a total of around 100 minutes of searching time (including training), in order to avoid fatigue effects.
Besides these three core tasks, which all participants must undertake, we allow for any additional tasks and variants that participants may devise. For instance, participants may offer a web search facility using the Flickr API and study the interaction logs of real users posing real search needs.
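As a minimal sketch of what such a Flickr-based search facility could build on, the snippet below constructs a request URL for the public Flickr REST API's `flickr.photos.search` method. The `YOUR_API_KEY` placeholder and the `per_page` default are assumptions; a valid API key and network access are required to actually run a query.

```python
# Minimal sketch of text-based image search against the public Flickr
# REST API (flickr.photos.search). YOUR_API_KEY is a placeholder that
# must be replaced with a real key before issuing requests.
from urllib.parse import urlencode
from urllib.request import urlopen
import json

FLICKR_REST = "https://api.flickr.com/services/rest/"

def build_search_url(text, api_key="YOUR_API_KEY", per_page=20):
    """Build a flickr.photos.search request URL for a free-text query."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,
        "text": text,            # free-text match on titles, tags, descriptions
        "per_page": per_page,
        "format": "json",
        "nojsoncallback": 1,     # plain JSON instead of a JSONP wrapper
    }
    return FLICKR_REST + "?" + urlencode(params)

# To run a real search (valid key and network access required):
#   with urlopen(build_search_url("sagrada familia")) as resp:
#       photos = json.load(resp)["photos"]["photo"]

print(build_search_url("sagrada familia"))
```

Logging each generated URL alongside the user's original query is one simple way to tie system requests back to the interaction logs mentioned above.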
Please contact the iCLEF organizers (firstname.lastname@example.org) for the code and assistance on how to use it.
As the evaluation methodology is not fixed, there will be no centralized assessments for iCLEF this year. Participants are encouraged to log as many details as possible about every search session. A skeleton questionnaire will be provided to collect some of the evaluation metrics and to ensure a homogeneous description of all experiments and experimental designs.
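One lightweight way to log search sessions in detail is to append one timestamped JSON record per event. This is only an illustrative sketch: the field names (`session`, `event`, `text`, `rank`, etc.) are hypothetical, not a format prescribed by iCLEF.

```python
# Hypothetical sketch of per-session search logging as JSON lines.
# Field names are illustrative only, not an iCLEF-prescribed format.
import io
import json
import time

def log_event(logfile, session_id, event, **details):
    """Append one timestamped search event as a JSON line."""
    record = {"session": session_id, "time": time.time(), "event": event}
    record.update(details)
    logfile.write(json.dumps(record) + "\n")

# Example: record a query and a result click during one session
# (an in-memory buffer stands in for a real log file).
log = io.StringIO()
log_event(log, "u01-task2", "query", text="torre eiffel", lang="es")
log_event(log, "u01-task2", "click", photo_id="123456", rank=3)
print(log.getvalue())
```

One record per line keeps the log append-only and trivially parseable afterwards, which makes post-hoc analysis of query reformulations and clicks straightforward.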
Task finalization and paper for working notes: August 15
Workshop (Alicante): 20-22 September 2006