This paper describes the baselines proposed for the ResPubliQA 2009 task. The main aim in designing these baselines was to test the performance of a pure Information Retrieval approach on this task. Two baselines were run for each of the eight languages of the task, both using the Okapi BM25 ranking function, with and without stemming. In this paper we extend the previous baselines by comparing the BM25 model with the Vector Space Model (VSM) on this task. The results show that BM25 outperforms VSM in all cases.
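For reference, the standard Okapi BM25 ranking function can be sketched as follows; this is a minimal illustrative implementation (not the task's actual retrieval system), with the common free parameters k1 and b and a smoothed IDF, all assumptions rather than the authors' exact configuration:

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Minimal Okapi BM25 score of one document against a query.

    query_terms: list of query term strings
    doc_terms:   list of term strings for the document being scored
    corpus:      list of documents (each a list of terms), used to
                 compute document frequencies and the average length
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N  # average document length
    score = 0.0
    for term in query_terms:
        # document frequency of the term across the corpus
        df = sum(1 for d in corpus if term in d)
        # smoothed inverse document frequency (kept non-negative)
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        tf = doc_terms.count(term)  # term frequency in this document
        # BM25 term-frequency saturation with length normalization
        denom = tf + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * (tf * (k1 + 1)) / denom
    return score
```

A document containing none of the query terms scores zero, while matching documents receive a positive score that saturates with term frequency and is normalized by document length, which is the behavior that distinguishes BM25 from a plain VSM cosine ranking.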