Approach
References
1. Campos, J. A., Otegi, A., Soroa, A., Deriu, J., Cieliebak, M., and Agirre, E. (2020). DoQA - Accessing Domain-Specific FAQs via Conversational QA. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL).
2. Chen, D., Fisch, A., Weston, J., and Bordes, A. (2017). Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1870–1879.
3. Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F. and Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
4. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), pages 4171–4186.
5. Fader, A., Zettlemoyer, L., and Etzioni, O. (2013). Paraphrase-driven learning for open question answering. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), pages 1608–1618.
6. Jian, F., Huang, J. X., Zhao, J., He, T., and Hu, P. (2016). A simple enhancement for ad-hoc information retrieval via topic modelling. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval.
7. Kratzwald, B., Eigenmann, A., and Feuerriegel, S. (2019). RankQA: Neural question answering with answer re-ranking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 6076–6085.
8. Lample, G. and Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
9. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. (2020). ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In International Conference on Learning Representations (ICLR).
10. Lewis, P., Oğuz, B., Rinott, R., Riedel, S., and Schwenk, H. (2020). MLQA: Evaluating Cross-lingual Extractive Question Answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL).
11. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
12. Liu, J., Lin, Y., Liu, Z., and Sun, M. (2019). XQA: A Cross-lingual Open-domain Question Answering Dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL).
13. Nogueira, R. and Cho, K. (2019). Passage Re-ranking with BERT. arXiv preprint arXiv:1901.04085.
14. Otegi, A., Agirre, A., Campos, J. A., Soroa, A., and Agirre, E. (2020). Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque.
15. Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep contextualized word representations. In NAACL.
16. Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. (2016). SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.
17. Rajpurkar, P., Jia, R., and Liang, P. (2018). Know What You Don’t Know: Unanswerable Questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 784–789.
18. Rogers, A., Kovaleva, O., Downey, M., and Rumshisky, A. (2020). Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks. Proceedings of AAAI.
19. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., and Le, Q. V. (2019). XLNet: Generalized Autoregressive Pre-training for Language Understanding. In 33rd Conference on Neural Information Processing Systems (NeurIPS).
20. Wang, S., Yu, M., Guo, X., Wang, Z., Klinger, T., Zhang, W., and Jiang, J. (2018). R3: Reinforced Ranker-Reader for Open-Domain Question Answering. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI).