Nicola Ferro is an Associate Professor of Computer Science at the University of Padua, Italy. His research interests include information retrieval, its experimental evaluation, multilingual information access, and digital libraries. He is the coordinator of the CLEF evaluation initiative, which involves more than 200 research groups around the globe in large-scale IR evaluation activities. He was also the coordinator of PROMISE, the EU Seventh Framework Programme Network of Excellence on information retrieval evaluation.
Carol Peters, now a Research Associate, was a Researcher at the Italian National Research Council's "Istituto di Scienza e Tecnologie dell'Informazione." Her main research activities focused on the development of multilingual access mechanisms for digital libraries and on evaluation methodologies for cross-language information retrieval systems. She led the EU Sixth Framework MultiMatch project and coordinated the Cross-Language Evaluation Forum (CLEF) during its first ten years of activity. In 2009, in recognition of her work for CLEF, she received the Tony Kent Strix Award.
From Multilingual to Multimodal: The Evolution of CLEF over Two Decades
The Evolution of Cranfield
How to Run an Evaluation Task
An Innovative Approach to Data Management and Curation of Experimental Data Generated through IR Test Collections
TIRA Integrated Research Architecture
EaaS: Evaluation-as-a-Service and Experiences from the VISCERAL Project
Lessons Learnt from Experiments on the Ad-Hoc Multilingual Test Collections at CLEF
The Challenges of Language Variation in Information Access
Multi-lingual Retrieval of Pictures in ImageCLEF
Experiences from the ImageCLEF Medical Retrieval and Annotation Tasks
Automatic Image Annotation at ImageCLEF
Image Retrieval Evaluation in Specific Domains
'Bout Sound and Vision: CLEF beyond Text Retrieval Tasks
The Scholarly Impact and Strategic Intent of CLEF eHealth Labs from 2012-2017
Multilingual Patent Text Retrieval Evaluation: CLEF-IP
Biodiversity Information Retrieval through Large Scale Content-Based Identification: A Long-Term Evaluation
From XML Retrieval to Semantic Search and Beyond
Results and Lessons of the Question Answering Track at CLEF
Evolution of the PAN Lab on Digital Text Forensics
RepLab: An Evaluation Campaign for Online Monitoring Systems
Continuous Evaluation of Large-scale Information Access Systems: A Case for Living Labs
The Scholarly Impact of CLEF 2010-2017
Reproducibility and Validity in CLEF
Visual Analytics and IR Experimental Evaluation
Adopting Systematic Evaluation Benchmarks in Operational Settings