
Multilingual Information Access Evaluation I - Text Retrieval Experiments [electronic resource] : 10th Workshop of the Cross-Language Evaluation Forum, CLEF 2009, Corfu, Greece, September 30 - October 2, 2009, Revised Selected Papers, Part I / edited by Carol Peters, Giorgio Maria Di Nunzio, Mikko Kurimo, Thomas Mandl, Djamel Mostefa, Anselmo Penas, Giovanna Roda.

Contributor(s): Peters, Carol [editor.] | Di Nunzio, Giorgio Maria [editor.] | Kurimo, Mikko [editor.] | Mandl, Thomas [editor.] | Mostefa, Djamel [editor.] | Penas, Anselmo [editor.] | Roda, Giovanna [editor.] | SpringerLink (Online service).
Material type: Book
Series: Information Systems and Applications, incl. Internet/Web, and HCI: 6241
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg : Imprint: Springer, 2010
Edition: 1st ed. 2010
Description: XXV, 416 p. 110 illus. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783642157547
Subject(s): Natural language processing (Computer science) | Application software | Information storage and retrieval systems | Database management | Data mining | Pattern recognition systems | Natural Language Processing (NLP) | Computer and Information Systems Applications | Information Storage and Retrieval | Database Management | Data Mining and Knowledge Discovery | Automated Pattern Recognition
Additional physical formats: Printed edition
DDC classification: 006.35
Online resources: Available online
Contents:
What Happened in CLEF 2009 -- What Happened in CLEF 2009 -- I: Multilingual Textual Document Retrieval (AdHoc) -- CLEF 2009 Ad Hoc Track Overview: TEL and Persian Tasks -- CLEF 2009 Ad Hoc Track Overview: Robust-WSD Task -- Evaluating Cross-Language Explicit Semantic Analysis and Cross Querying -- Document Expansion, Query Translation and Language Modeling for Ad-Hoc IR -- Smoothing Methods and Cross-Language Document Re-ranking -- Cross-Language Information Retrieval Using Meta-language Index Construction and Structural Queries -- Sampling Precision to Depth 10000 at CLEF 2009 -- Multilingual Query Expansion for CLEF Adhoc-TEL -- Experiments with N-Gram Prefixes on a Multinomial Language Model versus Lucene's Off-the-Shelf Ranking Scheme and Rocchio Query Expansion (TEL@CLEF Monolingual Task) -- Evaluation of Perstem: A Simple and Efficient Stemming Algorithm for Persian -- Ad Hoc Retrieval with the Persian Language -- Ad Hoc Information Retrieval for Persian -- Combining Probabilistic and Translation-Based Models for Information Retrieval Based on Word Sense Annotations -- Indexing with WordNet Synonyms May Improve Retrieval Results -- UFRGS@CLEF2009: Retrieval by Numbers -- Evaluation of Axiomatic Approaches to Crosslanguage Retrieval -- UNIBA-SENSE @ CLEF 2009: Robust WSD Task -- Using WordNet Relations and Semantic Classes in Information Retrieval Tasks -- Using Semantic Relatedness and Word Sense Disambiguation for (CL)IR -- II: Multiple Language Question Answering (QA@CLEF) -- Overview of ResPubliQA 2009: Question Answering Evaluation over European Legislation -- Overview of QAST 2009 -- GikiCLEF: Expectations and Lessons Learned -- NLEL-MAAT at ResPubliQA -- Question Answering on English and Romanian Languages -- Studying Syntactic Analysis in a QA System: FIDJI @ ResPubliQA'09 -- Approaching Question Answering by Means of Paragraph Validation -- Information Retrieval Baselines for the ResPubliQA Task -- A Trainable Multi-factored QA System -- Extending a Logic-Based Question Answering System for Administrative Texts -- Elhuyar-IXA: Semantic Relatedness and Cross-Lingual Passage Retrieval -- Are Passages Enough? The MIRACLE Team Participation in QA@CLEF2009 -- The LIMSI Participation in the QAst 2009 Track: Experimenting on Answer Scoring -- Robust Question Answering for Speech Transcripts: UPC Experience in QAst 2009 -- Where in the Wikipedia Is That Answer? The XLDB at the GikiCLEF 2009 Task -- Recursive Question Decomposition for Answering Complex Geographic Questions -- GikiCLEF Topics and Wikipedia Articles: Did They Blend? -- TALP at GikiCLEF 2009 -- Semantic QA for Encyclopaedic Questions: EQUAL in GikiCLEF -- Interactive Probabilistic Search for GikiCLEF -- III: Multilingual Information Filtering (INFILE) -- Information Filtering Evaluation: Overview of CLEF 2009 INFILE Track -- Batch Document Filtering Using Nearest Neighbor Algorithm -- UAIC: Participation in INFILE@CLEF Task -- Multilingual Information Filtering by Human Plausible Reasoning -- Hossur'Tech's Participation in CLEF 2009 INFILE Interactive Filtering -- Experiments with Google News for Filtering Newswire Articles -- IV: Intellectual Property (CLEF-IP) -- CLEF-IP 2009: Retrieval Experiments in the Intellectual Property Domain -- Exploring Structured Documents and Query Formulation Techniques for Patent Retrieval -- Formulating Good Queries for Prior Art Search -- UAIC: Participation in CLEF-IP Track -- PATATRAS: Retrieval Model Combination and Regression Models for Prior Art Search -- NLEL-MAAT at CLEF-IP -- Simple Pre and Post Processing Strategies for Patent Searching in CLEF Intellectual Property Track 2009 -- Prior Art Search Using International Patent Classification Codes and All-Claims-Queries -- UTA and SICS at CLEF-IP'09 -- Searching CLEF-IP by Strategy -- UniNE at CLEF-IP 2009 -- Automatically Generating Queries for Prior Art Search -- Patent Retrieval Experiments in the Context of the CLEF IP Track 2009 -- Prior Art Retrieval Using the Claims Section as a Bag of Words -- UniGE Experiments on Prior Art Search in the Field of Patents -- V: Logfile Analysis (LogCLEF) -- LogCLEF 2009: The CLEF 2009 Multilingual Logfile Analysis Track Overview -- Identifying Common User Behaviour in Multilingual Search Logs -- A Search Engine Based on Query Logs, and Search Log Analysis by Automatic Language Identification -- Identifying Geographical Entities in Users' Queries -- Search Path Visualization and Session Performance Evaluation with Log Files -- User Logs as a Means to Enrich and Refine Translation Dictionaries -- VI: Grid Experiments (GRID@CLEF) -- CLEF 2009: Grid@CLEF Pilot Track Overview -- Decomposing Text Processing for Retrieval: Cheshire Tries GRID@CLEF -- Putting It All Together: The Xtrieval Framework at Grid@CLEF 2009 -- VII: Morphochallenge -- Overview and Results of Morpho Challenge 2009 -- MorphoNet: Exploring the Use of Community Structure for Unsupervised Morpheme Analysis -- Unsupervised Morpheme Analysis with Allomorfessor -- Unsupervised Morphological Analysis by Formal Analogy -- Unsupervised Word Decomposition with the Promodes Algorithm -- Unsupervised Morpheme Discovery with Ungrade -- Clustering Morphological Paradigms Using Syntactic Categories -- Simulating Morphological Analyzers with Stochastic Taggers for Confidence Estimation -- A Rule-Based Acquisition Model Adapted for Morphological Analysis -- Morphological Analysis by Multiple Sequence Alignment.
In: Springer Nature eBook
Summary: The tenth campaign of the Cross Language Evaluation Forum (CLEF) for European languages was held from January to September 2009. There were eight main evaluation tracks in CLEF 2009 plus a pilot task. The aim, as usual, was to test the performance of a wide range of multilingual information access (MLIA) systems or system components. This year, about 150 groups, mainly but not only from academia, registered to participate in the campaign. Most of the groups were from Europe but there was also a good contingent from North America and Asia. The results were presented at a two-and-a-half day workshop held in Corfu, Greece, September 30 to October 2, 2009, in conjunction with the European Conference on Digital Libraries. The workshop, attended by 160 researchers and system developers, provided the opportunity for all the groups that had participated in the evaluation campaign to get together, compare approaches and exchange ideas.

