Evaluating Learning Language Representations

2015 
Machine learning offers significant benefits for systems that process and understand natural language: (a) lower maintenance and upkeep costs than when using manually constructed resources, (b) easier portability to new domains, tasks, or languages, and (c) robust and timely adaptation to situation-specific settings. However, the behaviour of an adaptive system is less predictable than when using an edited, stable resource, which makes quality control a continuous issue. This paper proposes an evaluation benchmark for measuring the quality, coverage, and stability of a natural language system as it learns word meaning. Inspired by existing tests for human vocabulary learning, we outline measures for the quality of semantic word representations, such as when learning word embeddings or other distributed representations. These measures highlight differences between the types of underlying learning processes as systems ingest progressively more data.
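The abstract does not specify the concrete measures, but a vocabulary-test-style probe for word embeddings can be sketched as a multiple-choice synonym test scored by cosine similarity, with coverage tracked separately from accuracy. The embedding values, word list, and test items below are illustrative placeholders, not the paper's benchmark data.

```python
# Minimal sketch of a vocabulary-test-style probe for word embeddings.
# All vectors and test items are hypothetical stand-ins for learned embeddings
# and a real synonym-test item set.
import numpy as np

embeddings = {
    "quick": np.array([0.9, 0.1, 0.0]),
    "fast":  np.array([0.8, 0.2, 0.1]),
    "slow":  np.array([-0.7, 0.3, 0.2]),
    "red":   np.array([0.1, 0.9, 0.4]),
    "rapid": np.array([0.85, 0.15, 0.05]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def synonym_test(prompt, choices, table):
    """Multiple-choice item: pick the choice whose vector is closest to the prompt.
    Returns None if the prompt is out of vocabulary, so coverage can be
    measured separately from accuracy."""
    if prompt not in table:
        return None
    scored = [(cosine(table[prompt], table[c]), c) for c in choices if c in table]
    return max(scored)[1] if scored else None

# Hypothetical items: (prompt, candidate answers, gold answer).
items = [
    ("quick", ["fast", "slow", "red"], "fast"),
    ("rapid", ["red", "fast", "slow"], "fast"),
]

answered = [(synonym_test(p, cs, embeddings), gold) for p, cs, gold in items]
covered = [(pred, gold) for pred, gold in answered if pred is not None]
coverage = len(covered) / len(items)
accuracy = sum(pred == gold for pred, gold in covered) / max(len(covered), 1)
print(f"coverage={coverage:.2f} accuracy={accuracy:.2f}")
```

Running such a probe at successive points during training gives a simple trace of quality (accuracy), coverage (fraction of items the vocabulary can answer), and stability (how answers change as more data is ingested).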