Synthesizing Natural Language to Visualization (NL2VIS) Benchmarks from NL2SQL Benchmarks

2021 
Natural language (NL) is a promising interaction paradigm for data visualization (VIS). However, no NL to VIS (NL2VIS) benchmarks are available. Our goal is to provide the first NL2VIS benchmark to enable and advance the field of NL2VIS, especially with deep learning technologies. In this paper, we propose an NL2VIS synthesizer (NL2SQL-to-NL2VIS) that synthesizes NL2VIS benchmarks by piggybacking on NL2SQL benchmarks. The intuition rests on the semantic connection between SQL queries and VIS queries: SQL queries specify what data is needed, while VIS queries additionally specify how that data should be visualized. However, unlike SQL, which has a well-defined syntax, VIS languages (e.g., Vega-Lite, VizQL, ggplot2) differ widely in syntax. To provide NL2VIS benchmarks that can support many VIS languages, we use a unified intermediate representation, abstract syntax trees (ASTs), for both SQL and VIS queries. We synthesize multiple VIS trees by adding nodes to, or deleting nodes from, an SQL tree. Each VIS tree can then be converted to (any) VIS language. The NL for each VIS query is derived from the NL for the corresponding SQL query, modified to reflect the tree edits. We produce the first NL2VIS benchmark, nvBench, by applying NL2SQL-to-NL2VIS to the popular NL2SQL benchmark Spider; nvBench covers 105 domains, supports seven common types of visualizations, and contains 25,750 (NL, VIS) pairs. Our method requires only 5.7% of the man-hours needed to develop an NL2VIS benchmark from scratch (equivalently, building an NL2VIS benchmark from scratch takes 17.5× the man-hours of our method). Extensive human validation, by 23 experts and 312 crowd workers, demonstrates the high quality of nvBench. To verify that nvBench can enable learning-based approaches, we develop a SEQ2VIS model. Experimental results show that SEQ2VIS performs well and significantly outperforms state-of-the-art methods on the NL2VIS task.
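To make the AST-editing idea concrete, here is a minimal, hypothetical Python sketch, not the authors' actual implementation: a shared tree type represents both SQL and VIS queries, a VIS tree is synthesized by grafting visualization nodes onto the reused SQL data-selection subtree, and the language-agnostic VIS tree is then lowered to one concrete VIS language (a simplified Vega-Lite spec). All names (Node, sql_to_vis_tree, to_vega_lite) are illustrative assumptions.

```python
# Sketch of the SQL-AST-to-VIS-tree synthesis described in the abstract.
# Hypothetical illustration only; the real nvBench pipeline differs.
from dataclasses import dataclass, field


@dataclass
class Node:
    """A generic AST node shared by SQL trees and VIS trees."""
    label: str
    children: list["Node"] = field(default_factory=list)


def sql_to_vis_tree(sql_tree: Node, chart_type: str) -> Node:
    """Synthesize a VIS tree from a SQL tree: reuse the SQL subtree
    (what data is needed) and add nodes specifying how to visualize it."""
    return Node("VIS", [
        Node("MARK", [Node(chart_type)]),  # added visualization node
        sql_tree,                          # reused SQL data-selection subtree
    ])


def to_vega_lite(vis_tree: Node, x: str, y: str) -> dict:
    """Convert the language-agnostic VIS tree into one concrete VIS
    language, here a (simplified) Vega-Lite spec."""
    mark = vis_tree.children[0].children[0].label
    return {
        "mark": mark,
        "encoding": {
            "x": {"field": x, "type": "nominal"},
            "y": {"field": y, "type": "quantitative"},
        },
    }


# Example: the SQL tree for `SELECT name, price FROM cars` becomes a
# bar-chart VIS tree, then a Vega-Lite spec.
sql = Node("SELECT", [Node("name"), Node("price"), Node("FROM", [Node("cars")])])
print(to_vega_lite(sql_to_vis_tree(sql, "bar"), x="name", y="price"))
```

The same VIS tree could instead be lowered to ggplot2 or VizQL by swapping the final conversion step, which is why the intermediate AST representation lets one benchmark support many VIS languages.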