OneFi: One-Shot Recognition for Unseen Gesture via COTS WiFi

2021 
WiFi-based Human Gesture Recognition (HGR) has become increasingly promising for device-free human-computer interaction. However, existing WiFi-based approaches are not yet ready for real-world deployment due to their limited scalability, especially to unseen gestures. The reason is that when unseen gestures are introduced, prior works have to collect a large number of samples and re-train the model. While recent advances in few-shot learning bring new opportunities to address this problem, the overhead has not been effectively reduced: these methods still require enormous data to learn adequate prior knowledge, and their complicated training process further intensifies the regular training cost. In this paper, we propose a WiFi-based HGR system, namely OneFi, which can recognize unseen gestures with only one (or a few) labeled samples. OneFi fundamentally addresses the challenge of high overhead. On the one hand, OneFi utilizes a virtual gesture generation mechanism so that the massive data collection effort of prior works is significantly alleviated. On the other hand, OneFi employs a lightweight one-shot learning framework based on transductive fine-tuning to eliminate model re-training. We further design a self-attention based backbone, termed WiFi Transformer, to minimize the training cost of the proposed framework. We establish a real-world testbed using commodity WiFi devices and perform extensive experiments on it. The evaluation results show that OneFi recognizes unseen gestures with accuracy of 84.2%, 94.2%, 95.8%, and 98.8% when 1, 3, 5, and 7 labeled samples are available, respectively, while the overall training process takes less than two minutes.
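To make the one-shot idea concrete, the following is a minimal sketch (not the authors' code) of transductive fine-tuning for recognizing unseen gestures: a pre-trained backbone, here a stand-in self-attention encoder over CSI-derived feature sequences, embeds both the single labeled support sample per unseen gesture and the unlabeled query samples, and a linear classifier is fine-tuned with cross-entropy on the support set plus an entropy penalty on query predictions (the transductive term). All module names, tensor shapes, and hyperparameters below are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Backbone(nn.Module):
    """Stand-in self-attention backbone: encodes a (T, F) feature sequence."""
    def __init__(self, feat_dim=64, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                 # x: (batch, T, feat_dim)
        h = self.encoder(self.proj(x))    # (batch, T, d_model)
        return h.mean(dim=1)              # temporal average pooling -> (batch, d_model)

def transductive_finetune(backbone, support_x, support_y, query_x,
                          n_classes, steps=50, lr=1e-3, ent_weight=0.1):
    """Fine-tune a linear head on one labeled sample per unseen gesture (support)
    while penalizing the entropy of predictions on unlabeled query samples."""
    head = nn.Linear(128, n_classes)
    opt = torch.optim.Adam(list(head.parameters()) + list(backbone.parameters()), lr=lr)
    for _ in range(steps):
        s_logits = head(backbone(support_x))
        q_logits = head(backbone(query_x))
        ce = F.cross_entropy(s_logits, support_y)          # supervised loss on support set
        q_prob = F.softmax(q_logits, dim=1)
        ent = -(q_prob * torch.log(q_prob + 1e-8)).sum(dim=1).mean()  # query entropy
        loss = ce + ent_weight * ent
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head(backbone(query_x)).argmax(dim=1)           # predicted labels for queries

# Example usage with synthetic data: 4 unseen gestures, one support sample each.
backbone = Backbone()
support_x = torch.randn(4, 32, 64)        # 4 gestures x (32 time steps, 64 features)
support_y = torch.arange(4)
query_x = torch.randn(20, 32, 64)         # unlabeled test samples
preds = transductive_finetune(backbone, support_x, support_y, query_x, n_classes=4)

Because only a small head (and optionally the backbone) is fine-tuned for a few dozen steps, this style of adaptation avoids full model re-training, which is consistent with the abstract's claim that the overall training process takes under two minutes.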