Time-segmentation- and Position-free Recognition from Video of Air-drawn Gestures and Characters
2014
We report on the recognition, from video, of isolated alphabetic characters and of connected cursive characters, such as Hiragana or Kanji, drawn in the air.
This topic involves a number of difficult computer-vision problems, such as the segmentation and recognition of complex motion from video. We utilize an algorithm called time-space continuous dynamic programming (TSCDP), which realizes both time- and location-free (spotting) recognition.
Spotting means that prior segmentation of the input video is not required.
Each of the reference (model) characters used is represented by a single stroke composed of pixels.
We conducted two experiments on the recognition of 26 isolated alphabetic characters and 23 air-drawn Japanese Hiragana and Kanji characters. Moreover, we conducted gesture recognition based on TSCDP and showed that it is free from many of the restrictions imposed by conventional methods.
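The spotting idea can be illustrated in one dimension with subsequence dynamic time warping, a simplified stand-in for the paper's two-dimensional time-space DP (the signals and function names below are illustrative, not the authors' implementation). Initializing the first reference row at every stream position lets a match begin anywhere, so the input never needs prior segmentation:

```python
import numpy as np

def subsequence_dtw(ref, stream):
    """Match `ref` anywhere inside `stream` (spotting).

    Initializing row 0 with the raw local distances (instead of
    accumulating from a fixed origin) allows the match to start at
    any stream frame, so no prior segmentation is required.
    """
    R, S = len(ref), len(stream)
    d = np.abs(np.subtract.outer(ref, stream))  # local distances
    D = np.full((R, S), np.inf)                 # accumulated costs
    D[0, :] = d[0, :]                           # start anywhere
    for i in range(1, R):
        D[i, 0] = D[i - 1, 0] + d[i, 0]
        for j in range(1, S):
            D[i, j] = d[i, j] + min(D[i - 1, j],      # insertion
                                    D[i, j - 1],      # deletion
                                    D[i - 1, j - 1])  # match
    return D[-1, :]  # cost of a full match of `ref` ending at each frame

# Toy stream that embeds the reference pattern amid unrelated frames.
ref = np.array([0., 2., 4., 2., 0.])
stream = np.array([5., 5., 0., 2., 4., 2., 0., 5., 5.])
costs = subsequence_dtw(ref, stream)
end = int(np.argmin(costs))  # frame where the best match ends
```

A local minimum of `costs` below a threshold signals a spotted occurrence; TSCDP extends this recurrence over two spatial dimensions as well as time, which is what makes the recognition position-free.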