Gesture Annotation With a Visual Search Engine for Multimodal Communication Research
This paper describes a machine learning system, developed as part of the Distributed Little Red Hen Lab project, that automatically annotates gestures in a large database of television program videos, and demonstrates its effectiveness at aiding gesture scholars in their work.