Gesture Annotation With a Visual Search Engine for Multimodal Communication Research

This paper describes a machine learning system that automatically annotates a large database of television program videos, developed as part of the Distributed Little Red Hen Lab project, and demonstrates its effectiveness in aiding gesture scholars in their work.

Published: 27 April 2018
Citations: 9
By Sergiy Turchyn, I. Moreno, and others