
K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations. (arXiv:2005.04120v1 [cs.HC])

[Submitted on 8 May 2020]


Abstract: With the popularization of low-cost mobile sensors, recognizing
emotions during social interactions has many potential applications, but a
key challenge remains: the lack of naturalistic affective interaction data.
Most existing emotion datasets were collected in constrained environments and
thus do not support studying the idiosyncratic emotions that arise in the wild.
Therefore, studying emotions in the context of social interactions requires a
novel dataset, and K-EmoCon is such a multimodal dataset with comprehensive
annotations of continuous emotions during naturalistic conversations. The
dataset contains multimodal measurements, including audiovisual recordings,
EEG, and peripheral physiological signals, acquired with off-the-shelf devices
from 16 sessions of approximately 10-minute-long paired debates on a social
issue. Distinct from previous datasets, it includes emotion annotations from
all three available perspectives: self, debate partner, and external observers.
Raters annotated emotional displays at 5-second intervals while viewing the
debate footage, in terms of arousal-valence and 18 additional categorical
emotions. The resulting K-EmoCon is the first publicly available
emotion dataset accommodating the multiperspective assessment of emotions
during social interactions.
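
As a rough illustration of the annotation scheme described in the abstract, the sketch below models one 5-second annotation window as a plain Python record. This is a minimal sketch, not the dataset's actual file format: the class name, field names, and the assumed 1-5 rating scale are hypothetical, while the 5-second windows, the arousal-valence dimensions, the 18 categorical emotions, and the three rater perspectives come from the abstract itself.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AnnotationWindow:
    """One 5-second annotation window (hypothetical layout)."""
    session_id: int          # one of the 16 debate sessions
    start_sec: int           # window start time; windows are 5 s apart
    perspective: str         # "self", "partner", or "external"
    arousal: int             # assumed ordinal rating, e.g. 1-5
    valence: int             # assumed ordinal rating, e.g. 1-5
    categorical: Dict[str, int] = field(default_factory=dict)  # 18 emotions

def window_starts(duration_sec: int, step: int = 5) -> List[int]:
    """Start times of the 5-second annotation windows in one session."""
    return list(range(0, duration_sec, step))

# An approximately 10-minute (600 s) debate yields ~120 windows per rater:
print(len(window_starts(600)))  # -> 120
```

With three rater perspectives per session (self, debate partner, and external observers), each debate thus carries three parallel annotation streams of roughly 120 windows each.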

Submission history

From: Cheul Young Park
[v1]
Fri, 8 May 2020 15:51:12 UTC (5,220 KB)

Source: http://arxiv.org/abs/2005.04120
