Human Evaluation of Interpretability: The Case of AI-Generated Music Knowledge. (arXiv:2004.06894v1 [cs.HC])

[Submitted on 15 Apr 2020]

Abstract: Interpretability of machine learning models has gained increasing attention among researchers in the artificial intelligence (AI) and human-computer interaction (HCI) communities. Most existing work focuses on decision making, whereas we consider knowledge discovery. In particular, we focus on evaluating AI-discovered knowledge/rules in the arts and humanities. Starting from a specific scenario, we present an experimental procedure for collecting and assessing human-generated verbal interpretations of AI-generated music theory/rules rendered as sophisticated symbolic/numeric objects. Our goal is to reveal both the possibilities and the challenges in such a process of decoding expressive messages from AI sources. We treat this as a first step towards 1) better design of AI representations that are human interpretable and 2) a general methodology for evaluating the interpretability of AI-discovered knowledge representations.

Submission history

From: Haizi Yu
[v1]
Wed, 15 Apr 2020 06:03:34 UTC (859 KB)

Source: http://arxiv.org/abs/2004.06894
