
Too many cooks: Coordinating multi-agent collaboration through inverse planning. (arXiv:2003.11778v1 [cs.AI])

(Submitted on 26 Mar 2020)

Abstract: Collaboration requires agents to coordinate their behavior on the fly,
sometimes cooperating to solve a single task together and other times dividing
it up into sub-tasks to work on in parallel. Underlying the human ability to
collaborate is theory-of-mind, the ability to infer the hidden mental states
that drive others to act. Here, we develop Bayesian Delegation, a decentralized
multi-agent learning mechanism with these abilities. Bayesian Delegation
enables agents to rapidly infer the hidden intentions of others by inverse
planning. These inferences enable agents to flexibly decide in the absence of
communication when to cooperate on the same sub-task and when to work on
different sub-tasks in parallel. We test this model in a suite of multi-agent
Markov decision processes inspired by cooking problems. To succeed, agents must
coordinate both their high-level plans (e.g., what sub-task they should work
on) and their low-level actions (e.g., avoiding collisions). Bayesian
Delegation bridges these two levels and rapidly aligns agents’ beliefs about
who should work on what without any communication. When agents cooperate on the
same sub-task, coordinated plans emerge that enable the group of agents to
achieve tasks no agent can complete on their own. Our model outperforms
lesioned agents without Bayesian Delegation or without the ability to cooperate
on the same sub-task.
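
To make the core idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of a Bayesian-Delegation-style belief update: each agent keeps a posterior over joint sub-task allocations and updates it by how well the other agents' observed actions fit a plan for each candidate allocation. The agent names, sub-task names, and the `plan_likelihood` helper are illustrative assumptions; in the paper the likelihood would come from a low-level planner rather than a toy lookup table.

```python
# Illustrative sketch of inverse-planning belief updates over sub-task
# allocations, in the spirit of Bayesian Delegation. All names and numbers
# below are hypothetical stand-ins, not the paper's actual code.

import itertools

SUBTASKS = ["chop_tomato", "chop_lettuce", "deliver_salad"]  # toy sub-tasks
AGENTS = ["agent_0", "agent_1"]


def plan_likelihood(agent, action, subtask):
    """Hypothetical stand-in: probability of `action` if `agent` were
    pursuing `subtask`. A real system would derive this from a low-level
    planner (e.g. a soft-max over action values)."""
    table = {
        ("agent_0", "move_left"): {"chop_tomato": 0.7, "chop_lettuce": 0.2, "deliver_salad": 0.1},
        ("agent_1", "move_right"): {"chop_tomato": 0.1, "chop_lettuce": 0.6, "deliver_salad": 0.3},
    }
    # Unknown (agent, action) pairs fall back to a uniform likelihood.
    return table.get((agent, action), {}).get(subtask, 1.0 / len(SUBTASKS))


def all_allocations():
    """Every joint assignment of agents to sub-tasks (agents may share one)."""
    return list(itertools.product(SUBTASKS, repeat=len(AGENTS)))


def update_beliefs(prior, observed_actions):
    """One Bayesian update over joint allocations given observed actions:
    posterior(alloc) ∝ prior(alloc) * Π_i P(action_i | agent_i pursues alloc_i)."""
    posterior = {}
    for alloc in all_allocations():
        likelihood = 1.0
        for agent, subtask in zip(AGENTS, alloc):
            likelihood *= plan_likelihood(agent, observed_actions[agent], subtask)
        posterior[alloc] = prior.get(alloc, 1.0 / len(all_allocations())) * likelihood
    total = sum(posterior.values())
    return {alloc: p / total for alloc, p in posterior.items()}


if __name__ == "__main__":
    uniform = {alloc: 1.0 / len(all_allocations()) for alloc in all_allocations()}
    beliefs = update_beliefs(uniform, {"agent_0": "move_left", "agent_1": "move_right"})
    best = max(beliefs, key=beliefs.get)
    print("Most likely allocation:", dict(zip(AGENTS, best)))
```

Run as a script, the sketch prints the most probable joint allocation (here, agent_0 on the tomato and agent_1 on the lettuce), illustrating how repeated updates of this kind could let agents settle on who works on what without any explicit communication.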

Submission history

From: Max Kleiman-Weiner
[v1]
Thu, 26 Mar 2020 07:43:13 UTC (1,709 KB)

Source: http://arxiv.org/abs/2003.11778
