Combining Opinion Pooling and Evidential Updating for Multi-Agent Consensus

Chanelle Lee, Jonathan Lawry, Alan Winfield

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence

The evidence available to a multi-agent system can take at least two distinct forms. There can be direct evidence from the environment, resulting, for example, from sensor measurements or from running tests or experiments. In addition, agents gain evidence from the other individuals in the population with whom they interact. We therefore envisage an agent's beliefs as a probability distribution over a set of hypotheses of interest, updated either on the basis of direct evidence using Bayesian updating, or by taking into account the probabilities held by other agents using opinion pooling. This paper investigates the relationship between these two processes in a multi-agent setting. We consider a possible Bayesian interpretation of probability pooling and then explore properties of pooling operators governing the extent to which direct evidence is diluted, preserved or amplified by the pooling process. We then use simulation experiments to show that pooling operators can provide a mechanism by which a limited amount of direct evidence is efficiently propagated through a population of agents so that an appropriate consensus is reached. In particular, we explore the convergence properties of a parameterised family of operators with a range of evidence propagation strengths.
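To make the two update mechanisms concrete, the following is a minimal sketch, not the paper's exact operators: beliefs are probability distributions over a finite hypothesis set, direct evidence is incorporated by Bayes' rule, and evidence from another agent is incorporated by a weighted product (log-linear) pooling operator. The function names, the specific pooling form, and the weight parameter `w` are illustrative assumptions.

```python
# Illustrative sketch only; the paper's parameterised operator family
# is not reproduced here.

def normalise(p):
    """Rescale a list of non-negative values to sum to 1."""
    total = sum(p)
    return [x / total for x in p]

def bayes_update(belief, likelihood):
    """Condition a belief on direct evidence: multiply each hypothesis's
    probability by the likelihood of the observation under it, renormalise."""
    return normalise([b * l for b, l in zip(belief, likelihood)])

def product_pool(p, q, w=0.5):
    """Weighted product (log-linear) pooling of two agents' beliefs.
    w = 0.5 is the symmetric case; varying w is a stand-in for the
    paper's range of evidence-propagation strengths (assumed form)."""
    return normalise([a ** w * b ** (1 - w) for a, b in zip(p, q)])
```

For example, an agent with a uniform belief over two hypotheses that observes evidence with likelihoods `[0.8, 0.2]` moves to the belief `[0.8, 0.2]`, while symmetric pooling of two agents holding identical beliefs leaves those beliefs unchanged.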
Keywords:
Agent-based and Multi-agent Systems: Agent-Based Simulation and Emergence
Robotics: Multi-Robot Systems