Abstract:
The problem of optimally controlling a large, stochastic, dynamical system has challenged control system designers for years, particularly when communication resources between ports of the system are quite scarce. Optimal control methods are known to lead to analytically intractable control laws due to the possibility of signaling through the system. A new formulation of the distributed control problem is presented which avoids such behavior by restricting the scope of each decision agent's knowledge of the underlying system dynamics. Within this framework, techniques for solving the individual agents' problems can be developed. These techniques support coordination strategies, as discussed in a companion paper.
Published in: IEEE Transactions on Systems, Man, and Cybernetics (Volume: 11, Issue: 8, August 1981)