QMIX

class QMIX(agent_model, qmixer_model, double_q=True, gamma=0.99, lr=0.0005, clip_grad_norm=None)[source]

Bases: parl.core.paddle.algorithm.Algorithm

__init__(agent_model, qmixer_model, double_q=True, gamma=0.99, lr=0.0005, clip_grad_norm=None)[source]

QMIX algorithm

Parameters:
  • agent_model (parl.Model) – each agent’s local Q-network, used for decision making.
  • qmixer_model (parl.Model) – a mixing network that takes the local Q-values as input to construct the global Q-value.
  • double_q (bool) – whether to use Double-DQN-style target computation.
  • gamma (float) – discount factor for reward computation.
  • lr (float) – learning rate.
  • clip_grad_norm (None or float) – maximum global norm for gradient clipping; None disables clipping.
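
The mixer’s defining property is monotonicity: the global Q-value must be non-decreasing in every agent’s local Q-value, which QMIX enforces by generating the mixing weights from the state with a hypernetwork and taking their absolute value. The following is a minimal NumPy sketch of that idea only (illustrative; it is not the PARL `qmixer_model` implementation, and all names and sizes here are hypothetical):

```python
import numpy as np

# Illustrative sketch of QMIX's monotonic mixing network (not the PARL
# implementation). A hypernetwork maps the global state to mixer weights;
# abs() keeps the weights non-negative, so Q_tot is monotonically
# non-decreasing in every agent's local Q-value.
rng = np.random.default_rng(0)
N_AGENTS, STATE_SHAPE, EMBED = 3, 8, 32

# Hypernetwork parameters (random here; learned in practice).
HYPER_W1 = rng.normal(size=(STATE_SHAPE, N_AGENTS * EMBED))
HYPER_B1 = rng.normal(size=(STATE_SHAPE, EMBED))
HYPER_W2 = rng.normal(size=(STATE_SHAPE, EMBED))

def mix(agent_qs, state):
    """Combine per-agent Q-values of shape (n_agents,) into a scalar Q_tot."""
    w1 = np.abs(state @ HYPER_W1).reshape(N_AGENTS, EMBED)  # weights >= 0
    b1 = state @ HYPER_B1                                   # bias: no sign constraint
    w2 = np.abs(state @ HYPER_W2)                           # weights >= 0
    hidden = np.maximum(agent_qs @ w1 + b1, 0.0)            # ReLU, also monotone
    return float(hidden @ w2)

state = rng.normal(size=STATE_SHAPE)
qs = rng.normal(size=N_AGENTS)
q_tot = mix(qs, state)
# Raising any single agent's Q-value can only raise (or keep) Q_tot.
assert mix(qs + np.eye(N_AGENTS)[0], state) >= q_tot
```

Because the mixing weights are non-negative, an argmax over each agent’s local Q-values is also an argmax of the global Q-value, which is what lets each agent act greedily on its own `agent_model` at execution time.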
learn(state_batch, actions_batch, reward_batch, terminated_batch, obs_batch, available_actions_batch, filled_batch)[source]
Parameters:
  • state_batch (paddle.Tensor) – (batch_size, T, state_shape)
  • actions_batch (paddle.Tensor) – (batch_size, T, n_agents)
  • reward_batch (paddle.Tensor) – (batch_size, T, 1)
  • terminated_batch (paddle.Tensor) – (batch_size, T, 1)
  • obs_batch (paddle.Tensor) – (batch_size, T, n_agents, obs_shape)
  • available_actions_batch (paddle.Tensor) – (batch_size, T, n_agents, n_actions)
  • filled_batch (paddle.Tensor) – (batch_size, T, 1)
Returns:

  • loss (float) – train loss
  • td_error (float) – train TD error

Return type:

tuple of (float, float)
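
The batch layout above can be sketched directly from the documented shapes. The sizes below are hypothetical placeholders (in practice the batch comes from an episode replay buffer and is converted to `paddle.Tensor`, e.g. with `paddle.to_tensor`, before calling `learn`):

```python
import numpy as np

# Hedged sketch of the batch layout learn() expects. All sizes are
# hypothetical; only the shape patterns follow the documented signature.
batch_size, T, n_agents = 32, 60, 3
state_shape, obs_shape, n_actions = 48, 30, 9

batch = {
    "state_batch": np.zeros((batch_size, T, state_shape), "float32"),
    "actions_batch": np.zeros((batch_size, T, n_agents), "int64"),
    "reward_batch": np.zeros((batch_size, T, 1), "float32"),
    "terminated_batch": np.zeros((batch_size, T, 1), "float32"),
    "obs_batch": np.zeros((batch_size, T, n_agents, obs_shape), "float32"),
    # 1.0 marks an action as available at that step.
    "available_actions_batch": np.ones((batch_size, T, n_agents, n_actions), "float32"),
    # 1.0 marks a real (non-padding) timestep in the episode.
    "filled_batch": np.ones((batch_size, T, 1), "float32"),
}

# Conversion and the actual call would then look like (sketch, not run here):
# loss, td_error = qmix.learn(**{k: paddle.to_tensor(v) for k, v in batch.items()})
```

Note that all tensors share the leading `(batch_size, T, ...)` axes: QMIX trains on whole padded episodes, with `filled_batch` masking out padding timesteps from the loss.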