parl.Agent

class Agent(algorithm)[source]
alias: parl.Agent
alias: parl.core.paddle.agent.Agent
Agent is one of the three basic classes of PARL. It is responsible for interacting with the environment and collecting data for training the policy.

To implement a customized Agent, users can:

import parl

class MyAgent(parl.Agent):
    def __init__(self, algorithm, act_dim):
        super(MyAgent, self).__init__(algorithm)
        self.act_dim = act_dim
Variables: alg (parl.Algorithm) – algorithm of this agent.
Public Functions:
  • sample: return a noisy action to perform exploration according to the policy.
  • predict: return an action given current observation.
  • learn: update the parameters of self.alg given the training data.
  • save: save parameters of the agent to a given path.
  • restore: restore previous saved parameters from a given path.
  • train: set the agent in training mode.
  • eval: set the agent in evaluation mode.
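The division of labor above can be illustrated with a minimal, self-contained sketch. ToyAlgorithm and ToyAgent are hypothetical stand-ins written for this example only (they are not PARL classes); they merely mirror the sample/predict/learn/train/eval interface listed above:

```python
import random

class ToyAlgorithm:
    # Hypothetical stand-in for parl.Algorithm: owns the policy parameters.
    def __init__(self):
        self.weight = 0.5  # a single scalar "policy parameter"

    def predict(self, obs):
        return self.weight * obs

    def learn(self, obs, target):
        # One crude update step: move the parameter toward the target.
        self.weight += 0.1 * (target - self.predict(obs))

class ToyAgent:
    # Mirrors the Agent interface: sample/predict/learn/train/eval.
    def __init__(self, algorithm):
        self.alg = algorithm
        self.training = True  # training mode is the default, as in parl.Agent

    def predict(self, obs):
        return self.alg.predict(obs)

    def sample(self, obs):
        # Exploration: perturb the greedy action with Gaussian noise.
        return self.predict(obs) + random.gauss(0.0, 0.1)

    def learn(self, obs, target):
        self.alg.learn(obs, target)

    def train(self):
        self.training = True

    def eval(self):
        self.training = False

agent = ToyAgent(ToyAlgorithm())
agent.learn(obs=1.0, target=1.0)
print(agent.predict(1.0))  # 0.55: one update step moved the weight from 0.5
```

The point of the split is that the agent owns environment interaction (sample/predict) while delegating parameter updates to its algorithm (self.alg.learn).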
__init__(algorithm)[source]
Parameters: algorithm (parl.Algorithm) – an instance of parl.Algorithm. This algorithm is then passed to self.alg.
eval()[source]

Sets the agent in evaluation mode.

learn(*args, **kwargs)[source]

The training interface for Agent.

predict(*args, **kwargs)[source]

Predict an action when given the observation of the environment.

restore(save_path, model=None)[source]

Restore previously saved parameters. This method requires a model that describes the network structure. The save_path argument is typically a value previously passed to save().

Parameters:
  • save_path (str) – path where parameters were previously saved.
  • model (parl.Model) – model that describes the neural network structure. If None, will use self.alg.model.
Raises:

ValueError – if model is None and self.alg has no model to restore from.

Example:

agent = AtariAgent()
agent.save('./model_dir')
agent.restore('./model_dir')
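The save/restore contract is a simple round trip: restore reads back exactly what save wrote to the same path. A standalone sketch of that contract (plain pickling of a dict; the helper names here are hypothetical, and PARL itself uses paddle's own serialization):

```python
import os
import pickle
import tempfile

def save(params, save_path):
    # Persist parameters to disk. This sketch just pickles a plain dict;
    # it is not PARL's on-disk format.
    with open(save_path, "wb") as f:
        pickle.dump(params, f)

def restore(save_path):
    # Load back exactly what save() wrote to the same path.
    with open(save_path, "rb") as f:
        return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), "model.ckpt")
save({"w": 0.55}, path)
print(restore(path))  # {'w': 0.55}
```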
sample(*args, **kwargs)[source]

Return an action with noise when given the observation of the environment.

In general, this function is used during training, as noise is added to the action to perform exploration.
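For a continuous-action policy, "with noise" usually means perturbing the deterministic action. A standalone sketch of that pattern, in which the noise scale and action bounds are illustrative assumptions rather than PARL defaults:

```python
import random

def predict(obs):
    # Deterministic policy output (stand-in for self.alg.predict).
    return 0.3 * obs

def sample(obs, noise_std=0.2, low=-1.0, high=1.0):
    # Exploration: Gaussian noise around the greedy action, clipped
    # to the valid action range.
    action = predict(obs) + random.gauss(0.0, noise_std)
    return max(low, min(high, action))

random.seed(0)
actions = [sample(1.0) for _ in range(1000)]
# Sampled actions stay in range and scatter around the greedy action 0.3.
print(min(actions), max(actions))
```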

save(save_path, model=None)[source]

Save parameters.

Parameters:
  • save_path (str) – where to save the parameters.
  • model (parl.Model) – model that describes the neural network structure. If None, will use self.alg.model.

Example:

agent = AtariAgent()
agent.save('./model_dir')
save_inference_model(save_path, input_shape_list, input_dtype_list, model=None)[source]

Saves the policy network (a paddle Layer or function) in the paddle.jit.TranslatedLayer format, which can later be loaded for inference.

Parameters:
  • save_path (str) – where to save the inference_model.
  • model (parl.Model) – model that describes the policy network structure. If None, will use self.alg.model.
  • input_shape_list (list) – shapes of all inputs to the saved model’s forward method, e.g. [[None, 128]] declares a single input whose batch dimension is variable (None) and whose feature dimension is 128.
  • input_dtype_list (list) – dtypes of all inputs to the saved model’s forward method, e.g. ['float32'].

Example:

agent = AtariAgent()
agent.save_inference_model('./inference_model_dir', [[None, 128]], ['float32'])

Example with actor-critic:

agent = AtariAgent()
agent.save_inference_model('./inference_ac_model_dir', [[None, 128]], ['float32'], agent.alg.model.actor_model)
train()[source]

Sets the agent in training mode, which is the default. The mode matters when the agent's model contains modules (e.g. Dropout, BatchNorm) that behave differently in training and evaluation mode.

Example:

agent.train()   # default setting
assert (agent.training is True)
agent.eval()
assert (agent.training is False)
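Why the mode matters can be seen with a Dropout-like module. ToyDropout below is a hypothetical stand-in written for this example (in PARL the real behavior lives in the underlying paddle layers, which agent.train()/agent.eval() switch):

```python
import random

class ToyDropout:
    # Hypothetical module that behaves differently in train vs. eval mode.
    def __init__(self, p=0.5):
        self.p = p
        self.training = True

    def __call__(self, x):
        if not self.training:
            return x  # evaluation mode: identity, fully deterministic
        # training mode: randomly zero elements and rescale the survivors
        return [0.0 if random.random() < self.p else v / (1.0 - self.p)
                for v in x]

layer = ToyDropout()
layer.training = False           # what agent.eval() propagates to modules
print(layer([1.0, 2.0, 3.0]))    # [1.0, 2.0, 3.0] -- unchanged in eval mode
```

In training mode the same call would zero out roughly half the elements and scale the rest by 1/(1-p), which is why evaluating a policy without calling eval() first can give noisy, non-reproducible actions.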