parl.Agent

class Agent(algorithm)[source]
alias: parl.Agent
alias: parl.core.paddle.agent.Agent
Agent is one of the three basic classes of PARL. It is responsible for interacting with the environment and collecting data for training the policy.

To implement a customized Agent, users can:

import parl

class MyAgent(parl.Agent):
    def __init__(self, algorithm, act_dim):
        super(MyAgent, self).__init__(algorithm)
        self.act_dim = act_dim
Variables:

alg (parl.algorithm) – algorithm of this agent.

Public Functions:
  • sample: return a noisy action to perform exploration according to the policy.

  • predict: return an action given current observation.

  • learn: the training interface that updates the parameters of self.alg.

  • save: save parameters of the agent to a given path.

  • restore: restore previous saved parameters from a given path.

  • train: set the agent in training mode.

  • eval: set the agent in evaluation mode.
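These methods define the contract between an agent and a training loop. As a rough, PARL-free sketch (the `ToyAgent` class and its internals are illustrative assumptions, not PARL's implementation), the train/eval switch is simply a boolean flag that the underlying model can consult:

```python
# Illustrative stand-in for the Agent interface; NOT PARL's implementation.
class ToyAgent:
    def __init__(self, algorithm):
        self.alg = algorithm        # mirrors parl.Agent's self.alg attribute
        self.training = True        # training mode is the default setting

    def train(self):
        self.training = True        # e.g. enables Dropout / BatchNorm updates

    def eval(self):
        self.training = False       # deterministic evaluation behavior

agent = ToyAgent(algorithm=None)
assert agent.training is True       # default mode
agent.eval()
assert agent.training is False
agent.train()
assert agent.training is True
```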

__init__(algorithm)[source]
Parameters:

algorithm (parl.Algorithm) – an instance of parl.Algorithm. This algorithm is then passed to self.alg.

eval()[source]

Sets the agent in evaluation mode.

learn(*args, **kwargs)[source]

The training interface for Agent.

predict(*args, **kwargs)[source]

Predict an action when given the observation of the environment.

restore(save_path, model=None)[source]

Restore previously saved parameters. This method requires a model that describes the network structure. The save_path argument is typically a value previously passed to save().

Parameters:
  • save_path (str) – path where parameters were previously saved.

  • model (parl.Model) – model that describes the neural network structure. If None, will use self.alg.model.

Raises:

ValueError – if model is None and self.alg.model does not exist.

Example:

agent = AtariAgent()
agent.save('./model_dir')
agent.restore('./model_dir')
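Conceptually, save() and restore() are a round trip of the agent's parameters through disk. A toy, PARL-free sketch of that round trip (using a plain dict and pickle; PARL's actual on-disk format differs):

```python
import os
import pickle
import tempfile

# Toy stand-in: parameters as a plain dict; PARL stores real network weights.
params = {'fc1.weight': [0.1, 0.2], 'fc1.bias': [0.0]}

save_path = os.path.join(tempfile.mkdtemp(), 'model_params')
with open(save_path, 'wb') as f:      # "save": serialize parameters to path
    pickle.dump(params, f)

with open(save_path, 'rb') as f:      # "restore": load them back from path
    restored = pickle.load(f)

assert restored == params             # round trip preserves the parameters
```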
sample(*args, **kwargs)[source]

Return an action with noise when given the observation of the environment.

In general, this function is used during training, as noise is added to the action to perform exploration.
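The distinction from predict() can be sketched as adding exploration noise on top of the deterministic action (a toy continuous-action example; Gaussian noise is one common choice, not necessarily what a given algorithm uses):

```python
import random

def predict(obs):
    # toy deterministic policy: action is a fixed function of the observation
    return 0.5 * obs

def sample(obs, sigma=0.1):
    # exploration: perturb the deterministic action with Gaussian noise
    return predict(obs) + random.gauss(0.0, sigma)

obs = 2.0
assert predict(obs) == 1.0            # predict is deterministic
noisy = sample(obs)                   # sample varies from call to call
assert abs(noisy - predict(obs)) < 5.0
```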

save(save_path, model=None)[source]

Save parameters.

Parameters:
  • save_path (str) – where to save the parameters.

  • model (parl.Model) – model that describes the neural network structure. If None, will use self.alg.model.

Example:

agent = AtariAgent()
agent.save('./model_dir')
save_inference_model(save_path, input_shape_list, input_dtype_list, model=None)[source]

Saves the input Layer or function in paddle.jit.TranslatedLayer format, which can then be used for inference.

Parameters:
  • save_path (str) – where to save the inference_model.

  • model (parl.Model) – model that describes the policy network structure. If None, will use self.alg.model.

  • input_shape_list (list) – shape of all inputs of the saved model’s forward method.

  • input_dtype_list (list) – dtype of all inputs of the saved model’s forward method.

Example:

agent = AtariAgent()
agent.save_inference_model('./inference_model_dir', [[None, 128]], ['float32'])

Example with actor-critic:

agent = AtariAgent()
agent.save_inference_model('./inference_ac_model_dir', [[None, 128]], ['float32'], agent.alg.model.actor_model)
train()[source]

Sets the agent in training mode, which is the default setting. The agent's model will be affected if it contains modules (e.g. Dropout, BatchNorm) that behave differently in training and evaluation mode.

Example:

agent.train()   # default setting
assert (agent.training is True)
agent.eval()
assert (agent.training is False)