parl.Agent

class Agent(algorithm, gpu_id=None)[source]
alias: parl.Agent
alias: parl.core.fluid.agent.Agent
Agent is one of the three basic classes of PARL.
It is responsible for interacting with the environment and collecting data for training the policy.
To implement a customized Agent, users can:
import parl

class MyAgent(parl.Agent):
    def __init__(self, algorithm, act_dim):
        super(MyAgent, self).__init__(algorithm)
        self.act_dim = act_dim

This class initializes the neural network parameters automatically, and provides an executor (self.fluid_executor) for users to run the programs.

Variables:
  • gpu_id (int) – deprecated. Specifies which GPU to use; -1 means use the CPU.
  • fluid_executor (fluid.Executor) – executor for running programs of the agent.
  • alg (parl.algorithm) – algorithm of this agent.
Public Functions:
  • build_program (abstract function): build various programs for the agent to interact with the outer environment.
  • get_weights: return a Python dictionary containing all the parameters of self.alg.
  • set_weights: copy parameters (in the format returned by get_weights()) into this agent.
  • sample: return a noisy action to perform exploration according to the policy.
  • predict: return an action given current observation.
  • learn: update the parameters of self.alg using the learn_program defined in build_program().
  • save: save parameters of the agent to a given path.
  • restore: restore previous saved parameters from a given path.
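The get_weights/set_weights pair treats the agent's parameters as a plain dictionary that can be copied between agents (e.g., to synchronize a target network or a remote actor). A minimal pure-Python sketch of that contract, assuming a hypothetical ToyAgent with made-up parameter names (this is not the PARL implementation):

```python
import copy

class ToyAgent:
    """Hypothetical agent illustrating the weights-dict contract."""

    def __init__(self):
        # Made-up parameter names and values for illustration only.
        self._params = {'fc1.w': [0.1, 0.2], 'fc1.b': [0.0]}

    def get_weights(self):
        # Return a deep copy so callers cannot mutate internal state.
        return copy.deepcopy(self._params)

    def set_weights(self, weights):
        # Copy a dictionary in the get_weights() format into this agent.
        self._params = copy.deepcopy(weights)

src, dst = ToyAgent(), ToyAgent()
dst.set_weights(src.get_weights())  # synchronize dst with src
```

Returning a copy (rather than the internal dict) keeps the two agents independent after synchronization.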
__init__(algorithm, gpu_id=None)[source]

Build programs by calling the method self.build_program(), and run fluid.default_startup_program() to initialize the parameters.

Parameters:
  • algorithm (parl.Algorithm) – an instance of parl.Algorithm. This algorithm is then passed to self.alg.
  • gpu_id (int) – deprecated. Specifies which GPU to use; -1 means use the CPU.
build_program()[source]

Build the various programs here with the learn, predict, and sample functions of the algorithm.

Note

Users must implement this function in a customized Agent.
This function is called automatically during initialization.
To build a program, do the following:
  1. Create a fluid program with fluid.program_guard();
  2. Define data layers for feeding the data;
  3. Build the programs (e.g., learn_program, predict_program) with the data layers defined in step 2.

Example:

self.pred_program = fluid.Program()

with fluid.program_guard(self.pred_program):
    # Data layer that feeds the observation at run time.
    obs = layers.data(
        name='obs', shape=[self.obs_dim], dtype='float32')
    self.act_prob = self.alg.predict(obs)
get_params()[source]

Returns a Python dictionary containing all the parameters of self.alg.

Deprecated since version 1.2: This will be removed in 1.3, please use get_weights instead.

Returns: a Python dictionary containing the parameters of self.alg.
learn(*args, **kwargs)[source]

The training interface for Agent. This function feeds the training data into the learn_program defined in build_program().
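A sketch of where learn() typically sits in a training loop, using stand-in classes rather than PARL or fluid (StubEnv, StubAgent, and their return values are made up; a real agent would feed the batch into its learn_program here):

```python
class StubEnv:
    """Hypothetical one-step environment, for illustration only."""

    def reset(self):
        return 0.0                       # initial observation

    def step(self, action):
        return 0.0, 1.0, True            # next_obs, reward, done

class StubAgent:
    """Hypothetical agent; stands in for a parl.Agent subclass."""

    def sample(self, obs):
        return 0                         # noisy action for exploration

    def learn(self, obs, action, reward):
        # A real agent would run its learn_program here; we just
        # return a pretend training cost.
        return abs(reward - action)

env, agent = StubEnv(), StubAgent()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action = agent.sample(obs)           # explore
    obs, reward, done = env.step(action)
    total_reward += reward
    cost = agent.learn(obs, action, reward)  # update the policy
```

The loop shape (sample, step, learn) is the common pattern; the exact arguments of learn() are whatever the user defines in their Agent subclass.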

predict(*args, **kwargs)[source]

Predict an action when given the observation of the environment.

This function feeds the observation into the prediction program defined in build_program(). It is often used in the evaluation stage.

restore(save_path, program=None)[source]

Restore previously saved parameters. This method requires a program that describes the network structure. The save_path argument is typically a value previously passed to save().

Parameters:
  • save_path (str) – path where parameters were previously saved.
  • program (fluid.Program) – program that describes the neural network structure. If None, will use self.learn_program.
Raises:

ValueError – if program is None and self.learn_program does not exist.

Example:

agent = AtariAgent()
agent.save('./model.ckpt')
agent.restore('./model.ckpt')
sample(*args, **kwargs)[source]

Return an action with noise when given the observation of the environment.

In general, this function is used in the training process, as noise is added to the action to perform exploration.
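The distinction between sample() and predict() can be illustrated with a paddle-free epsilon-greedy sketch (the action scores and epsilon value below are made up; PARL does not prescribe this particular noise scheme):

```python
import random

# Hypothetical action scores produced by a policy.
ACT_SCORES = [0.1, 0.7, 0.2]

def predict(scores):
    # Deterministic: pick the highest-scoring action (used in evaluation).
    return max(range(len(scores)), key=scores.__getitem__)

def sample(scores, epsilon=0.1, rng=random):
    # Epsilon-greedy noise: with probability epsilon, explore with a
    # uniformly random action; otherwise act greedily.
    if rng.random() < epsilon:
        return rng.randrange(len(scores))
    return predict(scores)
```

With epsilon=0 sample() degenerates to predict(); during training a nonzero epsilon keeps the agent exploring.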

save(save_path, program=None)[source]

Save parameters.

Parameters:
  • save_path (str) – where to save the parameters.
  • program (fluid.Program) – program that describes the neural network structure. If None, will use self.learn_program.
Raises:

ValueError – if program is None and self.learn_program does not exist.

Example:

agent = AtariAgent()
agent.save('./model.ckpt')
set_params(params)[source]

Copy parameters from get_params() into this agent.

Deprecated since version 1.2: This will be removed in 1.3, please use set_weights instead.

Parameters: params (dict) – a Python dictionary containing the parameters of self.alg.