tensortrade.agents.a2c_agent module

class tensortrade.agents.a2c_agent.A2CAgent(env: TradingEnvironment, shared_network: tf.keras.Model = None, actor_network: tf.keras.Model = None, critic_network: tf.keras.Model = None)[source]

Bases: tensortrade.agents.agent.Agent

get_action(state: numpy.ndarray, **kwargs) → int[source]

Get an action for a specific state in the environment.
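The actual action comes from sampling the actor network's policy output; the sketch below imitates that flow with a hand-rolled softmax stub. `actor_logits`, the `threshold` exploration keyword, and the 3-action space are all assumptions for illustration, not the library's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    # Numerically stable softmax over the action logits.
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def actor_logits(state):
    # Hypothetical stand-in for the actor network's forward pass.
    weights = np.ones((state.shape[0], 3))  # 3 discrete actions assumed
    return state @ weights

def get_action(state, threshold=0.0):
    # With probability `threshold`, explore uniformly at random;
    # otherwise sample an action from the softmax policy.
    if rng.random() < threshold:
        return int(rng.integers(0, 3))
    probs = softmax(actor_logits(state))
    return int(rng.choice(3, p=probs))

state = np.array([0.1, -0.2, 0.3])
action = get_action(state)
```

With `threshold=0.0` the stub always samples from the policy; raising it mixes in uniform exploration, which is a common pattern for `**kwargs`-driven exploration knobs.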

restore(path: str, **kwargs)[source]

Restore the agent from the file specified in path.

save(path: str, **kwargs)[source]

Save the agent to the directory specified in path.

train(n_steps: int = None, n_episodes: int = None, save_every: int = None, save_path: str = None, callback: callable = None, **kwargs) → float[source]

Train the agent in the environment and return the mean reward.
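The `save_every` / `save_path` / `callback` parameters imply a periodic-checkpoint cadence inside the episode loop. This is a minimal sketch of that cadence with a placeholder rollout; it is not the library's implementation, and the per-episode reward here is fabricated purely to make the loop runnable.

```python
def train(n_episodes, save_every=None, save_path=None, callback=None):
    # Sketch of the documented train() contract: run episodes,
    # optionally checkpoint every `save_every` episodes, and
    # return the mean episode reward.
    rewards = []
    for episode in range(1, n_episodes + 1):
        episode_reward = float(episode)  # placeholder for a real rollout
        rewards.append(episode_reward)
        if callback is not None:
            callback(episode, episode_reward)
        if save_every and save_path and episode % save_every == 0:
            pass  # agent.save(save_path) would be called here
    return sum(rewards) / len(rewards)  # mean reward, as documented

mean_reward = train(n_episodes=4, save_every=2, save_path="ckpt/")
```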

class tensortrade.agents.a2c_agent.A2CTransition(state, action, reward, done, value)

Bases: tuple

state

Alias for field number 0

action

Alias for field number 1

reward

Alias for field number 2

done

Alias for field number 3

value

Alias for field number 4
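A2CTransition is a namedtuple bundling one step of experience, including the critic's value estimate. The sketch below rebuilds an equivalent namedtuple locally (mirroring the documented field order) and shows the kind of discounted-return and advantage computation an A2C update performs over a rollout; the rollout values and `gamma` are made up for illustration.

```python
from collections import namedtuple

# Mirrors the documented fields: state, action, reward, done, value.
A2CTransition = namedtuple(
    "A2CTransition", ["state", "action", "reward", "done", "value"]
)

transitions = [
    A2CTransition(state=[0.0], action=1, reward=1.0, done=False, value=0.5),
    A2CTransition(state=[0.1], action=0, reward=0.0, done=False, value=0.4),
    A2CTransition(state=[0.2], action=1, reward=2.0, done=True, value=0.3),
]

gamma = 0.99
returns = []
running = 0.0
for t in reversed(transitions):
    # Walk backwards, resetting the running return at episode ends.
    running = t.reward + gamma * running * (1.0 - float(t.done))
    returns.insert(0, running)

# Advantage = discounted return minus the critic's value estimate.
advantages = [r - t.value for r, t in zip(returns, transitions)]
```

The `done` flag zeroes out the bootstrapped tail, so returns never leak across episode boundaries.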