tensortrade.env.default package
tensortrade.env.default.create(portfolio: tensortrade.oms.wallets.portfolio.Portfolio, action_scheme: Union[tensortrade.env.default.actions.TensorTradeActionScheme, str], reward_scheme: Union[tensortrade.env.default.rewards.TensorTradeRewardScheme, str], feed: tensortrade.feed.core.feed.DataFeed, window_size: int = 1, min_periods: int = None, random_start_pct: float = 0.0, **kwargs) → tensortrade.env.generic.environment.TradingEnv

Creates the default TradingEnv of the project, to be used for training RL agents.
Parameters: - portfolio (Portfolio) – The portfolio to be used by the environment.
- action_scheme (actions.TensorTradeActionScheme or str) – The action scheme for computing actions at every step of an episode.
- reward_scheme (rewards.TensorTradeRewardScheme or str) – The reward scheme for computing rewards at every step of an episode.
- feed (DataFeed) – The feed for generating observations to be used in the look-back window.
- window_size (int) – The size of the look-back window to use for the observation space.
- min_periods (int, optional) – The minimum number of steps to warm up the feed.
- random_start_pct (float, optional) – The fraction of the sample, measured from its start, within which the starting point is randomized at each observer reset. For example, 0.1 starts each episode somewhere in the first 10% of the data.
- **kwargs (keyword arguments) – Extra keyword arguments needed to build the environment.
Returns: TradingEnv – The default trading environment.
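The window_size and min_periods parameters together control how observations are framed: each observation covers the most recent window_size rows of the feed, and stepping begins only once enough rows have accumulated. The sketch below illustrates that look-back logic in plain Python, independently of tensortrade; the function name and the default of min_periods falling back to window_size are illustrative assumptions, not part of the library's API.

```python
from collections import deque


def lookback_windows(rows, window_size=1, min_periods=None):
    """Yield look-back observations over `rows` (illustrative sketch).

    Each yielded observation is the last `window_size` rows seen so far;
    nothing is yielded until `min_periods` rows have accumulated
    (assumed here to default to `window_size`).
    """
    if min_periods is None:
        min_periods = window_size
    buffer = deque(maxlen=window_size)  # rolling look-back window
    for count, row in enumerate(rows, start=1):
        buffer.append(row)
        if count >= min_periods:
            yield list(buffer)


# Example: a feed of 5 rows with a look-back window of 3 rows.
feed_rows = [10, 11, 12, 13, 14]
observations = list(lookback_windows(feed_rows, window_size=3))
# Each observation holds the 3 most recent rows.
```

A larger min_periods delays the first observation further, which mirrors warming up a feed whose indicators need more history than the window itself.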