As we already have lots of code designed to work with OpenAI Gym, we'll implement the trading functionality following Gym's Env class API, which should be familiar to you. Our environment is implemented in the StocksEnv class in the Chapter08/lib/environ.py module. It uses several internal classes to keep its state and encode observations. Let's first look at the public API class.
class Actions(enum.Enum):
    Skip = 0
    Buy = 1
    Close = 2
We encode all available actions as an enumerator’s fields. We support a very simple set of actions with only three options: do nothing, buy a single share, and close the existing position.
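An enumeration like this maps naturally onto Gym's discrete action space: the space has one entry per enum member, and the integer the agent emits is converted back into an Actions value. The lines below are only an illustrative sketch (the environment wires this up internally); they assume the Actions enum defined above and the standard gym.spaces.Discrete class.

import gym

# One discrete action per enum member; the agent works with plain integers.
action_space = gym.spaces.Discrete(n=len(Actions))

action_idx = action_space.sample()   # e.g. 1
action = Actions(action_idx)         # maps the integer back to Actions.Buy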
class StocksEnv(gym.Env):
    metadata = {'render.modes': ['human']}
This metadata field is required for gym.Env compatibility. We don't provide render functionality, so you can ignore it.
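If you want the class to stay well behaved when someone calls render() anyway, a do-nothing stub is enough. This is a sketch, not code from the book, and it assumes the classic gym API where render() takes a mode argument:

def render(self, mode='human'):
    # No visualization is provided; this stub only satisfies the gym.Env interface.
    pass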
@classmethod
def from_dir(cls, data_dir, **kwargs):
    prices ...
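The listing is truncated here. As a rough, self-contained sketch of what such a factory method does (the loader load_prices_csv and the simplified constructor below are placeholders for illustration, not the book's actual helpers from Chapter08/lib/data.py), it scans a directory for price files, loads each one, and builds the environment from the resulting dict:

import glob
import os

def load_prices_csv(path):
    # Placeholder for the book's data-loading helpers; it should return
    # the per-file price data structure that StocksEnv expects.
    raise NotImplementedError

class StocksEnv:  # simplified stand-in for the real gym.Env subclass
    def __init__(self, prices, **kwargs):
        self._prices = prices

    @classmethod
    def from_dir(cls, data_dir, **kwargs):
        # Collect every CSV file in data_dir into a dict keyed by file name,
        # then construct the environment from the loaded price data.
        prices = {
            os.path.basename(path): load_prices_csv(path)
            for path in glob.glob(os.path.join(data_dir, '*.csv'))
        }
        return cls(prices, **kwargs)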