Reinforcement Learning with TensorFlow

By: Dutta
Overview of this book

Reinforcement learning (RL) allows you to develop smart, quick, self-learning systems for your business environment. It is an effective method for training learning agents and solving a variety of problems in artificial intelligence, from games, self-driving cars, and robots to enterprise applications such as data-center energy saving (cooling data centers) and smart warehousing. The book covers the major advancements and successes achieved in deep reinforcement learning by combining deep neural network architectures with reinforcement learning. You'll be introduced to the concept of reinforcement learning, its advantages, and the reasons it is gaining so much popularity. You'll explore MDPs, Monte Carlo tree search, dynamic programming methods such as policy and value iteration, and temporal-difference learning methods such as Q-learning and SARSA. You will use TensorFlow and OpenAI Gym to build simple neural network models that learn from their own actions. You will also see how reinforcement learning algorithms play a role in games, image processing, and NLP. By the end of this book, you will have gained a firm understanding of what reinforcement learning is and how to put your knowledge to practical use by leveraging the power of TensorFlow and OpenAI Gym.
Table of Contents (17 chapters)

Data preparation


The trading experiment is run on the Poloniex cryptocurrency exchange. To test the current approach, the m = 11 non-cash assets with the highest trading volume are pre-selected for the portfolio. Since the first asset is the base currency, cash (here, Bitcoin), the total portfolio size is m + 1 = 12. In a market with larger volumes, such as the foreign exchange market, m could be as large as the total number of assets in the market.
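As a sketch, the pre-selection step could look like the following, where the `volumes` mapping of asset symbols to 24-hour trading volumes is entirely hypothetical data, not real Poloniex figures:

```python
# Hypothetical 24-hour trading volumes (in BTC) for candidate Poloniex assets
volumes = {
    "ETH": 4120.0, "XRP": 3890.5, "LTC": 2510.2, "XMR": 1980.7,
    "DASH": 1640.3, "ZEC": 1422.9, "ETC": 1310.4, "XEM": 1150.8,
    "STR": 990.1, "FCT": 870.6, "DOGE": 810.3, "BTS": 640.2,
}

m = 11  # number of non-cash assets to pre-select

# Sort symbols by volume, descending, and keep the top m assets
selected = sorted(volumes, key=volumes.get, reverse=True)[:m]

# Cash (Bitcoin) is the first base asset, giving m + 1 = 12 portfolio slots
portfolio = ["BTC"] + selected
print(len(portfolio))  # 12
```

The only design point worth noting is that cash is always prepended, so the network's output weight vector has a fixed slot for staying out of the market.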

Historical data for the assets is fed into a neural network, which outputs a portfolio weight vector. The input to the neural network at the end of period t is a rank-3 tensor with shape (f, n, m), where:

  • m is the number of pre-selected non-cash assets
  • n is the number of preceding input periods (here, n = 50)
  • f = 3 is the number of features per asset

Since n = 50 (that is, there are 50 input periods) and each period spans 30 minutes, the total time frame is 50 × 30 minutes = 1,500 minutes = 25 hours. The features of asset i in time period t are its closing...
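Putting the shape description together, a minimal sketch of assembling one input tensor might look like the following. The random price history and the normalization by the latest closing price are illustrative assumptions for the sketch, not the book's exact preprocessing:

```python
import numpy as np

m, n, f = 11, 50, 3  # non-cash assets, input periods, features per asset

# Hypothetical price history of shape (f, n, m): one feature matrix
# (e.g. closing price as feature 0) per half-hour period per asset.
rng = np.random.default_rng(0)
prices = rng.uniform(0.9, 1.1, size=(f, n, m))

# One plausible normalization (an assumption here): divide every feature
# by the same asset's latest closing price, so inputs are scale-free and
# the most recent close of each asset maps to exactly 1.0.
X_t = prices / prices[0, -1, :]

print(X_t.shape)  # (3, 50, 11)
```

Normalizing per asset keeps cheap and expensive coins on a comparable scale, which matters because the network shares weights across the asset dimension.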
