Foundations of Reinforcement Learning with Applications in Finance
- Length: 522 pages
- Edition: 1
- Language: English
- Publisher: Chapman and Hall/CRC
- Publication Date: 2022-12-13
- ISBN-10: 1032124121
- ISBN-13: 9781032124124
- Sales Rank: #5531662
Foundations of Reinforcement Learning with Applications in Finance aims to demystify Reinforcement Learning and to make it a practically useful tool for those studying and working in applied areas, especially finance.
Reinforcement Learning is emerging as a viable and powerful technique for solving a variety of complex problems across industries that involve Sequential Optimal Decisioning under Uncertainty. Its penetration into high-profile problems like self-driving cars, robotics, and strategy games points to a future where Reinforcement Learning algorithms will have decisioning abilities far superior to humans. But when it comes to getting educated in this area, there seems to be a reluctance to jump right in, because Reinforcement Learning appears to have acquired a reputation for being mysterious and exotic. Even technical people will often claim that the subject involves advanced math and complicated engineering, erecting a psychological barrier to entry against otherwise interested students.
This book seeks to overcome that barrier and to introduce the foundations of Reinforcement Learning in a way that balances depth of understanding with clear, minimally technical delivery.
Features
- Focus on the foundational theory underpinning Reinforcement Learning
- Suitable as a primary text for courses in Reinforcement Learning, but also as supplementary reading for applied/financial mathematics, programming, and other related courses
- Suitable for a professional audience of quantitative analysts or industry specialists
- Blends theory/mathematics, programming/algorithms and real-world financial nuances while always striving to maintain simplicity and to build intuitive understanding
Cover Page
Half-Title Page
Title Page
Copyright Page
Contents
Preface
Author Biographies
Summary of Notation
Chapter 1 ◾ Overview
1.1 Learning Reinforcement Learning
1.2 What You Will Learn from This Book
1.3 Expected Background to Read This Book
1.4 Decluttering the Jargon Linked to Reinforcement Learning
1.5 Introduction to the Markov Decision Process (MDP) Framework
1.6 Real-World Problems That Fit the MDP Framework
1.7 The Inherent Difficulty in Solving MDPs
1.8 Value Function, Bellman Equations, Dynamic Programming and RL
1.9 Outline of Chapters
1.9.1 Module I: Processes and Planning Algorithms
1.9.2 Module II: Modeling Financial Applications
1.9.3 Module III: Reinforcement Learning Algorithms
1.9.4 Module IV: Finishing Touches
1.9.5 Short Appendix Chapters
Chapter 2 ◾ Programming and Design
2.1 Code Design
2.2 Environment Setup
2.3 Classes and Interfaces
2.3.1 A Distribution Interface
2.3.2 A Concrete Distribution
2.3.2.1 Dataclasses
2.3.2.2 Immutability
2.3.3 Checking Types
2.3.3.1 Static Typing
2.3.4 Type Variables
2.3.5 Functionality
2.4 Abstracting over Computation
2.4.1 First-Class Functions
2.4.1.1 Lambdas
2.4.2 Iterative Algorithms
2.4.2.1 Iterators and Generators
2.5 Key Takeaways from This Chapter
Module I: Processes and Planning Algorithms
Chapter 3 ◾ Markov Processes
3.1 The Concept of State in a Process
3.2 Understanding Markov Property from Stock Price Examples
3.3 Formal Definitions for Markov Processes
3.3.1 Starting States
3.3.2 Terminal States
3.3.3 Markov Process Implementation
3.4 Stock Price Examples Modeled as Markov Processes
3.5 Finite Markov Processes
3.6 Simple Inventory Example
3.7 Stationary Distribution of a Markov Process
3.8 Formalism of Markov Reward Processes
3.9 Simple Inventory Example as a Markov Reward Process
3.10 Finite Markov Reward Processes
3.11 Simple Inventory Example as a Finite Markov Reward Process
3.12 Value Function of a Markov Reward Process
3.13 Summary of Key Learnings from This Chapter
Chapter 4 ◾ Markov Decision Processes
4.1 Simple Inventory Example: How Much to Order?
4.2 The Difficulty of Sequential Decisioning Under Uncertainty
4.3 Formal Definition of a Markov Decision Process
4.4 Policy
4.5 [Markov Decision Process, Policy] := Markov Reward Process
4.6 Simple Inventory Example with Unlimited Capacity (Infinite State/Action Space)
4.7 Finite Markov Decision Processes
4.8 Simple Inventory Example as a Finite Markov Decision Process
4.9 MDP Value Function for a Fixed Policy
4.10 Optimal Value Function and Optimal Policies
4.11 Variants and Extensions of MDPs
4.11.1 Size of Spaces and Discrete versus Continuous
4.11.1.1 State Space
4.11.1.2 Action Space
4.11.1.3 Time Steps
4.11.2 Partially-Observable Markov Decision Processes (POMDPs)
4.12 Summary of Key Learnings from This Chapter
Chapter 5 ◾ Dynamic Programming Algorithms
5.1 Planning versus Learning
5.2 Usage of the Term Dynamic Programming
5.3 Fixed-Point Theory
5.4 Bellman Policy Operator and Policy Evaluation Algorithm
5.5 Greedy Policy
5.6 Policy Improvement
5.7 Policy Iteration Algorithm
5.8 Bellman Optimality Operator and Value Iteration Algorithm
5.9 Optimal Policy from Optimal Value Function
5.10 Revisiting the Simple Inventory Example
5.11 Generalized Policy Iteration
5.12 Asynchronous Dynamic Programming
5.13 Finite-Horizon Dynamic Programming: Backward Induction
5.14 Dynamic Pricing for End-of-Life/End-of-Season of a Product
5.15 Generalization to Non-Tabular Algorithms
5.16 Summary of Key Learnings from This Chapter
Chapter 6 ◾ Function Approximation and Approximate Dynamic Programming
6.1 Function Approximation
6.2 Linear Function Approximation
6.3 Neural Network Function Approximation
6.4 Tabular as a Form of FunctionApprox
6.5 Approximate Policy Evaluation
6.6 Approximate Value Iteration
6.7 Finite-Horizon Approximate Policy Evaluation
6.8 Finite-Horizon Approximate Value Iteration
6.9 Finite-Horizon Approximate Q-Value Iteration
6.10 How to Construct the Non-Terminal States Distribution
6.11 Key Takeaways from This Chapter
Module II: Modeling Financial Applications
Chapter 7 ◾ Utility Theory
7.1 Introduction to the Concept of Utility
7.2 A Simple Financial Example
7.3 The Shape of the Utility Function
7.4 Calculating the Risk-Premium
7.5 Constant Absolute Risk-Aversion (CARA)
7.6 A Portfolio Application of CARA
7.7 Constant Relative Risk-Aversion (CRRA)
7.8 A Portfolio Application of CRRA
7.9 Key Takeaways from This Chapter
Chapter 8 ◾ Dynamic Asset-Allocation and Consumption
8.1 Optimization of Personal Finance
8.2 Merton's Portfolio Problem and Solution
8.3 Developing Intuition for the Solution to Merton's Portfolio Problem
8.4 A Discrete-Time Asset-Allocation Example
8.5 Porting to Real-World
8.6 Key Takeaways from This Chapter
Chapter 9 ◾ Derivatives Pricing and Hedging
9.1 A Brief Introduction to Derivatives
9.1.1 Forwards
9.1.2 European Options
9.1.3 American Options
9.2 Notation for the Single-Period Simple Setting
9.3 Portfolios, Arbitrage and Risk-Neutral Probability Measure
9.4 First Fundamental Theorem of Asset Pricing (1st FTAP)
9.5 Second Fundamental Theorem of Asset Pricing (2nd FTAP)
9.6 Derivatives Pricing in Single-Period Setting
9.6.1 Derivatives Pricing When Market Is Complete
9.6.2 Derivatives Pricing When Market Is Incomplete
9.6.3 Derivatives Pricing When Market Has Arbitrage
9.7 Derivatives Pricing in Multi-Period/Continuous-Time
9.7.1 Multi-Period Complete-Market Setting
9.7.2 Continuous-Time Complete-Market Setting
9.8 Optimal Exercise of American Options Cast as a Finite MDP
9.9 Generalizing to Optimal-Stopping Problems
9.10 Pricing/Hedging in an Incomplete Market Cast as an MDP
9.11 Key Takeaways from This Chapter
Chapter 10 ◾ Order-Book Trading Algorithms
10.1 Basics of Order Book and Price Impact
10.2 Optimal Execution of a Market Order
10.2.1 Simple Linear Price Impact Model with No Risk-Aversion
10.2.2 Paper by Bertsimas and Lo on Optimal Order Execution
10.2.3 Incorporating Risk-Aversion and Real-World Considerations
10.3 Optimal Market-Making
10.3.1 Avellaneda-Stoikov Continuous-Time Formulation
10.3.2 Solving the Avellaneda-Stoikov Formulation
10.3.3 Analytical Approximation to the Solution to the Avellaneda-Stoikov Formulation
10.3.4 Real-World Market-Making
10.4 Key Takeaways from This Chapter
Module III: Reinforcement Learning Algorithms
Chapter 11 ◾ Monte-Carlo and Temporal-Difference for Prediction
11.1 Overview of the Reinforcement Learning Approach
11.2 RL for Prediction
11.3 Monte-Carlo (MC) Prediction
11.4 Temporal-Difference (TD) Prediction
11.5 TD versus MC
11.5.1 TD Learning Akin to Human Learning
11.5.2 Bias, Variance and Convergence
11.5.3 Fixed-Data Experience Replay on TD versus MC
11.5.4 Bootstrapping and Experiencing
11.6 TD(λ) Prediction
11.6.1 n-Step Bootstrapping Prediction Algorithm
11.6.2 λ-Return Prediction Algorithm
11.6.3 Eligibility Traces
11.6.4 Implementation of the TD(λ) Prediction Algorithm
11.7 Key Takeaways from This Chapter
Chapter 12 ◾ Monte-Carlo and Temporal-Difference for Control
12.1 Refresher on Generalized Policy Iteration (GPI)
12.2 GPI with Evaluation as Monte-Carlo
12.3 GLIE Monte-Carlo Control
12.4 SARSA
12.5 SARSA(λ)
12.6 Off-Policy Control
12.6.1 Q-Learning
12.6.2 Windy Grid
12.6.3 Importance Sampling
12.7 Conceptual Linkage between DP and TD Algorithms
12.8 Convergence of RL Algorithms
12.9 Key Takeaways from This Chapter
Chapter 13 ◾ Batch RL, Experience-Replay, DQN, LSPI, Gradient TD
13.1 Batch RL and Experience-Replay
13.2 A Generic Implementation of Experience-Replay
13.3 Least-Squares RL Prediction
13.3.1 Least-Squares Monte-Carlo (LSMC)
13.3.2 Least-Squares Temporal-Difference (LSTD)
13.3.3 LSTD(λ)
13.3.4 Convergence of Least-Squares Prediction
13.4 Q-Learning with Experience-Replay
13.4.1 Deep Q-Networks (DQN) Algorithm
13.5 Least-Squares Policy Iteration (LSPI)
13.5.1 Saving Your Village from a Vampire
13.5.2 Least-Squares Control Convergence
13.6 RL for Optimal Exercise of American Options
13.6.1 LSPI for American Options Pricing
13.6.2 Deep Q-Learning for American Options Pricing
13.7 Value Function Geometry
13.7.1 Notation and Definitions
13.7.2 Bellman Policy Operator and Projection Operator
13.7.3 Vectors of Interest in the Φ Subspace
13.8 Gradient Temporal-Difference (Gradient TD)
13.9 Key Takeaways from This Chapter
Chapter 14 ◾ Policy Gradient Algorithms
14.1 Advantages and Disadvantages of Policy Gradient Algorithms
14.2 Policy Gradient Theorem
14.2.1 Notation and Definitions
14.2.2 Statement of the Policy Gradient Theorem
14.2.3 Proof of the Policy Gradient Theorem
14.3 Score Function for Canonical Policy Functions
14.3.1 Canonical π(s,a;θ) for Finite Action Spaces
14.3.2 Canonical π(s,a;θ) for Single-Dimensional Continuous Action Spaces
14.4 REINFORCE Algorithm (Monte-Carlo Policy Gradient)
14.5 Optimal Asset Allocation (Revisited)
14.6 Actor-Critic and Variance Reduction
14.7 Overcoming Bias with Compatible Function Approximation
14.8 Policy Gradient Methods in Practice
14.8.1 Natural Policy Gradient
14.8.2 Deterministic Policy Gradient
14.9 Evolutionary Strategies
14.10 Key Takeaways from This Chapter
Module IV: Finishing Touches
Chapter 15 ◾ Multi-Armed Bandits: Exploration versus Exploitation
15.1 Introduction to the Multi-Armed Bandit Problem
15.1.1 Some Examples of Explore-Exploit Dilemma
15.1.2 Problem Definition
15.1.3 Regret
15.1.4 Counts and Gaps
15.2 Simple Algorithms
15.2.1 Greedy and ϵ-Greedy
15.2.2 Optimistic Initialization
15.2.3 Decaying ϵt-Greedy Algorithm
15.3 Lower Bound
15.4 Upper Confidence Bound Algorithms
15.4.1 Hoeffding's Inequality
15.4.2 UCB1 Algorithm
15.4.3 Bayesian UCB
15.5 Probability Matching
15.5.1 Thompson Sampling
15.6 Gradient Bandits
15.7 Horse Race
15.8 Information State Space MDP
15.9 Extending to Contextual Bandits and RL Control
15.10 Key Takeaways from This Chapter
Chapter 16 ◾ Blending Learning and Planning
16.1 Planning versus Learning
16.1.1 Planning the Solution of Prediction/Control
16.1.2 Learning the Solution of Prediction/Control
16.1.3 Advantages and Disadvantages of Planning versus Learning
16.1.4 Blending Planning and Learning
16.2 Decision-Time Planning
16.3 Monte-Carlo Tree-Search (MCTS)
16.4 Adaptive Multi-Stage Sampling
16.5 Summary of Key Learnings from This Chapter
Chapter 17 ◾ Summary and Real-World Considerations
17.1 Summary of Key Learnings from This Book
17.2 RL in the Real-World
Appendix A ◾ Moment Generating Function and Its Applications
A.1 The Moment Generating Function (MGF)
A.2 MGF for Linear Functions of Random Variables
A.3 MGF for the Normal Distribution
A.4 Minimizing the MGF
A.4.1 Minimizing the MGF When x Follows a Normal Distribution
A.4.2 Minimizing the MGF When x Follows a Symmetric Binary Distribution
Appendix B ◾ Portfolio Theory
B.1 Setting and Notation
B.2 Portfolio Returns
B.3 Derivation of Efficient Frontier Curve
B.4 Global Minimum Variance Portfolio (GMVP)
B.5 Orthogonal Efficient Portfolios
B.6 Two-Fund Theorem
B.7 An Example of the Efficient Frontier for 16 Assets
B.8 CAPM: Linearity of Covariance Vector w.r.t. Mean Returns
B.9 Useful Corollaries of CAPM
B.10 Cross-Sectional Variance
B.11 Efficient Set with a Risk-Free Asset
Appendix C ◾ Introduction to and Overview of Stochastic Calculus Basics
C.1 Simple Random Walk
C.2 Brownian Motion as Scaled Random Walk
C.3 Continuous-Time Stochastic Processes
C.4 Properties of Brownian Motion Sample Traces
C.5 Ito Integral
C.6 Ito's Lemma
C.7 A Lognormal Process
C.8 A Mean-Reverting Process
Appendix D ◾ The Hamilton-Jacobi-Bellman (HJB) Equation
D.1 HJB as a Continuous-Time Version of Bellman Optimality Equation
D.2 HJB with State Transitions as an Ito Process
Appendix E ◾ Black-Scholes Equation and Its Solution for Call/Put Options
E.1 Assumptions
E.2 Derivation of the Black-Scholes Equation
E.3 Solution of the Black-Scholes Equation for Call/Put Options
Appendix F ◾ Function Approximations as Affine Spaces
F.1 Vector Space
F.2 Function Space
F.3 Linear Map of Vector Spaces
F.4 Affine Space
F.5 Affine Map
F.6 Function Approximations
F.6.1 D[ℝ] as an Affine Space P
F.6.2 Delegator Space R
F.7 Stochastic Gradient Descent
F.8 SGD Update for Linear Function Approximations
Appendix G ◾ Conjugate Priors for Gaussian and Bernoulli Distributions
G.1 Conjugate Prior for Gaussian Distribution
G.2 Conjugate Prior for Bernoulli Distribution
Bibliography
Index