Bültmann & Gerriets
Controlled Diffusion Processes
by N. V. Krylov
Translation: A. B. Aries
Publisher: Springer New York
Series: Stochastic Modelling and Applied Probability, No. 14
Hardcover
ISBN: 978-0-387-90461-0
Edition: 1980
Published on 12 November 1980
Language: English
Dimensions: 241 mm (H) x 160 mm (W) x 23 mm (D)
Weight: 653 g
Length: 324 pages

Price: €160.49
No shipping charges (domestic)


This title is printed on demand. It will therefore arrive at our shop around 16 November.

Orders within the city are usually delivered the same day.
Shipping elsewhere via Post/DHL usually takes 1-2 days.

Blurb

Stochastic control theory is a relatively young branch of mathematics. The beginning of its intensive development falls in the late 1950s and early 1960s. During that period an extensive literature appeared on optimal stochastic control using the quadratic performance criterion (see references in Wonham [76]). At the same time, Girsanov [25] and Howard [26] made the first steps in constructing a general theory, based on Bellman's technique of dynamic programming, developed by him somewhat earlier [4]. Two types of engineering problems engendered two different parts of stochastic control theory. Problems of the first type are associated with multistep decision making in discrete time, and are treated in the theory of discrete stochastic dynamic programming. For more on this theory, we note in addition to the work of Howard and Bellman, mentioned above, the books by Derman [8], Mine and Osaki [55], and Dynkin and Yushkevich [12]. Another class of engineering problems which encouraged the development of the theory of stochastic control involves time-continuous control of a dynamic system in the presence of random noise. The case where the system is described by a differential equation and the noise is modeled as a time-continuous random process is the core of the optimal control theory of diffusion processes. This book deals with this latter theory.
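For orientation, the subject of the book can be sketched in a few formulas. The following is a standard formulation of a controlled diffusion and its Bellman equation; the notation is illustrative and not quoted from the text. The controlled process solves the stochastic equation
\[
dx_t = \sigma(\alpha_t, x_t)\,dw_t + b(\alpha_t, x_t)\,dt, \qquad x_0 = x,
\]
where \(w_t\) is a Wiener process and \(\alpha_t\) is the control supplied by a strategy. The controller maximizes a discounted payoff,
\[
v(x) = \sup_{\alpha}\, \mathbb{E} \int_0^{\infty} e^{-\lambda t} f(\alpha_t, x_t)\,dt,
\]
and, under suitable conditions, the value function \(v\) satisfies the Bellman equation
\[
\sup_{\alpha \in A} \Bigl[ L^{\alpha} v(x) + f(\alpha, x) - \lambda v(x) \Bigr] = 0,
\qquad
L^{\alpha} = \tfrac{1}{2}\, a^{ij}(\alpha, x)\, \frac{\partial^2}{\partial x^i \partial x^j} + b^{i}(\alpha, x)\, \frac{\partial}{\partial x^i},
\]
with \(a = \sigma \sigma^{*}\) and summation over repeated indices.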



Table of Contents

1 Introduction to the Theory of Controlled Diffusion Processes
   1. The Statement of Problems - Bellman's Principle - Bellman's Equation
   2. Examples of the Bellman Equations - The Normed Bellman Equation
   3. Application of Optimal Control Theory - Techniques for Obtaining Some Estimates
   4. One-Dimensional Controlled Processes
   5. Optimal Stopping of a One-Dimensional Controlled Process
   Notes
2 Auxiliary Propositions
   1. Notation and Definitions
   2. Estimates of the Distribution of a Stochastic Integral in a Bounded Region
   3. Estimates of the Distribution of a Stochastic Integral in the Whole Space
   4. Limit Behavior of Some Functions
   5. Solutions of Stochastic Integral Equations and Estimates of the Moments
   6. Existence of a Solution of a Stochastic Equation with Measurable Coefficients
   7. Some Properties of a Random Process Depending on a Parameter
   8. The Dependence of Solutions of a Stochastic Equation on a Parameter
   9. The Markov Property of Solutions of Stochastic Equations
   10. Itô's Formula with Generalized Derivatives
   Notes
3 General Properties of a Payoff Function
   1. Basic Results
   2. Some Preliminary Considerations
   3. The Proof of Theorems 1.5-1.7
   4. The Proof of Theorems 1.8-1.11 for the Optimal Stopping Problem
   Notes
4 The Bellman Equation
   1. Estimation of First Derivatives of Payoff Functions
   2. Estimation from Below of Second Derivatives of a Payoff Function
   3. Estimation from Above of Second Derivatives of a Payoff Function
   4. Estimation of a Derivative of a Payoff Function with Respect to t
   5. Passage to the Limit in the Bellman Equation
   6. The Approximation of Degenerate Controlled Processes by Nondegenerate Ones
   7. The Bellman Equation
   Notes
5 The Construction of ε-Optimal Strategies
   1. ε-Optimal Markov Strategies and the Bellman Equation
   2. ε-Optimal Markov Strategies. The Bellman Equation in the Presence of Degeneracy
   3. The Payoff Function and Solution of the Bellman Equation: The Uniqueness of the Solution of the Bellman Equation
   Notes
6 Controlled Processes with Unbounded Coefficients: The Normed Bellman Equation
   1. Generalization of the Results Obtained in Section 3.1
   2. General Methods for Estimating Derivatives of Payoff Functions
   3. The Normed Bellman Equation
   4. The Optimal Stopping of a Controlled Process on an Infinite Interval of Time
   5. Control on an Infinite Interval of Time
   Notes
Appendices
   1. Some Properties of Stochastic Integrals
   2. Some Properties of Submartingales

