
Partially observed Markov decision processes : from filtering to controlled sensing / Vikram Krishnamurthy, University of British Columbia, Vancouver, Canada.

By: Krishnamurthy, V. (Vikram) [author]
Material type: Book
Publisher: Cambridge : Cambridge University Press, 2016
Description: 1 online resource (xiii, 476 pages) : digital, PDF file(s)
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9781316471104 (ebook)
Subject(s): Markov processes -- Textbooks | Stochastic processes -- Textbooks
Additional physical formats: Print version
DDC classification: 519.2/33
Summary: Covering formulation, algorithms, and structural results, and linking theory to real-world applications in controlled sensing (including social learning, adaptive radars and sequential detection), this book focuses on the conceptual foundations of partially observed Markov decision processes (POMDPs). It emphasizes structural results in stochastic dynamic programming, enabling graduate students and researchers in engineering, operations research, and economics to understand the underlying unifying themes without getting weighed down by mathematical technicalities. Bringing together research from across the literature, the book provides an introduction to nonlinear filtering followed by a systematic development of stochastic dynamic programming, lattice programming and reinforcement learning for POMDPs. Questions addressed in the book include: When does a POMDP have a threshold optimal policy? When are myopic policies optimal? How do local and global decision makers interact in adaptive decision making in multi-agent social learning where there is herding and data incest? And how can sophisticated radars and sensors adapt their sensing in real time?

Title from publisher's bibliographic system (viewed on 05 Apr 2016).
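Note for readers browsing this record: the "nonlinear filtering" mentioned in the summary refers to the hidden Markov model (Bayesian) filter that propagates the POMDP belief state. The sketch below is a minimal illustration of one belief-update step; the two-state transition matrix P and observation likelihoods B are hypothetical numbers chosen for demonstration, not values from the book.

import numpy as np

# Illustrative HMM filter (the POMDP belief-state update).
# P and B are made-up two-state examples, not taken from the book.
P = np.array([[0.9, 0.1],     # P[i, j] = Pr(x_{k+1} = j | x_k = i)
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],     # B[i, y] = Pr(y_k = y | x_k = i)
              [0.4, 0.6]])

def hmm_filter(belief, y):
    # Predict with the transition matrix, then correct with the observation likelihood.
    unnormalized = B[:, y] * (P.T @ belief)
    return unnormalized / unnormalized.sum()

belief = np.array([0.5, 0.5])        # uniform prior over the two states
for y in (0, 1, 1):                  # a hypothetical observation sequence
    belief = hmm_filter(belief, y)
print(belief)                        # posterior Pr(x_k | y_1, ..., y_k)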

