Handbook of learning and approximate dynamic programming / (Record no. 59568)

000 -LEADER
fixed length control field 06528nam a2201489 i 4500
001 - CONTROL NUMBER
control field 5273582
005 - DATE AND TIME OF LATEST TRANSACTION
control field 20200421114116.0
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION
fixed length control field 151221s2004 njua ob 001 eng d
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
ISBN 9780470544785
-- electronic
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
-- print
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
-- electronic
082 04 - DEWEY DECIMAL CLASSIFICATION NUMBER
Classification number 519.7/03
245 00 - TITLE STATEMENT
Title Handbook of learning and approximate dynamic programming /
300 ## - PHYSICAL DESCRIPTION
Number of Pages 1 PDF (xxi, 644 pages) :
490 1# - SERIES STATEMENT
Series statement IEEE press series on computational intelligence ;
505 0# - FORMATTED CONTENTS NOTE
Formatted contents note Foreword. -- 1. ADP: goals, opportunities and principles. -- Part I: Overview. -- 2. Reinforcement learning and its relationship to supervised learning. -- 3. Model-based adaptive critic designs. -- 4. Guidance in the use of adaptive critics for control. -- 5. Direct neural dynamic programming. -- 6. The linear programming approach to approximate dynamic programming. -- 7. Reinforcement learning in large, high-dimensional state spaces. -- 8. Hierarchical decision making. -- Part II: Technical advances. -- 9. Improved temporal difference methods with linear function approximation. -- 10. Approximate dynamic programming for high-dimensional resource allocation problems. -- 11. Hierarchical approaches to concurrency, multiagency, and partial observability. -- 12. Learning and optimization - from a system theoretic perspective. -- 13. Robust reinforcement learning using integral-quadratic constraints. -- 14. Supervised actor-critic reinforcement learning. -- 15. BPTT and DAC - a common framework for comparison. -- Part III: Applications. -- 16. Near-optimal control via reinforcement learning. -- 17. Multiobjective control problems by reinforcement learning. -- 18. Adaptive critic based neural network for control-constrained agile missile. -- 19. Applications of approximate dynamic programming in power systems control. -- 20. Robust reinforcement learning for heating, ventilation, and air conditioning control of buildings. -- 21. Helicopter flight control using direct neural dynamic programming. -- 22. Toward dynamic stochastic optimal power flow. -- 23. Control, optimization, security, and self-healing of benchmark power systems.
520 ## - SUMMARY, ETC.
Summary, etc. A complete resource on Approximate Dynamic Programming (ADP), including on-line simulation code. Provides a tutorial that readers can use to start implementing the learning algorithms provided in the book. Includes ideas, directions, and recent results on current research issues, and addresses applications where ADP has been successfully implemented. The contributors are leading researchers in the field.
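Illustrative note (not part of the catalogued record, and not the on-line simulation code mentioned in the summary above): the contents note lists chapters on temporal-difference methods with linear function approximation, adaptive critics, and related learning algorithms. The short Python sketch below shows a generic TD(0) update with linear function approximation on an assumed five-state random walk; the environment, feature map, step size, and episode count are illustrative assumptions only.

# A minimal, generic sketch of TD(0) with linear function approximation.
# NOT code from the handbook; all parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 5          # non-terminal states of a simple random walk
GAMMA = 1.0           # undiscounted episodic task
ALPHA = 0.05          # step size
EPISODES = 2000

def features(state: int) -> np.ndarray:
    """One-hot (tabular) features; any linear feature map could be used."""
    phi = np.zeros(N_STATES)
    phi[state] = 1.0
    return phi

w = np.zeros(N_STATES)    # weights of the linear value function V(s) = w . phi(s)

for _ in range(EPISODES):
    state = N_STATES // 2                        # start in the middle state
    while True:
        next_state = state + rng.choice([-1, 1])  # unbiased random walk
        if next_state < 0:                        # left terminal: reward 0
            reward, v_next, done = 0.0, 0.0, True
        elif next_state >= N_STATES:              # right terminal: reward 1
            reward, v_next, done = 1.0, 0.0, True
        else:
            reward, v_next, done = 0.0, w @ features(next_state), False
        # TD(0) update: w <- w + alpha * (r + gamma*V(s') - V(s)) * phi(s)
        phi = features(state)
        td_error = reward + GAMMA * v_next - w @ phi
        w += ALPHA * td_error * phi
        if done:
            break
        state = next_state

print("Estimated state values:", np.round(w, 3))  # true values: 1/6 .. 5/6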
700 1# - ADDED ENTRY--PERSONAL NAME
Personal name Si, Jennie.
856 42 - ELECTRONIC LOCATION AND ACCESS
Uniform Resource Identifier http://ieeexplore.ieee.org/xpl/bkabstractplus.jsp?bkn=5273582
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Koha item type eBooks
264 #1 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE
-- Hoboken, New Jersey :
-- IEEE Press,
-- c2004.
264 #2 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE
-- [Piscataway, New Jersey] :
-- IEEE Xplore,
-- [2004]
336 ## - CONTENT TYPE
-- text
-- rdacontent
337 ## - MEDIA TYPE
-- electronic
-- isbdmedia
338 ## - CARRIER TYPE
-- online resource
-- rdacarrier
588 ## - SOURCE OF DESCRIPTION NOTE
-- Description based on PDF viewed 12/21/2015.
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Dynamic programming.
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Automatic programming (Computer science)
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Machine learning.
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Control theory.
650 #0 - SUBJECT ADDED ENTRY--SUBJECT 1
-- Systems engineering.
695 ## -
-- Adaptation model
695 ## -
-- Aerospace control
695 ## -
-- Aerospace electronics
695 ## -
-- Algorithm design and analysis
695 ## -
-- Analytical models
695 ## -
-- Approximation algorithms
695 ## -
-- Approximation methods
695 ## -
-- Argon
695 ## -
-- Artificial neural networks
695 ## -
-- Atmospheric modeling
695 ## -
-- Automatic test pattern generation
695 ## -
-- Benchmark testing
695 ## -
-- Books
695 ## -
-- Cities and towns
695 ## -
-- Coils
695 ## -
-- Communities
695 ## -
-- Concurrent computing
695 ## -
-- Conferences
695 ## -
-- Control systems
695 ## -
-- Convergence
695 ## -
-- Data structures
695 ## -
-- Decision making
695 ## -
-- Driver circuits
695 ## -
-- Dynamic programming
695 ## -
-- Dynamic scheduling
695 ## -
-- Eigenvalues and eigenfunctions
695 ## -
-- Equations
695 ## -
-- Estimation
695 ## -
-- Focusing
695 ## -
-- Function approximation
695 ## -
-- Fuzzy control
695 ## -
-- Generators
695 ## -
-- Helicopters
695 ## -
-- Heuristic algorithms
695 ## -
-- Hidden Markov models
695 ## -
-- History
695 ## -
-- Humans
695 ## -
-- Indexes
695 ## -
-- Learning
695 ## -
-- Learning systems
695 ## -
-- Linear programming
695 ## -
-- Load flow
695 ## -
-- Loss measurement
695 ## -
-- Machine learning
695 ## -
-- Machine learning algorithms
695 ## -
-- Markov processes
695 ## -
-- Mathematical model
695 ## -
-- Measurement
695 ## -
-- Missiles
695 ## -
-- Optimal control
695 ## -
-- Optimization
695 ## -
-- Power system dynamics
695 ## -
-- Power system stability
695 ## -
-- Process control
695 ## -
-- Programming
695 ## -
-- Proposals
695 ## -
-- Propulsion
695 ## -
-- Recurrent neural networks
695 ## -
-- Resource management
695 ## -
-- Roads
695 ## -
-- Robots
695 ## -
-- Robust control
695 ## -
-- Robustness
695 ## -
-- Rotors
695 ## -
-- Sections
695 ## -
-- Security
695 ## -
-- Sensitivity
695 ## -
-- Stability analysis
695 ## -
-- Stability criteria
695 ## -
-- State estimation
695 ## -
-- Steady-state
695 ## -
-- Stochastic systems
695 ## -
-- Supervised learning
695 ## -
-- Training
695 ## -
-- Trajectory
695 ## -
-- Uncertainty
695 ## -
-- Vectors
695 ## -
-- Water heating

No items available.