Stochastic control / Control theory / Partially observable Markov decision process / Markov decision process / Reinforcement learning / Dialogue / FO / Usability / Statistics / Dynamic programming / Markov processes
Date: 2005-06-16 09:47:36

AAAI Proceedings Template


Source URL: mi.eng.cam.ac.uk


File Size: 235.46 KB


Similar Documents

Assisting Persons with Dementia during Handwashing Using a Partially Observable Markov Decision Process. Jesse Hoey, Axel von Bertoldi, Pascal Poupart, and Alex Mihailidis.

DocID: 1uKbd

EE365: Markov Decision Problems (Markov decision process, Markov decision problem, examples).

DocID: 1tQAG

Probability theory / Probability / Statistics / Robot / Algorithm / Expected value / Variance / Ethics / Markov decision process / Swarm behaviour

Mutual State-Based Capabilities for Role Assignment in Heterogeneous Teams. Somchaya Liemhetcharat and Manuela Veloso.

DocID: 1rqnK

Artificial intelligence / Robotics / Dynamic programming / Markov processes / Stochastic control / Machine learning / Probability theory / Probability / Partially observable Markov decision process / Humanoid robot / Reinforcement learning / Human-robot interaction

Online Development of Assistive Robot Behaviors for Collaborative Manipulation and Human-Robot Teamwork. Bradley Hayes and Brian Scassellati, Yale University Computer Science Department, New Haven, CT 06511.

DocID: 1rp22

Artificial intelligence / Logic programming / Academia / Decision theory / Game theory / Non-cooperative games / Situation calculus / Nash equilibrium / Zero-sum game / Markov decision process / Strategy / Mathematical optimization

INFSYS Research Report

DocID: 1riTl