Quoting from Prof. Shengbo Li's (Tsinghua University) reinforcement learning book *Reinforcement Learning for Decision-Making and Control*: In a narrow sense, RL is a goal-oriented sequential decision algorithm that learns from trial-and-error interaction with the environment. Its connection with optimal control is now well understood: both seek to optimize (either maximize or minimize) a certain performance index while subject to some representation of the environment dynamics. The difference is that optimal control often requires an accurate model, with assumptions about its formalism and determinism, whereas the trial-and-error learner must collect experience from an unknown environment. Despite a few successes, RL methods still face a variety of challenges when deployed in practical problems, for example: the exploration-exploitation dilemma, uncertainty and partial observability, temporally delayed reward, infeasibility under safety constraints, entangled stability and convergence, non-stationary environments, and lack of generality.
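The trial-and-error learning and exploration-exploitation dilemma mentioned above can be illustrated with a minimal sketch: an epsilon-greedy agent on a multi-armed bandit (the arm means, step count, and epsilon below are illustrative choices, not from the book). The agent has no model of the environment and estimates each arm's value purely from sampled rewards:

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Trial-and-error learning on a multi-armed bandit.

    With probability epsilon the agent explores (random arm);
    otherwise it exploits the arm with the highest estimated value.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    estimates = [0.0] * n_arms  # running mean reward per arm
    counts = [0] * n_arms       # number of pulls per arm

    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                       # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = true_means[arm] + rng.gauss(0, 1)  # noisy reward signal
        counts[arm] += 1
        # incremental update of the running mean
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return estimates, counts

estimates, counts = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

Even in this tiny setting the dilemma is visible: a smaller epsilon concentrates pulls on the current best arm faster but risks locking onto a suboptimal one; a larger epsilon keeps estimating all arms at the cost of reward.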



Reference: Zhihu discussion "What are the current bottlenecks in the field of reinforcement learning?"
