The Conference for Machine Learning Innovation

Human/AI Interaction Loop Training as a New Approach for Interactive Learning with Reinforcement-Learning Agents

Keynote
Join the ML Revolution!
Register until January 23:
✓ Raspberry Pi or C64 Mini for free
✓ Save up to $310
✓ Group Discount
Register Now
Info
Wednesday, December 11, 2019
13:30 - 14:00
Room: Saal A+B

Human/AI interaction loop training as a new approach for interactive learning with reinforcement-learning agents: Reinforcement Learning (RL) provides effective results in various decision-making tasks of Machine Learning (ML), with an agent learning from a stand-alone reward function. However, it faces unique challenges when the environment's state and action spaces are large and when rewards are hard to specify. This complexity, stemming from the high dimensionality and continuity of the environments considered here, calls for a large number of learning trials to explore the environment through RL. Imitation Learning (IL) offers a promising solution to these challenges by using a teacher's feedback. In IL, the learning process can take advantage of human-sourced assistance and/or control over the agent and environment. In this study, we consider a human teacher and an agent learner. The teacher takes part in the agent's training toward dealing with the environment, tackling a specific objective, and achieving a predefined goal. Within that paradigm, however, existing IL approaches have the drawback of requiring extensive demonstration data in long-horizon problems. In this work, we propose a novel approach that combines IL with different RL methods, namely State-Action-Reward-State-Action (SARSA) and Proximal Policy Optimization (PPO), to take advantage of both. We address how to effectively leverage the teacher's feedback, whether direct and binary or indirect and detailed, so that the agent learner can learn sequential decision-making policies. Results on various OpenAI Gym environments show that the method can incorporate different RL-IL combinations at different levels, leading to significant reductions in both teacher effort and exploration cost.
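The abstract does not disclose implementation details, but the core idea of blending a teacher's binary feedback into an RL update can be sketched in a few lines. The following toy example adds a weighted teacher signal to the reward inside a tabular SARSA loop on a small chain environment. The chain environment, the `teacher` function, and the blending weight `beta` are all illustrative assumptions, not the speakers' actual method.

```python
import random

def run_sarsa(teacher=None, episodes=300, alpha=0.2, gamma=0.9,
              epsilon=0.1, beta=0.5, n=5, seed=0):
    """Tabular SARSA on a chain of n states; the goal is state n-1.

    Actions: 0 = left, 1 = right. The environment pays reward 1 only on
    reaching the goal. If a teacher callable is given, its binary feedback
    (+1 / -1) is blended into the environment reward with weight beta,
    a minimal stand-in for the IL+RL combination described in the talk.
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n)]  # Q[state][action]

    def step(s, a):
        # Deterministic chain dynamics, clipped at the ends.
        s2 = min(s + 1, n - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n - 1 else 0.0
        return s2, r, s2 == n - 1

    def choose(s):
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            return rng.randrange(2)
        return 0 if Q[s][0] > Q[s][1] else 1

    for _ in range(episodes):
        s, a, done = 0, choose(0), False
        while not done:
            s2, r, done = step(s, a)
            if teacher is not None:
                # Blend teacher feedback into the reward signal.
                r = (1 - beta) * r + beta * teacher(s, a)
            a2 = choose(s2)
            # Standard SARSA update; bootstrap term vanishes at the goal.
            Q[s][a] += alpha * (r + gamma * Q[s2][a2] * (not done) - Q[s][a])
            s, a = s2, a2
    return Q

# Hypothetical teacher giving direct binary feedback:
# +1 for moving toward the goal, -1 otherwise.
def teacher(state, action):
    return 1.0 if action == 1 else -1.0
```

With the teacher enabled, every non-goal transition carries an informative reward instead of zero, so the agent needs far fewer exploratory episodes, which is the kind of reduction in teacher effort and exploration cost the abstract reports.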

This session belongs to the Munich and Berlin programs. See also the Singapore program.
