Montezuma's Revenge
28 papers with code • 1 benchmark • 1 dataset
Montezuma's Revenge is an Atari 2600 benchmark game known to be difficult for reinforcement learning algorithms because of its sparse rewards. Solutions typically employ algorithms that incentivise environment exploration in different ways.
For the state-of-the-art tables, please consult the parent Atari Games task.
(Image credit: Q-map)
Most implemented papers
Rainbow: Combining Improvements in Deep Reinforcement Learning
The deep reinforcement learning community has made several independent improvements to the DQN algorithm.
Exploration by Random Network Distillation
In particular, we establish state-of-the-art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods.
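The core idea of Random Network Distillation is to use the prediction error of a trained network against a fixed, randomly initialised target network as an intrinsic reward: rarely seen observations produce large errors, so the agent is rewarded for visiting them. A minimal sketch with linear "networks" in NumPy (dimensions and learning rate are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only.
OBS_DIM, FEAT_DIM = 8, 4

# Fixed, randomly initialised target network (never trained).
W_target = rng.normal(size=(OBS_DIM, FEAT_DIM))

# Predictor network, trained to imitate the target's output.
W_pred = np.zeros((OBS_DIM, FEAT_DIM))

def intrinsic_reward(obs):
    """Prediction error of the predictor vs. the fixed target.

    Large for rarely seen observations, shrinking as the predictor
    is trained on them - the core of Random Network Distillation.
    """
    return float(np.mean((obs @ W_target - obs @ W_pred) ** 2))

def train_predictor(obs, lr=0.05):
    """One gradient step on the mean squared prediction error."""
    global W_pred
    error = obs @ W_pred - obs @ W_target
    W_pred -= lr * np.outer(obs, error) / FEAT_DIM

# Train the predictor on one "familiar" observation only.
familiar = rng.normal(size=OBS_DIM)
novel = rng.normal(size=OBS_DIM)
for _ in range(500):
    train_predictor(familiar)
# The familiar state should now earn a much smaller bonus than the novel one.
```

The real method uses deep networks over game frames, but the shape of the computation is the same: the exploration bonus is simply the distillation error.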
Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation
Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms.
Go-Explore: a New Approach for Hard-Exploration Problems
Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge.
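Go-Explore's loop is: map states to discretised "cells", keep an archive of the best trajectory found to each cell, then repeatedly pick an archived cell, first return to it, and only then explore from there. A toy sketch on a hypothetical deterministic chain environment (the environment, cell mapping, and selection rule are all simplifying assumptions):

```python
import random

random.seed(0)

# Toy deterministic chain environment standing in for the real game:
# states 0..N; an action of +1 or -1 moves along the chain.
N = 20

def step(state, action):
    return max(0, min(N, state + action))

# Archive: cell -> shortest action sequence found that reaches it.
# Here every state is its own "cell"; Go-Explore uses a coarser mapping.
archive = {0: []}

for _ in range(200):
    # 1) Select a cell from the archive (uniformly, for simplicity).
    cell, trajectory = random.choice(list(archive.items()))
    # 2) "First return": replay the stored trajectory (deterministic env).
    state = 0
    for a in trajectory:
        state = step(state, a)
    # 3) "Then explore": take a few random actions from there.
    new_traj = list(trajectory)
    for _ in range(5):
        a = random.choice([-1, 1])
        state = step(state, a)
        new_traj.append(a)
        # 4) Archive any new cell, or a shorter route to a known one.
        if state not in archive or len(new_traj) < len(archive[state]):
            archive[state] = list(new_traj)
```

Separating "returning" from "exploring" is what lets the method avoid re-solving already-explored regions by random chance, which is why it excels on hard-exploration games like Montezuma's Revenge.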
Scaling All-Goals Updates in Reinforcement Learning Using Convolutional Neural Networks
Being able to reach any desired location in the environment can be a valuable asset for an agent.
Exploring Unknown States with Action Balance
In this paper, we focus on improving the effectiveness of finding unknown states and propose action balance exploration, which balances how often each action is selected at a given state and can be treated as an extension of the upper confidence bound (UCB) to deep reinforcement learning.
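The classical UCB rule that this work extends balances action selection by adding a bonus that shrinks with how often an action has been tried. A minimal tabular sketch (the `c` constant and toy values are illustrative, not from the paper):

```python
import math

def ucb_action(q_values, action_counts, t, c=2.0):
    """Pick the action maximising Q plus a UCB exploration bonus.

    Untried actions get an infinite bonus, so every action at a state
    is selected at least once before exploitation takes over - the
    "action balance" that UCB provides.
    """
    def score(a):
        n = action_counts[a]
        if n == 0:
            return math.inf
        return q_values[a] + c * math.sqrt(math.log(t) / n)
    return max(range(len(q_values)), key=score)

# With counts [10, 3, 0] the untried action 2 is chosen first;
# once all actions have been tried, the less-visited action 1 can
# outrank action 0 despite its lower Q-value.
ucb_action([1.0, 0.5, 0.0], [10, 3, 0], t=13)
ucb_action([1.0, 0.5, 0.0], [10, 3, 5], t=18)
```

The paper's contribution is making a bonus of this flavour workable with deep networks, where exact state-action counts are unavailable.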
First return, then explore
The promise of reinforcement learning is to solve complex sequential decision problems autonomously by specifying a high-level reward function only.
Reinforcement Learning with Latent Flow
Temporal information is essential to learning effective policies with Reinforcement Learning (RL).
A Study of Global and Episodic Bonuses for Exploration in Contextual MDPs
This results in an algorithm which sets a new state of the art across 16 tasks from the MiniHack suite used in prior work, and also performs robustly on Habitat and Montezuma's Revenge.
Unifying Count-Based Exploration and Intrinsic Motivation
We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across observations.
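In the tabular case that this work generalises, the count-based exploration bonus is simply inversely related to how often a state has been visited; the paper's pseudo-counts, derived from a density model, recover the same behaviour when exact counts are unavailable. A minimal tabular sketch (the `beta` scale and the `1/sqrt(N)` form are one common choice, stated here as an assumption):

```python
import math
from collections import Counter

class CountBonus:
    """Tabular count-based exploration bonus: beta / sqrt(N(s))."""

    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = Counter()

    def bonus(self, state):
        # Record the visit, then pay a bonus that decays with the count.
        self.counts[state] += 1
        return self.beta / math.sqrt(self.counts[state])

# First visit to a state earns the full bonus; the fourth visit
# earns only half of it.
cb = CountBonus(beta=1.0)
cb.bonus("room_1")  # 1st visit
cb.bonus("room_1")  # 2nd visit
```

Adding this bonus to the environment reward steers the agent toward rarely visited states, which is exactly what sparse-reward games like Montezuma's Revenge demand.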