Measuring the Progress of AI Research

This pilot project collects problems and metrics/datasets from the AI research literature, and tracks progress on them.

You can use this Notebook to see how things are progressing in specific subfields or in AI/ML as a whole; as a place to report new results you've obtained; as a place to look for problems that might benefit from having new datasets/metrics designed for them; or as a source to build on for data science projects.

At EFF, we're ultimately most interested in how this data can influence our understanding of the likely implications of AI. To begin with, we're focused on gathering it.

Original authors: Peter Eckersley and Yomna Nasser at EFF. Contact: ai-metrics@eff.org.

With contributions from: Yann Bayle, Owain Evans, Gennie Gebhart and Dustin Schwenk.

Inspired by and merging data from:

Thanks to many others for valuable conversations, suggestions and corrections, including: Dario Amodei, James Bradbury, Miles Brundage, Mark Burdett, Breandan Considine, Owen Cotton-Barratt, Marc Bellemare, Will Dabney, Eric Drexler, Otavio Good, Katja Grace, Hado van Hasselt, Anselm Levskaya, Clare Lyle, Toby Ord, Michael Page, Maithra Raghu, Anders Sandberg, Laura Schatzkin, Daisy Stanton, Gabriel Synnaeve, Stacey Svetlichnaya, Helen Toner, and Jason Weston. EFF's work on this project has been supported by the Open Philanthropy Project.

Taxonomy

The project collates data with the following structure:

problem 
    \   \
     \   metrics  -  measures 
      \
       - subproblems
            \
          metrics
             \
            measure[ment]s

Problems describe the ability to learn an important category of task.

Metrics should ideally be formulated in the form "software is able to learn to do X given training data of type Y". In some cases X is the interesting part, but sometimes Y is too.

Measurements are the score that a specific instance of a specific algorithm was able to get on a Metric.

problems are tagged with attributes: eg, vision, abstract-games, language, world-modelling, safety

Some of these are about performance relative to humans (which is of course a very arbitrary standard, but one we're familiar with)

  • agi -- most capable humans can do this, so AGIs can do this (note it's conceivable that an agent might pass the Turing test before all of these are won)
  • super -- the very best humans can do this, or human organisations can do this
  • verysuper -- neither humans nor human orgs can presently do this

problems can have "subproblems", including simpler cases and preconditions for solving the problem in general

a "metric" is one way of measuring progress on a problem, commonly associated with a test dataset. There will often be several metrics for a given problem, but in some cases we'll start out with zero metrics and will need to start proposing some...

a measure[ment] is a score on a given metric, by a particular codebase/team/project, at a particular time

The present state of the actual taxonomy is at the bottom of this notebook.
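
As a rough illustration of that structure, a problem, a metric and a measurement might be declared as below. This sketch is indicative rather than canonical: the exact constructor and method signatures live in taxonomy.py and the data/ files, and options such as a metric's scale (defined in scales.py) are omitted here. The example values mirror the 2012 AlexNet / SuperVision entry in the Imagenet table later in this notebook.

    from datetime import date
    from taxonomy import Problem

    # A problem, tagged with attributes
    image_classification = Problem("Image classification", ["vision", "agi"])

    # One metric for that problem, and one measurement of it
    # (date, score and algorithm name taken from the Imagenet table below)
    imagenet = image_classification.metric("Imagenet Image Recognition")
    imagenet.measure(date(2012, 10, 13), 0.16422, "AlexNet / SuperVision")

New results can then typically be reported by adding a similar measure(...) line to the appropriate file under data/.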

Source Code

  • Code implementing the taxonomy of Problems and subproblems, Metrics and Measurements is defined in a free-standing Python file, taxonomy.py. scales.py contains definitions of various unit systems used by Metrics.
  • Most source data is now defined in a series of separate files by topic:

    • data/vision.py for hand-entered computer vision data
    • data/language.py for hand-entered and merged language data
    • data/strategy_games.py for data on abstract strategy games
    • data/video_games.py for a combination of hand-entered and scraped Atari data (other video game data can also go here)
    • data/stem.py for data on scientific & technical problems

    • data imported from specific scrapers (and then subsequently edited), eg data/awty.py

  • For now, some of the Problems and Metrics are still defined in this Notebook, especially in areas that do not have many active results yet.
  • Scrapers for specific data sources:
    • scrapers/awty.py for importing data from Rodrigo Benenson's Are We There Yet? site
    • scrapers/es.py for processing a pasted table of data from the Evolutionary Strategies Atari paper (probably a useful model for other Atari papers).
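
These scrapers essentially pull rows out of HTML results tables; the notebook's dependency on lxml's CSSSelector (imported in a setup cell below) is presumably for this kind of work. The following is only an illustrative sketch of that sort of extraction, not the actual scrapers/awty.py code, which does considerably more (date parsing, metric matching, deduplication):

    import lxml.html
    from lxml.cssselect import CSSSelector

    def scrape_results_table(html_text):
        """Return the cell text of each <td> row in every table of an
        AWTY-style results page. Illustrative only."""
        tree = lxml.html.fromstring(html_text)
        rows = []
        for tr in CSSSelector("table tr")(tree):
            cells = [td.text_content().strip() for td in CSSSelector("td")(tr)]
            if cells:  # header rows contain only <th> cells and are skipped
                rows.append(cells)
        return rows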
In [1]:
from IPython.display import HTML
HTML('''
<script>
    if (typeof code_show == "undefined") {
        code_show=false;
    } else {
        code_show = !code_show; // FIXME hack, because we toggle on load :/
    }
    // Re-show the code cell associated with the clicked "unhide code" button.
    function toggle_one(mouse_event) {
        var button = mouse_event.target;
        var parent = button.parentNode;
        var input = parent.querySelector(".input");
        input.style.display = "block";
    }
    function code_toggle() {
        if (!code_show) {
            var inputs = $('div.input');
            for (var n = 0; n < inputs.length; n++) {
                // Only hide cells explicitly marked with "# hiddencode"
                if (inputs[n].innerHTML.match('# hidd' + 'encode')) {
                    inputs[n].style.display = "none";
                    var button = document.createElement("button");
                    button.innerHTML = "unhide code";
                    button.style.width = "100px";
                    button.style.marginLeft = "90px";
                    button.addEventListener("click", toggle_one);
                    button.classList.add("cell-specific-unhide");
                    // inputs[n].parentNode.appendChild(button);
                }
            }
        } else { 
            $('div.input').show();
            $('button.cell-specific-unhide').remove()
        } 
        code_show = !code_show;
    } 
    
    $( document ).ready(code_toggle);
    
</script>
<form action="javascript:code_toggle()">
    <input type="submit" value="Click here to show/hide source code cells."> <br><br>(you can mark a cell as code with <tt># hiddencode</tt>)
</form>
''')
Out[1]:


In [2]:
# hiddencode
from __future__ import print_function

%matplotlib inline  
import matplotlib as mpl
try:
    from lxml.cssselect import CSSSelector
except ImportError:
    # terrifying magic for Azure Notebooks
    import os
    if os.getcwd() == "/home/nbuser":
        !pip install cssselect
        from lxml.cssselect import CSSSelector
    else:
        raise

import datetime
import json
import re

from matplotlib import pyplot as plt

date = datetime.date

import taxonomy
#reload(taxonomy)
from taxonomy import Problem, Metric, problems, metrics, measurements, all_attributes, offline, render_tables
from scales import *

Problems, Metrics, and Datasets

Vision


(Imagenet example data)

The simplest vision subproblem is probably image classification, which determines what objects are present in a picture. From 2010 to 2017, ImageNet has been a closely watched contest for progress in this domain.

Image classification includes not only recognising single objects within an image, but also localising them and essentially specifying which pixels belong to which object. MSRC-21 is a metric specifically for that task:


(MSRC 21 example data)
In [3]:
from data.vision import *
imagenet.graph()
In [4]:
from data.vision import *
from data.awty import *
In [5]:
for m in sorted(image_classification.metrics, key=lambda m:m.name): 
    if m != imagenet: m.graph()

AWTY, not yet imported:

Handling 'Pascal VOC 2011 comp3' detection_datasets_results.html#50617363616c20564f43203230313120636f6d7033
Skipping 40.6 mAP Fisher and VLAD with FLAIR CVPR 2014
Handling 'Leeds Sport Poses' pose_estimation_datasets_results.html#4c656564732053706f727420506f736573
69.2 %                  Strong Appearance and Expressive Spatial Models for Human Pose Estimation  ICCV 2013
64.3 %                                    Appearance sharing for collective human pose estimation  ACCV 2012
63.3 %                                                   Poselet conditioned pictorial structures  CVPR 2013
60.8 %                                Articulated pose estimation with flexible mixtures-of-parts  CVPR 2011
 55.6%           Pictorial structures revisited: People detection and articulated pose estimation  CVPR 2009
Handling 'Pascal VOC 2007 comp3' detection_datasets_results.html#50617363616c20564f43203230303720636f6d7033
Skipping 22.7 mAP Ensemble of Exemplar-SVMs for Object Detection and Beyond ICCV 2011
Skipping 27.4 mAP Measuring the objectness of image windows PAMI 2012
Skipping 28.7 mAP Automatic discovery of meaningful object parts with latent CRFs CVPR 2010
Skipping 29.0 mAP Object Detection with Discriminatively Trained Part Based Models PAMI 2010
Skipping 29.6 mAP Latent Hierarchical Structural Learning for Object Detection CVPR 2010
Skipping 32.4 mAP Deformable Part Models with Individual Part Scaling BMVC 2013
Skipping 34.3 mAP Histograms of Sparse Codes for Object Detection CVPR 2013
Skipping 34.3 mAP Boosted local structured HOG-LBP for object localization CVPR 2011
Skipping 34.7 mAP Discriminatively Trained And-Or Tree Models for Object Detection CVPR 2013
Skipping 34.7 mAP Incorporating Structural Alternatives and Sharing into Hierarchy for Multiclass Object Recognition and Detection CVPR 2013
Skipping 34.8 mAP Color Attributes for Object Detection CVPR 2012
Skipping 35.4 mAP Object Detection with Discriminatively Trained Part Based Models PAMI 2010
Skipping 36.0 mAP Machine Learning Methods for Visual Object Detection archives-ouvertes 2011
Skipping 38.7 mAP Detection Evolution with Multi-Order Contextual Co-occurrence CVPR 2013
Skipping 40.5 mAP Segmentation Driven Object Detection with Fisher Vectors ICCV 2013
Skipping 41.7 mAP Regionlets for Generic Object Detection ICCV 2013
Skipping 43.7 mAP Beyond Bounding-Boxes: Learning Object Shape by Model-Driven Grouping ECCV 2012
Handling 'Pascal VOC 2007 comp4' detection_datasets_results.html#50617363616c20564f43203230303720636f6d7034
Skipping 59.2 mAP Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition ECCV 2014
Skipping 58.5 mAP Rich feature hierarchies for accurate object detection and semantic segmentation CVPR 2014
Skipping 29.0 mAP Multi-Component Models for Object Detection ECCV 2012
Handling 'Pascal VOC 2010 comp3' detection_datasets_results.html#50617363616c20564f43203230313020636f6d7033
Skipping 24.98 mAP Learning Collections of Part Models for Object Recognition CVPR 2013
Skipping 29.4 mAP Discriminatively Trained And-Or Tree Models for Object Detection CVPR 2013
Skipping 33.4 mAP Object Detection with Discriminatively Trained Part Based Models PAMI 2010
Skipping 34.1 mAP Segmentation as selective search for object recognition ICCV 2011
Skipping 35.1 mAP Selective Search for Object Recognition IJCV 2013
Skipping 36.0 mAP Latent Hierarchical Structural Learning for Object  Detection CVPR 2010
Skipping 36.8 mAP Object Detection by Context and Boosted HOG-LBP ECCV 2010
Skipping 38.4 mAP Segmentation Driven Object Detection with Fisher Vectors ICCV 2013
Skipping 39.7 mAP Regionlets for Generic Object Detection ICCV 2013
Skipping 40.4 mAP Fisher and VLAD with FLAIR CVPR 2014
Handling 'Pascal VOC 2010 comp4' detection_datasets_results.html#50617363616c20564f43203230313020636f6d7034
Skipping 53.7 mAP Rich feature hierarchies for accurate object detection and semantic segmentation CVPR 2014
Skipping 40.4 mAP Bottom-up Segmentation for Top-down Detection CVPR 2013
Skipping 33.1 mAP Multi-Component Models for Object Detection ECCV 2012
In [6]:
from IPython.display import HTML
HTML(image_classification.tables())
Out[6]:
CIFAR-10 Image Recognition
DateAlgorithm% correctPaper / Source
2011-07-01An Analysis of Single-Layer Networks in Unsupervised Feature Learning 79.6 An Analysis of Single-Layer Networks in Unsupervised Feature Learning
2011-07-01Hierarchical Kernel Descriptors80.0 Object Recognition with Hierarchical Kernel Descriptors
2012-06-16MCDNN88.79 Multi-Column Deep Neural Networks for Image Classification
2012-06-26Local Transformations82.2 Learning Invariant Representations with Local Transformations
2012-07-03Improving neural networks by preventing co-adaptation of feature detectors84.4 Improving neural networks by preventing co-adaptation of feature detectors
2012-12-03Learning with Recursive Perceptual Representations79.7 Learning with Recursive Perceptual Representations
2012-12-03Discriminative Learning of Sum-Product Networks83.96 Discriminative Learning of Sum-Product Networks
2012-12-03DCNN89.0 ImageNet Classification with Deep Convolutional Neural Networks
2012-12-03GP EI90.5 Practical Bayesian Optimization of Machine Learning Algorithms
2013-01-16Stochastic Pooling84.87 Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
2013-06-16Maxout Networks90.65 Maxout Networks
2013-06-16DropConnect90.68 Regularization of Neural Networks using DropConnect
2013-07-01Smooth Pooling Regions80.02 Learning Smooth Pooling Regions for Visual Recognition
2014-04-14DNN+Probabilistic Maxout90.61 Improving Deep Neural Networks with Probabilistic Maxout Units
2014-04-14NiN91.2 Network In Network
2014-06-21PCANet78.67 PCANet: A Simple Deep Learning Baseline for Image Classification?
2014-06-21Nonnegativity Constraints 82.9 Stable and Efficient Representation Learning with Nonnegativity Constraints
2014-07-01DSN91.78 Deeply-Supervised Nets
2014-08-28CKN82.18 Convolutional Kernel Networks
2014-09-22SSCNN93.72 Spatially-sparse convolutional neural networks
2014-12-08Discriminative Unsupervised Feature Learning with Convolutional Neural Networks82.0 Discriminative Unsupervised Feature Learning with Convolutional Neural Networks
2014-12-08Deep Networks with Internal Selective Attention through Feedback Connections90.78 Deep Networks with Internal Selective Attention through Feedback Connections
2015-02-13An Analysis of Unsupervised Pre-training in Light of Recent Advances86.7 An Analysis of Unsupervised Pre-training in Light of Recent Advances
2015-02-15ACN95.59 Striving for Simplicity: The All Convolutional Net
2015-02-19NiN+APL92.49 Learning Activation Functions to Improve Deep Neural Networks
2015-02-28Fractional MP96.53 Fractional Max-Pooling
2015-05-02Tuned CNN93.63 Scalable Bayesian Optimization Using Deep Neural Networks
2015-05-13APAC89.67 APAC: Augmented PAttern Classification with Neural Networks
2015-05-31FLSCNN75.86 Enhanced Image Classification With a Fast-Learning Shallow Convolutional Neural Network
2015-06-08RCNN-9692.91 Recurrent Convolutional Neural Network for Object Recognition
2015-06-12ReNet87.65 ReNet: A Recurrent Neural Network Based Alternative to Convolutional Networks
2015-07-01ELC91.19 Speeding up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves
2015-07-01MLR DNN91.88 Multi-Loss Regularized Deep Neural Network
2015-07-01cifar.torch92.45 cifar.torch
2015-07-12DCNN+GFE89.14 Deep Convolutional Neural Networks as Generic Feature Extractors
2015-08-16RReLU88.8 Empirical Evaluation of Rectified Activations in Convolution Network
2015-09-17MIM91.48 ± 0.2On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units
2015-10-05Tree+Max-Avg pooling93.95 Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree
2015-10-11SWWAE92.23 Stacked What-Where Auto-encoders
2015-11-09BNM NiN93.25 Batch-normalized Maxout Network in Network
2015-11-18CMsC93.13 Competitive Multi-scale Convolution
2015-12-07Spectral Representations for Convolutional Neural Networks91.4 Spectral Representations for Convolutional Neural Networks
2015-12-07BinaryConnect91.73 BinaryConnect: Training Deep Neural Networks with binary weights during propagations
2015-12-07VDN92.4 Training Very Deep Networks
2015-12-10DRL93.57 Deep Residual Learning for Image Recognition
2016-01-04Fitnet4-LSUV94.16 All you need is a good init
2016-01-07Exponential Linear Units93.45 Fast and Accurate Deep Network Learning by Exponential Linear Units
2016-05-15Universum Prescription93.34 Universum Prescription: Regularization using Unlabeled Data
2016-05-20ResNet-100195.38 ± 0.2Identity Mappings in Deep Residual Networks
2016-07-10ResNet+ELU94.38 Deep Residual Networks with Exponential Linear Unit
2017-02-15Neural Architecture Search96.35 Neural Architecture Search with Reinforcement Learning
2017-04-22Evolution94.6 Large-Scale Evolution of Image Classifiers
2017-04-22Evolution ensemble95.6 Large-Scale Evolution of Image Classifiers
2017-05-30Deep Complex94.4 Deep Complex Networks
2017-07-16RL+NT94.6 Reinforcement Learning for Architecture Search by Network Transformation
CIFAR-100 Image Recognition
DateAlgorithm% correctPaper / Source
2012-06-16Receptive Field Learning54.23 Beyond Spatial Pyramids: Receptive Field Learning for Pooled Image Features
2013-01-16Stochastic Pooling57.49 Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
2013-06-16Maxout Networks61.43 Maxout Networks
2013-07-01Smooth Pooling Regions56.29 Smooth Pooling Regions
2013-07-01Tree Priors63.15 Discriminative Transfer Learning with Tree-based Priors
2014-04-14DNN+Probabilistic Maxout61.86 Improving Deep Neural Networks with Probabilistic Maxout Units
2014-04-14NiN64.32 Network in Network
2014-06-21Stable and Efficient Representation Learning with Nonnegativity Constraints 60.8 Stable and Efficient Representation Learning with Nonnegativity Constraints
2014-07-01DSN65.43 Deeply-Supervised Nets
2014-09-22SSCNN75.7 Spatially-sparse convolutional neural networks
2014-12-08Deep Networks with Internal Selective Attention through Feedback Connections66.22 Deep Networks with Internal Selective Attention through Feedback Connections
2015-02-15ACN66.29 Striving for Simplicity: The All Convolutional Net
2015-02-19NiN+APL69.17 Learning Activation Functions to Improve Deep Neural Networks
2015-02-28Fractional MP73.61 Fractional Max-Pooling
2015-05-02Tuned CNN72.6 Scalable Bayesian Optimization Using Deep Neural Networks
2015-06-08RCNN-9668.25 Recurrent Convolutional Neural Network for Object Recognition
2015-07-01Deep Representation Learning with Target Coding64.77 Deep Representation Learning with Target Coding
2015-07-01HD-CNN67.38 HD-CNN: Hierarchical Deep Convolutional Neural Network for Large Scale Visual Recognition
2015-07-01MLR DNN68.53 Multi-Loss Regularized Deep Neural Network
2015-07-12DCNN+GFE67.68 Deep Convolutional Neural Networks as Generic Feature Extractors
2015-08-16RReLU59.75 Empirical Evaluation of Rectified Activations in Convolution Network
2015-09-17MIM70.8 ± 0.2On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units
2015-10-05Tree+Max-Avg pooling67.63 Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree
2015-10-11SWWAE69.12 Stacked What-Where Auto-encoders
2015-11-09BNM NiN71.14 Batch-normalized Maxout Network in Network
2015-11-18CMsC72.44 Competitive Multi-scale Convolution
2015-12-07VDN67.76 Training Very Deep Networks
2015-12-07Spectral Representations for Convolutional Neural Networks68.4 Spectral Representations for Convolutional Neural Networks
2016-01-04Fitnet4-LSUV72.34 All you need is a good init
2016-01-07Exponential Linear Units75.72 Fast and Accurate Deep Network Learning by Exponential Linear Units
2016-05-15Universum Prescription67.16 Universum Prescription: Regularization using Unlabeled Data
2016-05-20ResNet-100177.29 ± 0.22Identity Mappings in Deep Residual Networks
2016-07-10ResNet+ELU73.45 Deep Residual Networks with Exponential Linear Unit
2017-04-22Evolution77.0 Large-Scale Evolution of Image Classifiers
2017-05-30Deep Complex72.91 Deep Complex Networks
2017-06-06NiN+Superclass+CDJ69.0 Deep Convolutional Decision Jungle for Image Classification
Imagenet Image Recognition
DateAlgorithmErrorPaper / Source
2010-08-31NEC UIUC0.28191 ImageNet Large Scale Visual Recognition Competition 2010 (ILSVRC2010)
2011-10-26XRCE0.2577 ImageNet Large Scale Visual Recognition Competition 2011 (ILSVRC2011)
2012-10-13AlexNet / SuperVision0.16422 ImageNet Large Scale Visual Recognition Competition 2012 (ILSVRC2012) (algorithm from ImageNet Classification with Deep Convolutional Neural Networks)
2013-11-14Clarifai0.11743 ImageNet Large Scale Visual Recognition Competition 2013 (ILSVRC2013)
2014-08-18VGG0.07405 ImageNet Large Scale Visual Recognition Competition 2014 (ILSVRC2014)
2015-04-10withdrawn0.0458 Deep Image: Scaling up Image Recognition
2015-12-10MSRA0.03567 ILSVRC2015 Results
2016-09-26Trimps-Soushen0.02991 ILSVRC2016 Results
2017-07-21SE-ResNet152 / WMW0.02251 ILSVRC2017 Results
MNIST handwritten digit recognition
DateAlgorithm% errorPaper / Source
2002-07-01ISVM0.56 Training Invariant Support Vector Machines
2002-07-01Shape contexts0.63 Shape matching and object recognition using shape contexts
2003-07-01Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis0.4 Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis
2003-07-01CNN+Gabor Filters0.68 Handwritten Digit Recognition using Convolutional Neural Networks and Gabor Filters
2003-07-01CNN1.19 Convolutional Neural Networks
2006-07-01Energy-Based Sparse Represenation0.39 Efficient Learning of Sparse Representations with an Energy-Based Model
2006-07-01Reducing the dimensionality of data with neural networks1.2 Reducing the dimensionality of data with neural networks
2007-07-01Deformation Models0.54 Deformation Models for Image Recognition
2007-07-01Trainable feature extractor0.54 A trainable feature extractor for handwritten digit recognition
2007-07-01invariant feature hierarchies0.62 Unsupervised learning of invariant feature hierarchies with applications to object recognition
2008-07-01Sparse Coding0.59 Simple Methods for High-Performance Digit Recognition Based on Sparse Coding
2008-07-01DBN1.12 CS81: Learning words with Deep Belief Networks
2008-07-01Deep learning via semi-supervised embedding1.5 Deep learning via semi-supervised embedding
2009-07-01The Best Multi-Stage Architecture0.53 What is the Best Multi-Stage Architecture for Object Recognition?
2009-07-01CDBN0.82 Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations
2009-07-01Large-Margin kNN0.94 Large-Margin kNN Classification using a Deep Encoder Network
2009-07-01Deep Boltzmann Machines0.95 Deep Boltzmann Machines
2010-03-01DBSNN0.35 Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition
2010-07-01Supervised Translation-Invariant Sparse Coding0.84 Supervised Translation-Invariant Sparse Coding
2011-07-01On Optimization Methods for Deep Learning0.69 On Optimization Methods for Deep Learning
2012-06-16MCDNN0.23 Multi-column Deep Neural Networks for Image Classification
2012-06-16Receptive Field Learning0.64 Beyond Spatial Pyramids: Receptive Field Learning for Pooled Image Features
2013-02-28COSFIRE0.52 Trainable COSFIRE Filters for Keypoint Detection and Pattern Recognition
2013-06-16DropConnect0.21 Regularization of Neural Networks using DropConnect
2013-06-16Maxout Networks0.45 Maxout Networks
2013-07-01Sparse Activity and Sparse Connectivity in Supervised Learning0.75 Sparse Activity and Sparse Connectivity in Supervised Learning
2014-04-14NiN0.47 Network in Network
2014-06-21PCANet0.62 PCANet: A Simple Deep Learning Baseline for Image Classification?
2014-07-01DSN0.39 Deeply-Supervised Nets
2014-07-01StrongNet1.1 StrongNet: mostly unsupervised image recognition with strong neurons
2014-08-28CKN0.39 Convolutional Kernel Networks
2015-02-03Explaining and Harnessing Adversarial Examples0.78 Explaining and Harnessing Adversarial Examples
2015-02-28Fractional MP0.32 Fractional Max-Pooling
2015-03-11C-SVDDNet0.35 C-SVDDNet: An Effective Single-Layer Network for Unsupervised Feature Learning
2015-04-05HOPE0.4 Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Probe and Learn Neural Networks
2015-05-13APAC0.23 APAC: Augmented PAttern Classification with Neural Networks
2015-05-31FLSCNN0.37 Enhanced Image Classification With a Fast-Learning Shallow Convolutional Neural Network
2015-06-08RCNN-960.31 Recurrent Convolutional Neural Network for Object Recognition
2015-06-12ReNet0.45 ReNet: A Recurrent Neural Network Based Alternative to Convolutional Networks
2015-07-01MLR DNN0.42 Multi-Loss Regularized Deep Neural Network
2015-07-01Deep Fried Convnets0.71 Deep Fried Convnets
2015-07-12DCNN+GFE0.46 Deep Convolutional Neural Networks as Generic Feature Extractors
2015-09-17MIM0.35 ± 0.03On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units
2015-10-05Tree+Max-Avg pooling0.29 Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree
2015-11-09BNM NiN0.24 Batch-normalized Maxout Network in Network
2015-11-18CMsC0.33 Competitive Multi-scale Convolution
2015-12-07VDN0.45 Training Very Deep Networks
2015-12-07BinaryConnect1.01 BinaryConnect: Training Deep Neural Networks with binary weights during propagations
2016-01-02Convolutional Clustering1.4 Convolutional Clustering for Unsupervised Learning
2016-01-04Fitnet-LSUV-SVM0.38 All you need is a good init
MSRC-21 image semantic labelling (per-class)
DateAlgorithm% correctPaper / Source
2008-07-01STF67.0 Semantic Texton Forests for Image Categorization and Segmentation
2009-07-01TextonBoost57.0 TextonBoost for Image Understanding
2010-07-01Auto-Context69.0 Auto-Context and Its Application to High-Level Vision Tasks and 3D Brain Image Segmentation
2010-07-01HCRF+CO77.0 Graph Cut based Inference with Co-occurrence Statistics
2011-07-01Are Spatial and Global Constraints Really Necessary for Segmentation?77.0 Are Spatial and Global Constraints Really Necessary for Segmentation?
2011-12-17FC CRF78.0 Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials
2012-06-16Describing the Scene as a Whole: Joint Object Detection, Scene Classification and Semantic Segmentation79.0 Describing the Scene as a Whole: Joint Object Detection, Scene Classification and Semantic Segmentation
2012-07-01Harmony Potentials80.0 Harmony Potentials - Fusing Local and Global Scale for Semantic Image Segmentation
2012-10-07PMG72.8 PatchMatchGraph: Building a Graph of Dense Patch Correspondences for Label Transfer
2012-10-07Kernelized SSVM/CRF76.0 Structured Image Segmentation using Kernelized Features
2013-10-29MPP78.2 Morphological Proximity Priors: Spatial Relationships for Semantic Segmentation
2014-07-01Large FC CRF80.9 Large-Scale Semantic Co-Labeling of Image Sets
MSRC-21 image semantic labelling (per-pixel)
DateAlgorithm% correctPaper / Source
2008-07-01STF72.0 Semantic Texton Forests for Image Categorization and Segmentation
2009-07-01TextonBoost72.0 TextonBoost for Image Understanding
2010-07-01Auto-Context78.0 Auto-Context and Its Application to High-Level Vision Tasks and 3D Brain Image Segmentation
2010-07-01HCRF+CO87.0 Graph Cut based Inference with Co-occurrence Statistics
2011-07-01Are Spatial and Global Constraints Really Necessary for Segmentation?85.0 Are Spatial and Global Constraints Really Necessary for Segmentation?
2011-12-17FC CRF86.0 Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials
2012-06-16Describing the Scene as a Whole: Joint Object Detection, Scene Classification and Semantic Segmentation86.0 Describing the Scene as a Whole: Joint Object Detection, Scene Classification and Semantic Segmentation
2012-07-01Harmony Potentials83.0 Harmony Potentials - Fusing Local and Global Scale for Semantic Image Segmentation
2012-10-07PatchMatchGraph79.0 PatchMatchGraph: Building a Graph of Dense Patch Correspondences for Label Transfer
2012-10-07Kernelized SSVM/CRF82.0 Structured Image Segmentation using Kernelized Features
2013-10-29MPP85.0 Morphological Proximity Priors: Spatial Relationships for Semantic Segmentation
2014-07-01Large FC CRF86.8 Large-Scale Semantic Co-Labeling of Image Sets
STL-10 Image Recognition
DateAlgorithm% correctPaper / Source
2011-12-17Receptive Fields60.1 Selecting Receptive Fields in Deep Networks
2012-06-26Invariant Representations with Local Transformations58.7 Learning Invariant Representations with Local Transformations
2012-07-01Simulated Fixations61.0 Deep Learning of Invariant Features via Simulated Fixations in Video
2012-07-01RGB-D Based Object Recognition64.5 Unsupervised Feature Learning for RGB-D Based Object Recognition
2012-12-03Deep Learning of Invariant Features via Simulated Fixations in Video56.5 Deep Learning of Invariant Features via Simulated Fixations in Video
2012-12-03Discriminative Learning of Sum-Product Networks62.3 Discriminative Learning of Sum-Product Networks
2013-01-15Pooling-Invariant58.28 Pooling-Invariant Image Feature Learning
2013-07-01Multi-Task Bayesian Optimization70.1 Multi-Task Bayesian Optimization
2014-02-24No more meta-parameter tuning in unsupervised sparse feature learning61.0 No more meta-parameter tuning in unsupervised sparse feature learning
2014-06-21Nonnegativity Constraints 67.9 Stable and Efficient Representation Learning with Nonnegativity Constraints
2014-06-23DFF Committees68.0 Committees of deep feedforward networks trained with few data
2014-08-28CKN62.32 Convolutional Kernel Networks
2014-12-08Discriminative Unsupervised Feature Learning with Convolutional Neural Networks72.8 Discriminative Unsupervised Feature Learning with Convolutional Neural Networks
2015-02-13An Analysis of Unsupervised Pre-training in Light of Recent Advances70.2 An Analysis of Unsupervised Pre-training in Light of Recent Advances
2015-03-11C-SVDDNet68.23 C-SVDDNet: An Effective Single-Layer Network for Unsupervised Feature Learning
2015-07-01Deep Representation Learning with Target Coding73.15 Deep Representation Learning with Target Coding
2015-10-11SWWAE74.33 Stacked What-Where Auto-encoders
2016-01-02Convolutional Clustering74.1 Convolutional Clustering for Unsupervised Learning
2016-11-19CC-GAN²77.79 ± 0.8Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks
Street View House Numbers (SVHN)
DateAlgorithm% errorPaper / Source
2012-07-01Convolutional neural networks applied to house numbers digit classification4.9 Convolutional neural networks applied to house numbers digit classification
2013-01-16Stochastic Pooling2.8 Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
2013-06-16Regularization of Neural Networks using DropConnect1.94 Regularization of Neural Networks using DropConnect
2013-06-16Maxout2.47 Maxout Networks
2014-04-14DCNN2.16 Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks
2014-04-14NiN2.35 Network in Network
2014-07-01DSN1.92 Deeply-Supervised Nets
2015-05-31FLSCNN3.96 Enhanced Image Classification With a Fast-Learning Shallow Convolutional Neural Network
2015-06-08RCNN-961.77 Recurrent Convolutional Neural Network for Object Recognition
2015-06-12ReNet2.38 ReNet: A Recurrent Neural Network Based Alternative to Convolutional Networks
2015-07-01MLR DNN1.92 Multi-Loss Regularized Deep Neural Network
2015-09-17MIM1.97 ± 0.08On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units
2015-10-05Tree+Max-Avg pooling1.69 Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree
2015-11-09BNM NiN1.81 Batch-normalized Maxout Network in Network
2015-11-18CMsC1.76 Competitive Multi-scale Convolution
2015-12-07BinaryConnect2.15 BinaryConnect: Training Deep Neural Networks with binary weights during propagations
2017-05-30Deep Complex3.3 Deep Complex Networks

Visual Question Answering

Comprehending an image involves more than recognising what objects or entities are within it; it also means recognising events, relationships, and context from the image. This problem requires sophisticated image recognition, language, world-modelling, and "image comprehension" abilities. There are several datasets in use. The illustration is from VQA, which was generated by asking Amazon Mechanical Turk workers to propose questions about photos from Microsoft's COCO image collection.

In [7]:
plot = vqa_real_oe.graph(keep=True, title="COCO Visual Question Answering (VQA) real open ended", llabel="VQA 1.0")
vqa2_real_oe.graph(reuse=plot, llabel="VQA 2.0", fcol="#00a0a0", pcol="#a000a0")
for m in image_comprehension.metrics:
    if not m.graphed:
        m.graph()
In [8]:
HTML(image_comprehension.tables())
Out[8]:
COCO Visual Question Answering (VQA) abstract 1.0 multiple choice
DateAlgorithm% correctPaper / Source
2016-07-01LSTM blind61.41 VQA: Visual Question Answering (algorithm from Yin and Yang: Balancing and Answering Binary Visual Questions)
2016-07-01LSTM + global features69.21 VQA: Visual Question Answering (algorithm from Yin and Yang: Balancing and Answering Binary Visual Questions)
2016-07-01Dualnet ensemble71.18 VQA: Visual Question Answering (algorithm from DualNet: Domain-Invariant Network for Visual Question Answering)
2016-09-19Graph VQA74.37 Graph-Structured Representations for Visual Question Answering
COCO Visual Question Answering (VQA) abstract images 1.0 open ended
DateAlgorithm% correctPaper / Source
2016-07-01LSTM blind57.19 VQA: Visual Question Answering (algorithm from Yin and Yang: Balancing and Answering Binary Visual Questions)
2016-07-01LSTM + global features65.02 VQA: Visual Question Answering (algorithm from Yin and Yang: Balancing and Answering Binary Visual Questions)
2016-07-01Dualnet ensemble69.73 VQA: Visual Question Answering (algorithm from DualNet: Domain-Invariant Network for Visual Question Answering)
2016-09-19Graph VQA70.42 Graph-Structured Representations for Visual Question Answering
COCO Visual Question Answering (VQA) real images 1.0 multiple choice
DateAlgorithm% correctPaper / Source
2015-05-03LSTM Q+I63.1 VQA: Visual Question Answering
2015-12-15iBOWIMG baseline61.97 Simple Baseline for Visual Question Answering
2016-04-06FDA64.2 A Focused Dynamic Attention Model for Visual Question Answering
2016-05-31HQI+ResNet66.1 Hierarchical Co-Attention for Visual Question Answering
2016-06-05MRN66.33 Multimodal Residual Learning for Visual QA
2016-06-06MCB 7 att.70.1 Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding (source code)
2016-08-06joint-loss67.3 Training Recurrent Answering Units with Joint Loss Minimization for VQA
COCO Visual Question Answering (VQA) real images 1.0 open ended
DateAlgorithm% correctPaper / Source
2015-05-03LSTM Q+I58.2 VQA: Visual Question Answering
2015-12-15iBOWIMG baseline55.89 Simple Baseline for Visual Question Answering
2016-01-26SAN58.9 Stacked Attention Networks for Image Question Answering
2016-03-09CNN-RNN59.5 Image Captioning and Visual Question Answering Based on Attributes and Their Related External Knowledge
2016-03-19SMem-VQA58.24 Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering
2016-04-06FDA59.5 A Focused Dynamic Attention Model for Visual Question Answering
2016-05-31HQI+ResNet62.1 Hierarchical Co-Attention for Visual Question Answering
2016-06-05MRN + global features61.84 Multimodal Residual Learning for Visual QA
2016-06-06MCB 7 att.66.5 Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding (source code)
2016-08-06joint-loss63.2 Training Recurrent Answering Units with Joint Loss Minimization for VQA
2017-08-06N2NMN64.2 Learning to Reason: End-to-End Module Networks for Visual Question Answering (source code)
COCO Visual Question Answering (VQA) real images 2.0 open ended
DateAlgorithm% correctPaper / Source
2016-12-02d-LSTM+nI54.22 Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering (algorithm from GitHub - VT-vision-lab/VQA_LSTM_CNN: Train a deeper LSTM and normalized CNN Visual Question Answering model. This current code can get 58.16 on OpenEnded and 63.09 on Multiple-Choice on test-standard.)
2016-12-02MCB62.27 Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering (algorithm from Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding)
2017-07-25Up-Down70.34 Bottom-Up and Top-Down Attention for Image Captioning and VQA
2017-07-26DLAIT68.07 VQA: Visual Question Answering
2017-07-26HDU-USYD-UNCC68.16 VQA: Visual Question Answering
Visual7W
DateAlgorithm% correctPaper / Source
2016-06-06MCB+Att.62.2 Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding
2016-11-30CMN72.53 Modeling Relationships in Referential Expressions with Compositional Modular Networks

Game Playing

In principle, games are a sufficiently open-ended framework that all of intelligence could be captured within them. We can imagine a "ladder of games" of growing sophistication and complexity, from simple strategy and arcade games to others which require very sophisticated language, world-modelling, vision and reasoning ability. At present, published reinforcement learning agents are climbing the first few rungs of this ladder.

Abstract Strategy Games

As an easier case, abstract games like chess, go, checkers, etc., can be played with no knowledge of the human world or physics. Although this domain has largely been solved to super-human performance levels, there are a few loose ends to tie up, especially in terms of having agents learn the rules of arbitrary abstract games effectively from various plausible starting points (eg, textual descriptions of the rules or examples of correct play).

In [9]:
from data.strategy_games import *
computer_chess.graph()
In [10]:
HTML(computer_chess.table())
Out[10]:
Computer Chess
DateAlgorithmELOPaper / Source
1984-12-31Novag Super Constellation 6502 4 MHz1631 Swedish Chess Computer Association - Wikipedia
1985-12-31Mephisto Amsterdam 68000 12 MHz1827 Swedish Chess Computer Association - Wikipedia
1986-12-31Mephisto Amsterdam 68000 12 MHz1827 Swedish Chess Computer Association - Wikipedia
1987-12-31Mephisto Dallas 68020 14 MHz1923 Swedish Chess Computer Association - Wikipedia
1988-12-31Mephisto MM 4 Turbo Kit 6502 16 MHz1993 Swedish Chess Computer Association - Wikipedia
1989-12-31Mephisto Portorose 68020 12 MHz2027 Swedish Chess Computer Association - Wikipedia
1990-12-31Mephisto Portorose 68030 36 MHz2138 Swedish Chess Computer Association - Wikipedia
1991-12-31Mephisto Vancouver 68030 36 MHz2127 Swedish Chess Computer Association - Wikipedia
1992-12-31Chess Machine Schroder 3.0 ARM2 30 MHz2174 Swedish Chess Computer Association - Wikipedia
1993-12-31Mephisto Genius 2.0 486/50-66 MHz2235 Swedish Chess Computer Association - Wikipedia
1995-12-31MChess Pro 5.0 Pentium 90 MHz2306 Swedish Chess Computer Association - Wikipedia
1996-12-31Rebel 8.0 Pentium 90 MHz2337 Swedish Chess Computer Association - Wikipedia
1997-05-11Deep Blue2725 ± 25What was Deep Blue's Elo rating? - Quora
1997-12-31HIARCS 6.0 49MB P200 MMX2418 Swedish Chess Computer Association - Wikipedia
1998-12-31Fritz 5.0 PB29% 67MB P200 MMX2460 Swedish Chess Computer Association - Wikipedia
1999-12-31Chess Tiger 12.0 DOS 128MB K6-2 450 MHz2594 Swedish Chess Computer Association - Wikipedia
2000-12-31Fritz 6.0 128MB K6-2 450 MHz2607 Swedish Chess Computer Association - Wikipedia
2001-12-31Chess Tiger 14.0 CB 256MB Athlon 12002709 Swedish Chess Computer Association - Wikipedia
2002-12-31Deep Fritz 7.0 256MB Athlon 1200 MHz2759 Swedish Chess Computer Association - Wikipedia
2003-12-31Shredder 7.04 UCI 256MB Athlon 1200 MHz2791 Swedish Chess Computer Association - Wikipedia
2004-12-31Shredder 8.0 CB 256MB Athlon 1200 MHz2800 Swedish Chess Computer Association - Wikipedia
2005-12-31Shredder 9.0 UCI 256MB Athlon 1200 MHz2808 Swedish Chess Computer Association - Wikipedia
2006-05-27Rybka 1.1 64bit2995 ± 25CCRL 40/40 - Complete list
2006-12-31Rybka 1.2 256MB Athlon 1200 MHz2902 Swedish Chess Computer Association - Wikipedia
2007-12-31Rybka 2.3.1 Arena 256MB Athlon 1200 MHz2935 Swedish Chess Computer Association - Wikipedia
2008-12-31Deep Rybka 3 2GB Q6600 2.4 GHz3238 Swedish Chess Computer Association - Wikipedia
2009-12-31Deep Rybka 3 2GB Q6600 2.4 GHz3232 Swedish Chess Computer Association - Wikipedia
2010-08-07Rybka 4 64bit3269 ± 22CCRL 40/40 - Complete list
2010-12-31Deep Rybka 3 2GB Q6600 2.4 GHz3227 Swedish Chess Computer Association - Wikipedia
2011-12-31Deep Rybka 4 2GB Q6600 2.4 GHz3216 Swedish Chess Computer Association - Wikipedia
2012-12-31Deep Rybka 4 x64 2GB Q6600 2.4 GHz3221 Swedish Chess Computer Association - Wikipedia
2013-07-20Houdini 3 64bit3248 ± 16Wayback Machine
2013-12-31Komodo 5.1 MP x64 2GB Q6600 2.4 GHz3241 Swedish Chess Computer Association - Wikipedia
2014-12-31Komodo 7.0 MP x64 2GB Q6600 2.4 GHz3295 Swedish Chess Computer Association - Wikipedia
2015-07-04Komodo 93332 ± 24CCRL 40/40 - Complete list
2015-12-31Stockfish 6 MP x64 2GB Q6600 2.4 GHz3334 Swedish Chess Computer Association - Wikipedia
2016-12-31Komodo 9.1 MP x64 2GB Q6600 2.4 GHz3366 Swedish Chess Computer Association - Wikipedia
2017-02-27Stockfish3393 ± 50CCRL 40/40 - Index

Real-time video games

Computer and video games are a very open-ended domain. It is possible that some existing or future games could be so elaborate that they are "AI complete". In the meantime, a lot of interesting progress is likely to come from exploring the "ladder of games" of increasing complexity on various fronts.

Atari 2600

Atari 2600 games have been a popular target for reinforcement learning, especially at DeepMind and OpenAI. RL agents now play most but not all of these games better than humans.

In the Atari 2600 data, the "noop" label indicates the game was played with a random number (up to 30) of "no-op" moves at the beginning, while the "hs" (human starts) label indicates that the starting condition was a state sampled from 100 games played by expert human players. These forms of randomisation give RL systems a diversity of game states to learn from.
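
For concreteness, the "noop" condition amounts to a small wrapper around the environment reset. The sketch below assumes a Gym-style Atari interface (env.reset() / env.step()) and is illustrative rather than code from this project:

    import random

    NOOP_ACTION = 0  # in the Arcade Learning Environment, action 0 is "do nothing"

    def noop_start(env, max_noops=30):
        """Reset env, then take a random number (1-30) of no-op steps so the
        agent starts from a slightly different state each episode."""
        obs = env.reset()
        for _ in range(random.randint(1, max_noops)):
            obs, reward, done, info = env.step(NOOP_ACTION)
            if done:
                obs = env.reset()
        return obs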

In [11]:
from data.video_games import *
from scrapers.atari import *
simple_games.graphs()
In [12]:
HTML(simple_games.tables())
Out[12]:
Atari 2600 Alien
DateAlgorithmRaw ScorePaper / Source
2012-07-14SARSA103.2 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear939.2 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN3069.0 ± 1093.0Human-level control through deep reinforcement learning
2015-07-15Gorila813.5 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs634.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop1620.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs1486.5 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop3747.7 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel noop4461.4 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08Prior+Duel hs823.7 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08DDQN (tuned) hs1033.4 Deep Reinforcement Learning with Double Q-learning
2016-01-06Prior hs1334.7 Prioritized Experience Replay
2016-01-06Prior noop4203.8 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop3213.5 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop3941.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs182.1 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs518.4 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs945.3 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop994.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop3166.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Amidar
DateAlgorithmRaw ScorePaper / Source
2012-07-14SARSA183.6 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear103.4 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN739.5 ± 3024.0Human-level control through deep reinforcement learning
2015-07-15Gorila189.2 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs178.4 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop978.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs172.7 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop1793.3 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel noop2354.5 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs169.1 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs238.4 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs129.1 Prioritized Experience Replay
2016-01-06Prior noop1838.9 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop782.5 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop2296.8 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C LSTM hs173.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs263.9 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF (1 day) hs283.9 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop112.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop1735.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Assault
DateAlgorithmRaw ScorePaper / Source
2012-07-14SARSA537.0 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear628.0 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN3359.0 ± 775.0Human-level control through deep reinforcement learning
2015-07-15Gorila1195.8 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs3489.3 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop4280.4 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs3994.8 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop4621.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop5393.2 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-12-08DDQN (tuned) hs6060.8 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs10950.6 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs6548.9 Prioritized Experience Replay
2016-01-06Prior noop7672.1 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop9011.6 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop11477.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs3746.1 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs5474.9 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs14497.9 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop1673.9 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop7203.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Asterix
DateAlgorithmRaw ScorePaper / Source
2012-07-14SARSA1332.0 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear987.3 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN6012.0 ± 1744.0Human-level control through deep reinforcement learning
2015-07-15Gorila3324.7 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs3170.5 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop4359.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs15840.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop17356.5 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel noop28188.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs16837.0 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs364200.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs22484.5 Prioritized Experience Replay
2016-01-06Prior noop31527.0 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop18919.5 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop375080.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs6723.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs17244.5 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs22140.5 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop1440.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop406211.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Asteroids
DateAlgorithmRaw ScorePaper / Source
2012-07-14SARSA89.0 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear907.3 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN1629.0 ± 542.0Human-level control through deep reinforcement learning
2015-07-15Gorila933.6 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN noop1364.5 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN hs1458.7 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop734.7 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs2035.4 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop2837.7 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08Prior+Duel hs1021.9 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08DDQN (tuned) hs1193.2 Deep Reinforcement Learning with Double Q-learning
2016-01-06Prior hs1745.1 Prioritized Experience Replay
2016-01-06Prior noop2654.3 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop2869.3 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop1192.7 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs3009.4 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs4474.5 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs5093.1 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop1562.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop1516.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Atlantis
DateAlgorithmRaw ScorePaper / Source
2012-07-14SARSA852.9 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear62687.0 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN85641.0 ± 17600.0Human-level control through deep reinforcement learning
2015-07-15Gorila629166.5 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN noop279987.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN hs292491.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop106056.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel noop382572.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel hs445360.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs319688.0 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs423252.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs330647.0 Prioritized Experience Replay
2016-01-06Prior noop357324.0 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop340076.0 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop395762.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs772392.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs875822.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs911091.0 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop1267410.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop841075.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Bank Heist
DateAlgorithmRaw ScorePaper / Source
2012-07-14SARSA67.4 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear190.8 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN429.7 ± 650.0Human-level control through deep reinforcement learning
2015-07-15Gorila399.4 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs312.7 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop455.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop1030.6 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs1129.3 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop1611.9 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs886.0 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs1004.6 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs876.6 Prioritized Experience Replay
2016-01-06Prior noop1054.6 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop1103.3 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop1503.1 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C LSTM hs932.8 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF (1 day) hs946.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs970.1 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop225.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop976.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Battle Zone
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 16.2 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 15820.0 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 26300.0 ± 7725.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 19938.0 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 23750.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 29900.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 31320.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 31700.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 37150.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 24740.0 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 30650.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 25520.0 | Prioritized Experience Replay
2016-01-06 | Prior noop | 31530.0 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 8220.0 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 35520.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 11340.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 12950.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 20760.0 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 16600.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 28742.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Beam Rider
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 1743.0 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 929.4 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2013-12-19 | DQN best | 5184 | Playing Atari with Deep Reinforcement Learning
2015-02-26 | Nature DQN | 6846.0 ± 1619.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 3822.1 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN noop | 8627.5 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN hs | 9743.2 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel noop | 12164.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 13772.8 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel hs | 14591.3 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 17417.2 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 37412.2 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior noop | 23384.2 | Prioritized Experience Replay
2016-01-06 | Prior hs | 31181.3 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 8299.4 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 30276.5 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 13235.9 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 22707.9 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 24622.2 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 744.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 14074.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Berzerk
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2015-09-22 | DQN hs | 493.4 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 585.6 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 910.6 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 1225.4 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 1472.6 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 1011.1 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 2178.6 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 865.9 | Prioritized Experience Replay
2016-01-06 | Prior noop | 1305.6 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 1199.6 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 3409.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF hs | 817.9 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 862.2 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF (1 day) hs | 1433.4 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 686.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 1645.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Bowling
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 36.4 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 43.9 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 42.4 ± 88.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 54.0 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN noop | 50.4 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN hs | 56.5 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel noop | 65.5 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel hs | 65.7 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 68.1 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-12-08 | Prior+Duel hs | 50.4 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08 | DDQN (tuned) hs | 69.6 | Deep Reinforcement Learning with Double Q-learning
2016-01-06 | Prior noop | 47.9 | Prioritized Experience Replay
2016-01-06 | Prior hs | 52.0 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 102.1 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 46.7 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF hs | 35.1 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF (1 day) hs | 36.2 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 41.8 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 30.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 81.8 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Boxing
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 9.8 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 44.0 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 71.8 ± 8.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 74.2 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 70.3 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 88.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 77.3 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 91.6 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 99.4 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 73.5 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 79.2 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 72.3 | Prioritized Experience Replay
2016-01-06 | Prior noop | 95.6 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 99.3 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 98.9 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 33.7 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 37.3 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 59.8 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 49.8 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 97.8 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Breakout
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 6.1 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 5.2 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2013-12-19 | DQN best | 225 | Playing Atari with Deep Reinforcement Learning
2015-02-26 | Nature DQN | 401.2 ± 26.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 313.0 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 354.5 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 385.5 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel noop | 345.3 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel hs | 411.6 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 418.5 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-12-08 | Prior+Duel hs | 354.6 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08 | DDQN (tuned) hs | 368.9 | Deep Reinforcement Learning with Double Q-learning
2016-01-06 | Prior hs | 343.0 | Prioritized Experience Replay
2016-01-06 | Prior noop | 373.9 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 344.1 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 366.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 551.6 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 681.9 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 766.8 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 9.5 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 748.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Centipede
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 4647.0 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 8803.0 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 8309.0 ± 5237.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 6296.9 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 3973.9 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 4657.7 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 4881.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 5409.4 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 7561.4 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 3853.5 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 5570.2 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 3489.1 | Prioritized Experience Replay
2016-01-06 | Prior noop | 4463.2 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 49065.8 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 7687.5 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C LSTM hs | 1997.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF (1 day) hs | 3306.5 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 3755.8 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 7783.9 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 9646.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Chopper Command
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 16.9 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 1582.0 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 6687.0 ± 2916.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 3191.8 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 5017.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 6126.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 3784.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 5809.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 11215.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 3495.0 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 8058.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 4635.0 | Prioritized Experience Replay
2016-01-06 | Prior noop | 8600.0 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 775.0 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 13185.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 4669.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 7021.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 10150.0 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 3710.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 15600.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Crazy Climber
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 149.8 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 23411.0 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 114103.0 ± 22797.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 65451.0 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 98128.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 110763.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | DDQN (tuned) noop | 117282.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel hs | 124566.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel noop | 143570.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 113782.0 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 127853.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 127512.0 | Prioritized Experience Replay
2016-01-06 | Prior noop | 141161.0 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 119679.0 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 162224.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 101624.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 112646.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 138518.0 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 26430.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 179877.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Defender
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2015-09-22 | DQN hs | 15917.5 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 23633.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 33996.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 35338.5 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 42214.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 27510.0 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 34415.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 23666.5 | Prioritized Experience Replay
2016-01-06 | Prior noop | 31286.5 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 11099.0 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 41324.5 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 36242.5 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 56533.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 233021.5 | Asynchronous Methods for Deep Reinforcement Learning
2017-07-21 | C51 noop | 47092.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Demon Attack
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 0.0 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 520.5 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 9711.0 ± 2406.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 14880.1 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN noop | 12149.4 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN hs | 12550.7 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 56322.8 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 58044.2 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 60813.3 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 69803.4 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 73371.3 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 61277.5 | Prioritized Experience Replay
2016-01-06 | Prior noop | 71846.4 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 63644.9 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 72878.6 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 84997.5 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 113308.4 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 115201.9 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 1166.5 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 130955.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Double Dunk
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | -16.0 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | -13.1 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | -18.1 ± 2.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | -11.3 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN noop | -6.6 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN hs | -6.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | DDQN (tuned) noop | -5.5 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel hs | -0.8 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel noop | 0.1 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | Prior+Duel hs | -10.7 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08 | DDQN (tuned) hs | -0.3 | Deep Reinforcement Learning with Double Q-learning
2016-01-06 | Prior hs | 16.0 | Prioritized Experience Replay
2016-01-06 | Prior noop | 18.5 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | -11.5 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | -12.5 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF hs | -0.1 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF (1 day) hs | 0.1 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 0.1 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 0.2 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 2.5 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Enduro
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 159.4 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 129.1 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2013-12-19 | DQN best | 661 | Playing Atari with Deep Reinforcement Learning
2015-02-26 | Nature DQN | 301.8 ± 24.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 71.0 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 626.7 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 729.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | DDQN (tuned) noop | 1211.8 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel hs | 2077.4 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel noop | 2258.2 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 1216.6 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 2223.9 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 1831.0 | Prioritized Experience Replay
2016-01-06 | Prior noop | 2093.0 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 2002.1 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 2306.4 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF hs | -82.5 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | -82.5 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF (1 day) hs | -82.2 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 95.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 3454.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Fishing Derby
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | -85.1 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | -89.5 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | -0.8 ± 19.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 4.6 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN noop | -4.9 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN hs | -1.6 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | -4.1 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 15.5 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 46.4 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 3.2 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 17.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 9.8 | Prioritized Experience Replay
2016-01-06 | Prior noop | 39.5 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 45.1 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 41.3 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 13.6 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 18.8 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 22.6 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | -49.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 8.9 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Freeway
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 19.7 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 19.1 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 30.3 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 10.2 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-10 | MP-EB | 27.0 | Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models
2015-09-22 | DQN hs | 26.9 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 30.8 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel noop | 0.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel hs | 0.2 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 33.3 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-12-08 | Prior+Duel hs | 28.2 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08 | DDQN (tuned) hs | 28.8 | Deep Reinforcement Learning with Double Q-learning
2016-01-06 | Prior hs | 28.9 | Prioritized Experience Replay
2016-01-06 | Prior noop | 33.7 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 33.4 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 33.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 0.1 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 0.1 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 0.1 | Asynchronous Methods for Deep Reinforcement Learning
2016-08-22 | A3C-CTS | 30.48 | Unifying Count-Based Exploration and Intrinsic Motivation
2016-12-13 | TRPO-hash | 34.0 | Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
2017-03-03 | DQN-PixelCNN | 31.7 | Count-Based Exploration with Neural Density Models
2017-03-03 | DQN-CTS | 33.0 | Count-Based Exploration with Neural Density Models
2017-03-10 | ES FF (1 hour) noop | 31.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-06-25 | Sarsa-φ-EB | 0.0 | Count-Based Exploration in Feature Space for Reinforcement Learning
2017-06-25 | Sarsa-ε | 29.9 | Count-Based Exploration in Feature Space for Reinforcement Learning
2017-07-21 | C51 noop | 33.9 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Frostbite
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 180.9 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 216.9 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 328.3 ± 250.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 426.6 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-10 | MP-EB | 507.0 | Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models
2015-09-22 | DQN hs | 496.1 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 797.4 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | DDQN (tuned) noop | 1683.3 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel hs | 2332.4 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel noop | 4672.8 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 1448.1 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 4038.4 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 3510.0 | Prioritized Experience Replay
2016-01-06 | Prior noop | 4380.1 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 3469.6 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 7413.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 180.1 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 190.5 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 197.6 | Asynchronous Methods for Deep Reinforcement Learning
2016-12-13 | TRPO-hash | 5214.0 | Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 370.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-06-25 | Sarsa-ε | 1394.3 | Count-Based Exploration in Feature Space for Reinforcement Learning
2017-06-25 | Sarsa-φ-EB | 2770.1 | Count-Based Exploration in Feature Space for Reinforcement Learning
2017-07-21 | C51 noop | 3965.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Gopher
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 2368.0 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 1288.0 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 8520.0 ± 3279.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 4373.0 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 8190.4 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 8777.4 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | DDQN (tuned) noop | 14840.8 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 15718.4 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel hs | 20051.4 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 15253.0 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 105148.4 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior noop | 32487.2 | Prioritized Experience Replay
2016-01-06 | Prior hs | 34858.8 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 56218.2 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 104368.2 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 8442.8 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 10022.8 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 17106.8 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 582.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 33641.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Gravitar
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 429.0 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 387.7 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 306.7 ± 223.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 538.4 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 298.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 473.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 297.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 412.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 588.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | Prior+Duel hs | 167.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08 | DDQN (tuned) hs | 200.5 | Deep Reinforcement Learning with Double Q-learning
2016-01-06 | Prior hs | 269.5 | Prioritized Experience Replay
2016-01-06 | Prior noop | 548.5 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 483.5 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 238.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 269.5 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 303.5 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 320.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-08-22 | A3C-CTS | 238.68 | Unifying Count-Based Exploration and Intrinsic Motivation
2017-03-03 | DQN-CTS | 238.0 | Count-Based Exploration with Neural Density Models
2017-03-03 | DQN-PixelCNN | 498.3 | Count-Based Exploration with Neural Density Models
2017-03-10 | ES FF (1 hour) noop | 805.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 440.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 HERO
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 7295.0 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 6459.0 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 19950.0 ± 158.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 8963.4 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 14992.9 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 20437.8 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 15207.9 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 20130.2 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 20818.2 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 14892.5 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 15459.2 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 20889.9 | Prioritized Experience Replay
2016-01-06 | Prior noop | 23037.7 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 14225.2 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 21036.5 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 28765.8 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 28889.5 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 32464.1 | Asynchronous Methods for Deep Reinforcement Learning
2017-07-21 | C51 noop | 38874.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Ice Hockey
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | -3.2 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | -9.5 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | -1.6 ± 2.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | -1.7 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN noop | -1.9 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN hs | -1.6 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | DDQN (tuned) noop | -2.7 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel hs | -1.3 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel noop | 0.5 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | -2.5 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 0.5 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | -0.2 | Prioritized Experience Replay
2016-01-06 | Prior noop | 1.3 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | -4.1 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | -0.4 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | -4.7 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | -2.8 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | -1.7 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | -4.1 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | -3.5 | A Distributional Perspective on Reinforcement Learning
Atari 2600 James Bond
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 354.1 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 202.8 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 576.7 ± 175.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 444.0 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 697.5 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 768.5 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 835.5 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel noop | 1312.5 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 1358.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-12-08 | DDQN (tuned) hs | 573.0 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 585.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 3961.0 | Prioritized Experience Replay
2016-01-06 | Prior noop | 5148.0 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 507.5 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 812.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 351.5 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 541.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 613.0 | Asynchronous Methods for Deep Reinforcement Learning
2017-07-21 | C51 noop | 1909.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Kangaroo
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 8.8 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 1622.0 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 6740.0 ± 2959.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 1431.0 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 4496.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 7259.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 10334.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 12992.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 14854.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | Prior+Duel hs | 861.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08 | DDQN (tuned) hs | 11204.0 | Deep Reinforcement Learning with Double Q-learning
2016-01-06 | Prior hs | 12185.0 | Prioritized Experience Replay
2016-01-06 | Prior noop | 16200.0 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 13150.0 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 1792.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF hs | 94.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF (1 day) hs | 106.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 125.0 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 11200.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 12853.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Krull
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 3341.0 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 3372.0 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 3805.0 ± 1033.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 6363.1 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 6206.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 8422.3 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | DDQN (tuned) noop | 7920.5 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel hs | 8051.6 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel noop | 11451.9 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 6796.1 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 7658.6 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 6872.8 | Prioritized Experience Replay
2016-01-06 | Prior noop | 9728.0 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 9745.1 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 10374.4 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF hs | 5560.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 5911.4 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF (1 day) hs | 8066.6 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 8647.2 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 9735.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Kung-Fu Master
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 29151.0 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 19544.0 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 23270.0 ± 5955.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 20620.0 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 20882.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 26059.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 24288.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 29710.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 34294.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 30207.0 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 37484.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 31676.0 | Prioritized Experience Replay
2016-01-06 | Prior noop | 39581.0 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 34393.0 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 48375.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 3046.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 28819.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 40835.0 | Asynchronous Methods for Deep Reinforcement Learning
2017-07-21 | C51 noop | 48192.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Montezuma's Revenge
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 259.0 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 10.7 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 0.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 84.0 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-10 | MP-EB | 142.0 | Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models
2015-09-22 | DQN noop | 0.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN hs | 47.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | DDQN (tuned) noop | 0.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 0.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel hs | 22.0 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | Prior+Duel hs | 24.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08 | DDQN (tuned) hs | 42.0 | Deep Reinforcement Learning with Double Q-learning
2016-01-06 | Prior noop | 0.0 | Prioritized Experience Replay
2016-01-06 | Prior hs | 51.0 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 0.0 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 0.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C LSTM hs | 41.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF (1 day) hs | 53.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 67.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-08-22 | A3C-CTS | 273.7 | Unifying Count-Based Exploration and Intrinsic Motivation
2016-08-22 | DDQN-PC | 3459.0 | Unifying Count-Based Exploration and Intrinsic Motivation
2016-12-13 | TRPO-hash | 75.0 | Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
2017-03-03 | DQN-CTS | 0.0 | Count-Based Exploration with Neural Density Models
2017-03-03 | DQN-PixelCNN | 3705.5 | Count-Based Exploration with Neural Density Models
2017-03-10 | ES FF (1 hour) noop | 0.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-06-25 | Sarsa-ε | 399.5 | Count-Based Exploration in Feature Space for Reinforcement Learning
2017-06-25 | Sarsa-φ-EB | 2745.4 | Count-Based Exploration in Feature Space for Reinforcement Learning
2017-07-21 | C51 noop | 0.0 | A Distributional Perspective on Reinforcement Learning
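Because every row carries a date, the natural way to read a table like this is as a time series of the best published score. A minimal plotting sketch (hypothetical variable names, not the notebook's own plotting code), using a few Montezuma's Revenge entries hand-copied from the table above:

```python
# Score vs. publication date for a handful of the measurements listed above.
import matplotlib.pyplot as plt
from datetime import date

measurements = [
    (date(2015, 2, 26), "Nature DQN", 0.0),
    (date(2016, 8, 22), "DDQN-PC", 3459.0),
    (date(2017, 3, 3), "DQN-PixelCNN", 3705.5),
    (date(2017, 6, 25), "Sarsa-φ-EB", 2745.4),
]

dates, labels, scores = zip(*measurements)
plt.plot(dates, scores, "o")
for d, label, s in measurements:
    plt.annotate(label, (d, s))
plt.ylabel("Raw score")
plt.title("Atari 2600 Montezuma's Revenge (selected measurements)")
plt.show()
```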
Atari 2600 Ms. Pacman
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 1227.0 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 1692.0 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 2311.0 ± 525.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 1263.0 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 1092.3 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 3085.6 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | Duel hs | 2250.6 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | DDQN (tuned) noop | 2711.4 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel noop | 6283.5 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | Prior+Duel hs | 1007.8 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08 | DDQN (tuned) hs | 1241.3 | Deep Reinforcement Learning with Double Q-learning
2016-01-06 | Prior hs | 1865.9 | Prioritized Experience Replay
2016-01-06 | Prior noop | 6518.7 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 4963.8 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 3327.3 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 594.4 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 653.7 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 850.7 | Asynchronous Methods for Deep Reinforcement Learning
2017-07-21 | C51 noop | 3415.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Name This Game
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2012-07-14 | SARSA | 2247.0 | Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19 | Best linear | 2500.0 | The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26 | Nature DQN | 7257.0 ± 547.0 | Human-level control through deep reinforcement learning
2015-07-15 | Gorila | 9238.5 | Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22 | DQN hs | 6738.8 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 8207.8 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | DDQN (tuned) noop | 10616.0 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel hs | 11185.1 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel noop | 11971.1 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 8960.3 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 13637.9 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 10497.6 | Prioritized Experience Replay
2016-01-06 | Prior noop | 12270.5 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 15851.2 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 15572.5 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 5614.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 10476.1 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 12093.7 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 4503.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 12542.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Phoenix
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2015-09-22 | DQN hs | 7484.8 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22 | DQN noop | 8485.2 | Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20 | DDQN (tuned) noop | 12252.5 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20 | Duel hs | 20410.5 | Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20 | Duel noop | 23092.2 | Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08 | DDQN (tuned) hs | 12366.5 | Deep Reinforcement Learning with Double Q-learning
2015-12-08 | Prior+Duel hs | 63597.0 | Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06 | Prior hs | 16903.6 | Prioritized Experience Replay
2016-01-06 | Prior noop | 18992.7 | Prioritized Experience Replay
2016-02-24 | DDQN+Pop-Art noop | 6202.5 | Learning functions across many orders of magnitudes
2016-04-05 | Prior+Duel noop | 70324.3 | Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10 | A3C FF (1 day) hs | 28181.8 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | 52894.1 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C LSTM hs | 74786.7 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 4041.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21 | C51 noop | 17490.0 | A Distributional Perspective on Reinforcement Learning
Atari 2600 Pit Fall
Date | Algorithm | Raw Score | Paper / Source
--- | --- | --- | ---
2016-04-10 | A3C LSTM hs | -135.7 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF (1 day) hs | -123.0 | Asynchronous Methods for Deep Reinforcement Learning
2016-04-10 | A3C FF hs | -78.5 | Asynchronous Methods for Deep Reinforcement Learning
2017-03-10 | ES FF (1 hour) noop | 0.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning
Atari 2600 Pitfall!
Date | Algorithm | Raw Score | Paper / Source
2015-09-22DQN noop-286.1 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN hs-113.2 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs-46.9 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop-29.9 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel noop0.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08Prior+Duel hs-243.6 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08DDQN (tuned) hs-186.7 Deep Reinforcement Learning with Double Q-learning
2016-01-06Prior hs-427.0 Prioritized Experience Replay
2016-01-06Prior noop-356.5 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop-2.6 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop0.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-08-22A3C-CTS-259.09 Unifying Count-Based Exploration and Intrinsic Motivation
2017-03-03DQN-CTS0.0 Count-Based Exploration with Neural Density Models
2017-03-03DQN-PixelCNN0.0 Count-Based Exploration with Neural Density Models
2017-07-21C51 noop0.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Pong
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA-17.4 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear-19.0 The Arcade Learning Environment: An Evaluation Platform for General Agents
2013-12-19DQN best21 Playing Atari with Deep Reinforcement Learning
2015-02-26Nature DQN18.9 ± 1.0Human-level control through deep reinforcement learning
2015-07-15Gorila16.7 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs18.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop19.5 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs18.8 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop20.9 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel noop21.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08Prior+Duel hs18.4 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08DDQN (tuned) hs19.1 Deep Reinforcement Learning with Double Q-learning
2016-01-06Prior hs18.9 Prioritized Experience Replay
2016-01-06Prior noop20.6 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop20.6 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop20.9 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF hs5.6 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs10.7 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF (1 day) hs11.4 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop21.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop20.9 A Distributional Perspective on Reinforcement Learning
Atari 2600 Private Eye
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA86.0 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear684.3 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN1788.0 ± 5473.0Human-level control through deep reinforcement learning
2015-07-15Gorila2598.6 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN noop146.7 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN hs207.9 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel noop103.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop129.7 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs292.6 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs-575.5 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs1277.6 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior noop200.0 Prioritized Experience Replay
2016-01-06Prior hs670.7 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop286.7 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop206.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs194.4 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs206.9 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs421.1 Asynchronous Methods for Deep Reinforcement Learning
2016-08-22A3C-CTS99.32 Unifying Count-Based Exploration and Intrinsic Motivation
2017-03-03DQN-CTS206.0 Count-Based Exploration with Neural Density Models
2017-03-03DQN-PixelCNN8358.7 Count-Based Exploration with Neural Density Models
2017-03-10ES FF (1 hour) noop100.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop15095.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Q*Bert
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA960.3 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear613.5 The Arcade Learning Environment: An Evaluation Platform for General Agents
2013-12-19DQN best4500 Playing Atari with Deep Reinforcement Learning
2015-02-26Nature DQN10596.0 ± 3294.0Human-level control through deep reinforcement learning
2015-07-15Gorila7089.8 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-10MP-EB15805.0 Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models
2015-09-22DQN hs9271.5 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop13117.3 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs14175.8 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop15088.5 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel noop19220.3 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs11020.8 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs14063.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs9944.0 Prioritized Experience Replay
2016-01-06Prior noop16256.5 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop5236.8 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop18760.3 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs13752.3 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs15148.8 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs21307.5 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop147.5 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-06-25Sarsa-ε3895.3 Count-Based Exploration in Feature Space for Reinforcement Learning
2017-06-25Sarsa-φ-EB4111.8 Count-Based Exploration in Feature Space for Reinforcement Learning
2017-07-21C51 noop23784.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 River Raid
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA2650.0 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear1904.0 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN8316.0 ± 1049.0Human-level control through deep reinforcement learning
2015-07-15Gorila5310.3 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs4748.5 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop7377.6 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop14884.5 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs16569.4 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop21162.6 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs10838.4 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs16496.8 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs11807.2 Prioritized Experience Replay
2016-01-06Prior noop14522.3 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop12530.8 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop20607.6 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C LSTM hs6591.9 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF (1 day) hs10001.2 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs12201.8 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop5009.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop17322.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Road Runner
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA89.1 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear67.7 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN18257.0 ± 4268.0Human-level control through deep reinforcement learning
2015-07-15Gorila43079.8 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs35215.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop39544.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop44127.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs58549.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop69524.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs43156.0 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs54630.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs52264.0 Prioritized Experience Replay
2016-01-06Prior noop57608.0 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop47770.0 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop62151.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs31769.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs34216.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs73949.0 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop16590.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop55839.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Robotank
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA12.4 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear28.7 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN51.6 ± 4.0Human-level control through deep reinforcement learning
2015-07-15Gorila61.8 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs58.7 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop63.9 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs62.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop65.1 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel noop65.3 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08Prior+Duel hs24.7 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08DDQN (tuned) hs59.1 Deep Reinforcement Learning with Double Q-learning
2016-01-06Prior hs56.2 Prioritized Experience Replay
2016-01-06Prior noop62.6 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop64.3 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop27.5 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs2.3 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs2.6 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs32.8 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop11.9 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop52.3 A Distributional Perspective on Reinforcement Learning
Atari 2600 Seaquest
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA675.5 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear664.8 The Arcade Learning Environment: An Evaluation Platform for General Agents
2013-12-19DQN best1740 Playing Atari with Deep Reinforcement Learning
2015-02-26Nature DQN5286.0 ± 1310.0Human-level control through deep reinforcement learning
2015-07-15Gorila10145.9 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs4216.7 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop5860.6 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop16452.7 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs37361.6 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop50254.2 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08Prior+Duel hs1431.2 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08DDQN (tuned) hs14498.0 Deep Reinforcement Learning with Double Q-learning
2016-01-06Prior hs25463.7 Prioritized Experience Replay
2016-01-06Prior noop26357.8 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop10932.3 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop931.6 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C LSTM hs1326.1 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF (1 day) hs2300.2 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs2355.4 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop1390.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop266434.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Skiing
Date | Algorithm | Raw Score | Paper / Source
2015-09-22DQN noop-13062.3 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN hs-12142.1 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs-11928.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop-9021.8 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel noop-8857.4 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08Prior+Duel hs-18955.8 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08DDQN (tuned) hs-11490.4 Deep Reinforcement Learning with Double Q-learning
2016-01-06Prior hs-10169.1 Prioritized Experience Replay
2016-01-06Prior noop-9996.9 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop-13585.1 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop-19949.9 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C LSTM hs-14863.8 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF (1 day) hs-13700.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs-10911.1 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop-15442.5 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop-13901.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Solaris
Date | Algorithm | Raw Score | Paper / Source
2015-09-22DQN hs1295.4 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop3482.8 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs1768.4 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop2250.8 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop3067.8 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-12-08Prior+Duel hs280.6 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08DDQN (tuned) hs810.0 Deep Reinforcement Learning with Double Q-learning
2016-01-06Prior hs2272.8 Prioritized Experience Replay
2016-01-06Prior noop4309.0 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop4544.8 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop133.4 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs1884.8 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs1936.4 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs1956.0 Asynchronous Methods for Deep Reinforcement Learning
2016-08-22A3C-CTS2270.15 Unifying Count-Based Exploration and Intrinsic Motivation
2017-03-03DQN-CTS133.4 Count-Based Exploration with Neural Density Models
2017-03-03DQN-PixelCNN2863.6 Count-Based Exploration with Neural Density Models
2017-03-10ES FF (1 hour) noop2090.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop8342.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Space Invaders
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA267.9 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear250.1 The Arcade Learning Environment: An Evaluation Platform for General Agents
2013-12-19DQN best1075 Playing Atari with Deep Reinforcement Learning
2015-02-26Nature DQN1976.0 ± 893.0Human-level control through deep reinforcement learning
2015-07-15Gorila1183.3 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs1293.8 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop1692.3 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop2525.5 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs5993.1 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop6427.3 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs2628.7 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs8978.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior noop2865.8 Prioritized Experience Replay
2016-01-06Prior hs3912.1 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop2589.7 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop15311.5 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs2214.7 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs15730.5 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs23846.0 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop678.5 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop5747.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Star Gunner
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA9.4 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear1070.0 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN57997.0 ± 3152.0Human-level control through deep reinforcement learning
2015-07-15Gorila14919.2 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs52970.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop54282.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop60142.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel noop89238.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel hs90804.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs58365.0 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs127073.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs61582.0 Prioritized Experience Replay
2016-01-06Prior noop63302.0 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop589.0 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop125117.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs64393.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs138218.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs164766.0 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop1470.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop49095.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Surround
Date | Algorithm | Raw Score | Paper / Source
2015-09-22DQN hs-6.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop-5.6 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop-2.9 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs4.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop4.4 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08Prior+Duel hs-0.2 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08DDQN (tuned) hs1.9 Deep Reinforcement Learning with Double Q-learning
2016-01-06Prior hs5.9 Prioritized Experience Replay
2016-01-06Prior noop8.9 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop-2.5 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop1.2 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF hs-9.7 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF (1 day) hs-9.6 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs-8.3 Asynchronous Methods for Deep Reinforcement Learning
2017-07-21C51 noop6.8 A Distributional Perspective on Reinforcement Learning
Atari 2600 Tennis
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA0.0 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear-0.1 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN-2.5 ± 1.0Human-level control through deep reinforcement learning
2015-07-15Gorila-0.7 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs11.1 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop12.2 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop-22.8 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs4.4 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop5.1 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08Prior+Duel hs-13.2 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08DDQN (tuned) hs-7.8 Deep Reinforcement Learning with Double Q-learning
2016-01-06Prior hs-5.3 Prioritized Experience Replay
2016-01-06Prior noop0.0 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop12.1 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop0.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs-10.2 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs-6.4 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs-6.3 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop-4.5 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop23.1 A Distributional Perspective on Reinforcement Learning
Atari 2600 Time Pilot
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA24.9 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear3741.0 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN5947.0 ± 1600.0Human-level control through deep reinforcement learning
2015-07-15Gorila8267.8 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs4786.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop4870.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs6601.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop8339.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel noop11666.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08Prior+Duel hs4871.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2015-12-08DDQN (tuned) hs6608.0 Deep Reinforcement Learning with Double Q-learning
2016-01-06Prior hs5963.0 Prioritized Experience Replay
2016-01-06Prior noop9197.0 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop4870.0 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop7553.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs5825.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs12679.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs27202.0 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop4970.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop8329.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Tutankham
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA98.2 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear114.3 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN186.7 ± 41.0Human-level control through deep reinforcement learning
2015-07-15Gorila118.5 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs45.6 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop68.1 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs48.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop211.4 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop218.4 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-12-08DDQN (tuned) hs92.2 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs108.6 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs56.9 Prioritized Experience Replay
2016-01-06Prior noop204.6 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop183.9 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop245.9 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs26.1 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs144.2 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs156.3 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop130.3 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop280.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Up and Down
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA2449.0 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear3533.0 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN8456.0 ± 3162.0Human-level control through deep reinforcement learning
2015-07-15Gorila8747.7 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs8038.5 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop9989.9 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop22972.2 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs24759.2 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop44939.6 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs19086.9 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs22681.3 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs12157.4 Prioritized Experience Replay
2016-01-06Prior noop16154.1 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop22474.4 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop33879.1 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs54525.4 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs74705.7 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs105728.7 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop67974.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop15612.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Venture
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA0.6 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear66.0 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN380.0 ± 238.0Human-level control through deep reinforcement learning
2015-07-15Gorila523.4 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-10MP-EB0.0 Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models
2015-09-22DQN hs136.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop163.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop98.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs200.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop497.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs21.0 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs29.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior noop54.0 Prioritized Experience Replay
2016-01-06Prior hs94.0 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop1172.0 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop48.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs19.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs23.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs25.0 Asynchronous Methods for Deep Reinforcement Learning
2016-08-22A3C-CTS0.0 Unifying Count-Based Exploration and Intrinsic Motivation
2016-12-13TRPO-hash445.0 Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
2017-03-03DQN-CTS48.0 Count-Based Exploration with Neural Density Models
2017-03-03DQN-PixelCNN82.2 Count-Based Exploration with Neural Density Models
2017-03-10ES FF (1 hour) noop760.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-06-25Sarsa-ε0.0 Count-Based Exploration in Feature Space for Reinforcement Learning
2017-06-25Sarsa-φ-EB1169.2 Count-Based Exploration in Feature Space for Reinforcement Learning
2017-07-21C51 noop1520.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Video Pinball
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA19761.0 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear16871.0 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN42684.0 ± 16287.0Human-level control through deep reinforcement learning
2015-07-15Gorila112093.4 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs154414.1 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop196760.4 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel noop98209.5 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel hs110976.2 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop309941.9 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-12-08DDQN (tuned) hs367823.7 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs447408.6 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior noop282007.3 Prioritized Experience Replay
2016-01-06Prior hs295972.8 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop56287.0 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop479197.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs185852.6 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs331628.1 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs470310.5 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop22834.8 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop949604.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Wizard of Wor
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA36.9 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear1981.0 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN3393.0 ± 2019.0Human-level control through deep reinforcement learning
2015-07-15Gorila10431.0 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs1609.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop2704.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20Duel hs7054.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20DDQN (tuned) noop7492.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel noop7855.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs6201.0 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs10471.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior noop4802.0 Prioritized Experience Replay
2016-01-06Prior hs5727.0 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop483.0 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop12352.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs5278.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs17244.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs18082.0 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop3480.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop9300.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Yars Revenge
Date | Algorithm | Raw Score | Paper / Source
2015-09-22DQN hs4577.5 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop18098.9 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop11712.6 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs25976.5 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop49622.1 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs6270.6 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs58145.9 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs4687.4 Prioritized Experience Replay
2016-01-06Prior noop11357.0 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop21409.5 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop69618.1 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C LSTM hs5615.5 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs7157.5 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF (1 day) hs7270.8 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop16401.7 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop35050.0 A Distributional Perspective on Reinforcement Learning
Atari 2600 Zaxxon
Date | Algorithm | Raw Score | Paper / Source
2012-07-14SARSA21.4 Investigating Contingency Awareness Using Atari 2600 Games
2012-07-19Best linear3365.0 The Arcade Learning Environment: An Evaluation Platform for General Agents
2015-02-26Nature DQN4977.0 ± 1235.0Human-level control through deep reinforcement learning
2015-07-15Gorila6159.4 Massively Parallel Methods for Deep Reinforcement Learning
2015-09-22DQN hs4412.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-09-22DQN noop5363.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Human-level control through deep reinforcement learning)
2015-11-20DDQN (tuned) noop10163.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Deep Reinforcement Learning with Double Q-learning)
2015-11-20Duel hs10164.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-11-20Duel noop12944.0 Dueling Network Architectures for Deep Reinforcement Learning
2015-12-08DDQN (tuned) hs8593.0 Deep Reinforcement Learning with Double Q-learning
2015-12-08Prior+Duel hs11320.0 Deep Reinforcement Learning with Double Q-learning (algorithm from Prioritized Experience Replay)
2016-01-06Prior hs9474.0 Prioritized Experience Replay
2016-01-06Prior noop10469.0 Prioritized Experience Replay
2016-02-24DDQN+Pop-Art noop14402.0 Learning functions across many orders of magnitudes
2016-04-05Prior+Duel noop13886.0 Dueling Network Architectures for Deep Reinforcement Learning (algorithm from Prioritized Experience Replay)
2016-04-10A3C FF (1 day) hs2659.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C LSTM hs23519.0 Asynchronous Methods for Deep Reinforcement Learning
2016-04-10A3C FF hs24622.0 Asynchronous Methods for Deep Reinforcement Learning
2017-03-10ES FF (1 hour) noop6380.0 Evolution Strategies as a Scalable Alternative to Reinforcement Learning
2017-07-21C51 noop10513.0 A Distributional Perspective on Reinforcement Learning

Speech recognition

In [13]:
from data.acoustics import *
from data.wer import *
speech_recognition.graphs()
In [14]:
HTML(speech_recognition.tables())
Out[14]:
Word error rate on the Switchboard portion of the Hub5'00 evaluation set
Date | Algorithm | % error | Paper / Source
2011-08-31CD-DNN16.1 https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/CD-DNN-HMM-SWB-Interspeech2011-Pub.pdf
2012-04-27DNN-HMM18.5 https://pdfs.semanticscholar.org/ce25/00257fda92338ec0a117bea1dbc0381d7c73.pdf?_ga=1.195375081.452266805.1483390947
2013-08-23CNN11.5 http://www.cs.toronto.edu/~asamir/papers/icassp13_cnn.pdf
2013-08-23HMM-DNN +sMBR12.6 http://www.danielpovey.com/files/2013_interspeech_dnn.pdf
2013-08-25DNN sMBR12.6 http://www.danielpovey.com/files/2013_interspeech_dnn.pdf
2013-08-25DNN MMI12.9 http://www.danielpovey.com/files/2013_interspeech_dnn.pdf
2013-08-25DNN MPE12.9 http://www.danielpovey.com/files/2013_interspeech_dnn.pdf
2013-08-25DNN BMMI12.9 http://www.danielpovey.com/files/2013_interspeech_dnn.pdf
2014-06-23DNN + Dropout15 Building DNN Acoustic Models for Large Vocabulary Speech Recognition
2014-06-30DNN16 Increasing Deep Neural Network Acoustic Model Size for Large Vocabulary Continuous Speech Recognition
2014-08-23CNN on MFSC/fbanks + 1 non-conv layer for FMLLR/I-Vectors concatenated in a DNN10.4 http://www.mirlab.org/conference_papers/International_Conference/ICASSP%202014/papers/p5609-soltau.pdf
2014-12-07Deep Speech + FSH12.6 Deep Speech: Scaling up end-to-end speech recognition
2014-12-07Deep Speech20 Deep Speech: Scaling up end-to-end speech recognition
2014-12-23CNN + Bi-RNN + CTC (speech to letters), 25.9% WER if trained only on SWB12.6 Deep Speech: Scaling up end-to-end speech recognition
2015-05-21IBM 20158.0 The IBM 2015 English Conversational Telephone Speech Recognition System
2015-08-23HMM-TDNN + iVectors11 http://speak.clsp.jhu.edu/uploads/publications/papers/1048_pdf.pdf
2015-08-23HMM-TDNN + pNorm + speed up/down speech12.9 http://www.danielpovey.com/files/2015_interspeech_augmentation.pdf
2015-09-23Deep CNN (10 conv, 4 FC layers), multi-scale feature maps12.2 Very Deep Multilingual Convolutional Neural Networks for LVCSR
2016-04-27IBM 20166.9 The IBM 2016 English Conversational Telephone Speech Recognition System
2016-06-23RNN + VGG + LSTM acoustic model trained on SWB+Fisher+CH, N-gram + "model M" + NNLM language model6.6 The IBM 2016 English Conversational Telephone Speech Recognition System
2016-09-23VGG/Resnet/LACE/BiLSTM acoustic model trained on SWB+Fisher+CH, N-gram + RNNLM language model trained on Switchboard+Fisher+Gigaword+Broadcast6.3 The Microsoft 2016 Conversational Speech Recognition System
2016-09-23HMM-BLSTM trained with MMI + data augmentation (speed) + iVectors + 3 regularizations + Fisher8.5 http://www.danielpovey.com/files/2016_interspeech_mmi.pdf
2016-09-23HMM-TDNN trained with MMI + data augmentation (speed) + iVectors + 3 regularizations + Fisher (10% / 15.1% respectively trained on SWBD only)9.2 http://www.danielpovey.com/files/2016_interspeech_mmi.pdf
2016-10-17CNN-LSTM6.6 Achieving Human Parity in Conversational Speech Recognition
2016-12-17Microsoft 2016b5.8 Achieving Human Parity in Conversational Speech Recognition
2017-02-17Microsoft 20166.2 The Microsoft 2016 Conversational Speech Recognition System
2017-02-17RNNLM6.9 The Microsoft 2016 Conversational Speech Recognition System
2017-03-23ResNet + BiLSTMs acoustic model, with 40d FMLLR + i-Vector inputs, trained on SWB+Fisher+CH, n-gram + model-M + LSTM + Strided (à trous) convs-based LM trained on Switchboard+Fisher+Gigaword+Broadcast5.5 English Conversational Telephone Speech Recognition by Humans and Machines
chime clean
Date | Algorithm | % error | Paper / Source
2014-12-23CNN + Bi-RNN + CTC (speech to letters)6.3 Deep Speech: Scaling up end-to-end speech recognition
2015-12-239-layer model w/ 2 layers of 2D-invariant convolution & 7 recurrent layers, w/ 68M parameters3.34 Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
chime real
Date | Algorithm | % error | Paper / Source
2014-12-23CNN + Bi-RNN + CTC (speech to letters)67.94 Deep Speech: Scaling up end-to-end speech recognition
2015-12-239-layer model w/ 2 layers of 2D-invariant convolution & 7 recurrent layers, w/ 68M parameters21.79 Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
fisher WER
Date | Algorithm | % error | Paper / Source
2016-09-23HMM-BLSTM trained with MMI + data augmentation (speed) + iVectors + 3 regularizations + SWBD9.6 http://www.danielpovey.com/files/2016_interspeech_mmi.pdf
2016-09-23HMM-TDNN trained with MMI + data augmentation (speed) + iVectors + 3 regularizations + SWBD9.8 http://www.danielpovey.com/files/2016_interspeech_mmi.pdf
librispeech WER testclean
Date | Algorithm | % error | Paper / Source
2015-08-23HMM-TDNN + iVectors4.83 http://speak.clsp.jhu.edu/uploads/publications/papers/1048_pdf.pdf
2015-08-23HMM-DNN + pNorm*5.51 http://www.danielpovey.com/files/2015_icassp_librispeech.pdf
2015-08-23HMM-(SAT)GMM8.01 Kaldi ASR
2015-12-239-layer model w/ 2 layers of 2D-invariant convolution & 7 recurrent layers, w/ 68M parameters trained on 11940h5.33 Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
2016-09-23HMM-TDNN trained with MMI + data augmentation (speed) + iVectors + 3 regularizations4.28 http://www.danielpovey.com/files/2016_interspeech_mmi.pdf
librispeech WER testother
Date | Algorithm | % error | Paper / Source
2015-08-23TDNN + pNorm + speed up/down speech12.51 http://www.danielpovey.com/files/2015_interspeech_augmentation.pdf
2015-08-23HMM-DNN + pNorm*13.97 http://www.danielpovey.com/files/2015_icassp_librispeech.pdf
2015-08-23HMM-(SAT)GMM22.49 Kaldi ASR
2015-12-239-layer model w/ 2 layers of 2D-invariant convolution & 7 recurrent layers, w/ 68M parameters trained on 11940h13.25 Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
swb_hub_500 WER fullSWBCH
Date | Algorithm | % error | Paper / Source
2013-08-23HMM-DNN +sMBR18.4 http://www.danielpovey.com/files/2013_interspeech_dnn.pdf
2014-06-23DNN + Dropout19.1 Building DNN Acoustic Models for Large Vocabulary Speech Recognition
2014-12-23CNN + Bi-RNN + CTC (speech to letters), 25.9% WER if trained only on SWB16 Deep Speech: Scaling up end-to-end speech recognition
2015-08-23HMM-TDNN + iVectors17.1 http://speak.clsp.jhu.edu/uploads/publications/papers/1048_pdf.pdf
2015-08-23HMM-TDNN + pNorm + speed up/down speech19.3 http://www.danielpovey.com/files/2015_interspeech_augmentation.pdf
2016-06-23RNN + VGG + LSTM acoustic model trained on SWB+Fisher+CH, N-gram + "model M" + NNLM language model12.2 The IBM 2016 English Conversational Telephone Speech Recognition System
2016-09-23VGG/Resnet/LACE/BiLSTM acoustic model trained on SWB+Fisher+CH, N-gram + RNNLM language model trained on Switchboard+Fisher+Gigaword+Broadcast11.9 The Microsoft 2016 Conversational Speech Recognition System
2016-09-23HMM-BLSTM trained with MMI + data augmentation (speed) + iVectors + 3 regularizations + Fisher13 http://www.danielpovey.com/files/2016_interspeech_mmi.pdf
2016-09-23HMM-TDNN trained with MMI + data augmentation (speed) + iVectors + 3 regularizations + Fisher (10% / 15.1% respectively trained on SWBD only)13.3 http://www.danielpovey.com/files/2016_interspeech_mmi.pdf
2017-03-23ResNet + BiLSTMs acoustic model, with 40d FMLLR + i-Vector inputs, trained on SWB+Fisher+CH, n-gram + model-M + LSTM + Strided (à trous) convs-based LM trained on Switchboard+Fisher+Gigaword+Broadcast10.3 English Conversational Telephone Speech Recognition by Humans and Machines
timit PER
Date | Algorithm | % error | Paper / Source
2009-08-23(first, modern) HMM-DBN23 http://www.cs.toronto.edu/~asamir/papers/NIPS09.pdf
2013-03-23Bi-LSTM + skip connections w/ CTC17.7 Speech Recognition with Deep Recurrent Neural Networks
2014-08-23CNN in time and frequency + dropout, 17.6% w/o dropout16.7 http://www.inf.u-szeged.hu/~tothl/pubs/ICASSP2014.pdf
2015-06-23Bi-RNN + Attention17.6 Attention-Based Models for Speech Recognition
2015-09-23Hierarchical maxout CNN + Dropout16.5
2016-03-23RNN-CRF on 24(x3) MFSC17.3 Segmental Recurrent Neural Networks for End-to-end Speech Recognition
wsj WER eval92
Date | Algorithm | % error | Paper / Source
2014-08-23CNN over RAW speech (wav)5.6 http://infoscience.epfl.ch/record/203464/files/Palaz_Idiap-RR-18-2014.pdf
2015-04-23TC-DNN-BLSTM-DNN3.47 Deep Recurrent Neural Networks for Acoustic Modelling
2015-08-23test-set on open vocabulary (i.e. harder), model = HMM-DNN + pNorm*3.63 http://www.danielpovey.com/files/2015_icassp_librispeech.pdf
2015-12-239-layer model w/ 2 layers of 2D-invariant convolution & 7 recurrent layers, w/ 68M parameters3.6 Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
wsj WER eval93
Date | Algorithm | % error | Paper / Source
2015-08-23test-set on open vocabulary (i.e. harder), model = HMM-DNN + pNorm*5.66 http://www.danielpovey.com/files/2015_icassp_librispeech.pdf
2015-12-239-layer model w/ 2 layers of 2D-invariant convolution & 7 recurrent layers, w/ 68M parameters4.98 Deep Speech 2: End-to-End Speech Recognition in English and Mandarin

Music Information Retrieval

Instrumentals recognition

Recognition of instrumental tracks in a representative musical dataset, for the purpose of generating instrumental playlists.

  • Experiments are run on the SATIN database from Bayle et al. (2017).
  • The ratio of instrumentals to songs in SATIN (11%/89%) is representative of real, uneven musical databases.
  • Human performance is around 99% correct instrumental detection, because there are known examples that even humans can confuse. (A toy sketch of the precision metric used here follows this list.)
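As a minimal sketch of the metric reported below (precision of the predicted-instrumental class on an uneven dataset like SATIN), here is a toy computation; the label names and counts are illustrative assumptions, not SATIN data.
In [ ]:
def instrumental_precision(y_true, y_pred, positive="instrumental"):
    """Precision of the positive (instrumental) class: of all tracks
    predicted to be instrumental, the fraction that actually are."""
    predicted_positive = [(t, p) for t, p in zip(y_true, y_pred) if p == positive]
    if not predicted_positive:
        return 0.0
    true_positive = sum(1 for t, _ in predicted_positive if t == positive)
    return true_positive / len(predicted_positive)

# Toy, heavily imbalanced example (~11% instrumentals, as in SATIN):
y_true = ["song"] * 89 + ["instrumental"] * 11
y_pred = ["song"] * 85 + ["instrumental"] * 4 + ["instrumental"] * 8 + ["song"] * 3
instrumental_precision(y_true, y_pred)   # 8 true positives / 12 predicted positives ≈ 0.667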
In [15]:
from data.acoustics import *
instrumentals_recognition.graphs()
In [16]:
HTML(instrumentals_recognition.tables())
Out[16]:
Precision of Instrumentals detection reached when tested on SATIN (Bayle et al. 2017)
Date | Algorithm | % correct | Paper / Source
2013-10-17Ghosal et al.17.3 A hierarchical approach for speech-instrumental-song classification
2014-09-30SVMBFF12.5 On Evaluation Validity in Music Autotagging
2014-09-30VQMM29.8 On Evaluation Validity in Music Autotagging
2017-06-23Bayle et al.82.5 Revisiting Autotagging Toward Faultless Instrumental Playlists Generation

Image Generation

In [17]:
from data.generative import *
image_generation_metric.graph()
In [18]:
HTML(image_generation_metric.table())
Out[18]:
Generative models of CIFAR-10 images
Date | Algorithm | Model Entropy | Paper / Source
2014-10-30NICE4.48 NICE: Non-linear Independent Components Estimation
2015-02-16DRAW4.13 DRAW: A Recurrent Neural Network For Image Generation
2016-05-27Real NVP3.49 Density estimation using Real NVP
2016-05-27PixelRNN3.0 Density estimation using Real NVP
2016-06-15VAE with IAF3.11 Improved Variational Inference with Inverse Autoregressive Flow
2016-11-04PixelCNN++2.92 Forum | OpenReview (source code)

Language Modelling and Comprehension

Text compression is one way to see how well machine learning systems are able to model human language. Shannon's classic 1951 paper obtained an experimental measure of human text compression performance of 0.6 - 1.3 bits per character: humans know, better than classic algorithms, what word is likely to come next in a piece of writing. More recent work (Moradi 1998, Cover 1978) provides estimates that are text-relative and in the range of 1.3 bits per character (and for some texts, much higher).
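As a rough illustration of what a bits-per-character figure means (a toy unigram model, not the scoring code used by any of the benchmarks below), the sketch computes the average number of bits needed to encode each character of a text under a simple character-frequency model:
In [ ]:
import math
from collections import Counter

def bits_per_character(text, char_probs):
    """Average bits needed to encode each character of `text` under a
    probability model `char_probs` mapping character -> probability."""
    total_bits = sum(-math.log2(char_probs.get(ch, 1e-6)) for ch in text)
    return total_bits / len(text)

# Toy "model": unigram character frequencies estimated from the text itself.
sample = "the quick brown fox jumps over the lazy dog " * 20
unigram = {ch: n / len(sample) for ch, n in Counter(sample).items()}
bits_per_character(sample, unigram)   # roughly 4 bits/char; the strongest models below reach ~1.3 or less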

In [19]:
from data.language import *
ptperplexity.graph()
In [20]:
HTML(ptperplexity.table())
In [21]:
hp_compression.graph()
In [22]:
HTML(hp_compression.table())
Out[22]:
Hutter Prize (bits per character to encode English text)
Date | Algorithm | Model Entropy | Paper / Source
2011-06-28RNN1.6 http://www.cs.utoronto.ca/~ilya/pubs/2011/LANG-RNN.pdf
2013-08-04RNN, LSTM1.67 Generating Sequences With Recurrent Neural Networks
2015-02-15Gated Feedback RNN1.58 Gated Feedback Recurrent Neural Networks
2015-07-06Grid LSTM1.47 Grid Long Short-Term Memory
2016-07-12Recurrent Highway Networks1.32 Recurrent Highway Networks
2016-08-11RHN1.42 Recurrent Highway Networks
2016-09-06 Hierarchical Multiscale RNN1.32 Hierarchical Multiscale Recurrent Neural Networks
2016-09-27Hypernetworks1.39 HyperNetworks
2016-10-19Surprisal-Driven Feedback RNN1.37 Surprisal-Driven Feedback in Recurrent Networks
2016-10-31Surprisal-Driven Zoneout1.313 https://pdfs.semanticscholar.org/e9bc/83f9ff502bec9cffb750468f76fdfcf5dd05.pdf
2017-03-03Large RHN depth 101.27 Recurrent Highway Networks

LAMBADA is a challenging language modelling dataset in which the model has to predict the final word of a passage: the word is guessable by humans from the broader discourse, but not from the final sentence alone. For instance, given a context like this:

He shook his head, took a step back and held his hands up as he tried to smile without losing a cigarette. “Yes you can,” Julia said in a reassuring voice. “I’ve already focused on my friend. You just have to click the shutter, on top, here.”

And a target sentence:

He nodded sheepishly, threw his cigarette away and took the _________.

The task is to guess the target word "camera".
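
Scoring on LAMBADA is simply top-1 accuracy: the fraction of passages for which the model's single best guess for the final word matches the target. A minimal sketch of that scoring (illustrative only, not the notebook's evaluation code):

def lambada_accuracy(predicted_words, target_words):
    # Top-1 accuracy over the final word of each test passage.
    correct = sum(1 for p, t in zip(predicted_words, target_words)
                  if p.strip().lower() == t.strip().lower())
    return correct / float(len(target_words))

print(lambada_accuracy(["camera", "dog"], ["camera", "cat"]))  # 0.5
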

In [23]:
lambada.graph()
In [24]:
# Also consider adding the Microsoft Sentence Completion Challenge; see eg http://www.fit.vutbr.cz/~imikolov/rnnlm/thesis.pdf table 7.4

Translation

Translation is a tricky problem to score, since ultimately it is human comprehension or judgement that determines whether a translation is accurate. Google, for instance, uses human evaluation to determine when its algorithms have improved. But that kind of measurement is expensive and difficult to replicate accurately, so automated scoring metrics are also widely used in the field. Perhaps the most common of these is the BLEU score, computed against corpora that have extensive professional human translations; BLEU forms the basis for the measurements included here.
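
For reference, corpus-level BLEU can be computed with NLTK (an assumed extra dependency, not one of this notebook's requirements); a minimal sketch on a single tokenised sentence pair:

from nltk.translate.bleu_score import corpus_bleu

# Each hypothesis gets a list of one or more tokenised reference translations.
references = [[["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]]]
hypotheses = [["the", "quick", "brown", "fox", "jumped", "over", "the", "lazy", "dog"]]

# corpus_bleu returns a value in [0, 1]; WMT results are conventionally reported as 100x that.
print(100 * corpus_bleu(references, hypotheses))  # roughly 60 for this near-miss
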

In [25]:
en_fr_bleu.graph()
en_de_bleu.graph()
en_ro_bleu.graph()
In [26]:
HTML(translation.tables())
Out[26]:
news-test-2014 En-De BLEU
DateAlgorithmBLEUPaper / Source
2014-02-24PBMT20.7 Edinburgh’s phrase-based machine translation systems for WMT-14
2016-07-14NSE-NSE17.93 Neural Semantic Encoders
2016-07-23Deep-Att20.7 Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
2016-09-26GNMT+RL26.3 Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
2017-01-23MoE 204826.03 Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
2017-05-12ConvS2S ensemble26.36 Convolutional Sequence to Sequence Learning
news-test-2014 En-Fr BLEU
DateAlgorithmBLEUPaper / Source
2014-02-24PBMT37 Edinburgh’s phrase-based machine translation systems for WMT-14
2014-09-01RNN-search50*36.15 Neural Machine Translation by Jointly Learning to Align and Translate
2014-09-10LSTM34.81 Sequence to Sequence Learning with Neural Networks (algorithm from http://www.bioinf.jku.at/publications/older/2604.pdf)
2014-09-10SMT+LSTM536.5 Sequence to Sequence Learning with Neural Networks
2014-10-30LSTM6 + PosUnk37.5 Addressing the Rare Word Problem in Neural Machine Translation
2016-07-23Deep-Att + PosUnk39.2 Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
2016-09-26GNMT+RL39.92 Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
2017-01-23MoE 204840.56 Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
2017-05-12ConvS2S ensemble41.29 Convolutional Sequence to Sequence Learning
news-test-2016 En-Ro BLEU
DateAlgorithmBLEUPaper / Source
2016-07-11GRU BPE90k28.9 The QT21/HimL Combined Machine Translation System
2017-05-12ConvS2S BPE40k29.88 Convolutional Sequence to Sequence Learning

Conversation: Chatbots & Conversational Agents

Conversation is the classic AI progress measure! There is the Turing test, which involves a human judge trying to tell the difference between a human and a computer that they are chatting to online, and there are also easier variants of the Turing test in which the judge limits themselves to more casual, less probing conversation in various ways.

The Loebner Prize is an annual event that runs a somewhat easier version of the test. Since 2014, the event has also been giving standard-form tests to its entrants and scoring the results (each question gets a plausible/semi-plausible/implausible rating). Although this metric is not stable, because the test questions have to change every year, the scores are somewhat indicative of progress. Ideally the event might apply each year's test questions to the most successful entrants from prior years. Here is an example from 2016:

In [27]:
loebner.graph()
In [28]:
HTML(loebner.table())
Out[28]:
The Loebner Prize scored selection answers
DateAlgorithm% correctPaper / Source
2014-11-15The Professor 201476.7 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2014-11-15Tutor 201480.83 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2014-11-15Uberbot 201481.67 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2014-11-15Izar 201488.3 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2014-11-15Misuku 201488.3 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2014-11-15Rose 201489.2 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2015-09-19Rose 201575 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2015-09-19Izar 201576.7 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2015-09-19Lisa 201580 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2015-09-19Mitsuku 201583.3 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2016-09-17Katie 201676.7 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2016-09-17Rose 201677.5 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2016-09-17Arckon 201677.5 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2016-09-17Tutor 201678.3 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
2016-09-17Mitsuku 201690 AISB - The Society for the Study of Artificial Intelligence and Simulation of Behaviour - Loebner Prize
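
The percentage scores in the table above are derived from the judges' per-question ratings. The exact AISB weighting isn't documented here; the sketch below assumes a simple plausible / semi-plausible / implausible weighting of 1 / 0.5 / 0, which is an assumption about the rubric rather than a documented fact:

WEIGHTS = {"plausible": 1.0, "semi-plausible": 0.5, "implausible": 0.0}

def selection_score(ratings):
    # Percentage score for a list of per-question ratings, under the assumed weighting.
    return 100.0 * sum(WEIGHTS[r] for r in ratings) / len(ratings)

print(selection_score(["plausible"] * 16 + ["semi-plausible"] * 2 + ["implausible"] * 2))  # 85.0
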

Reading Comprehension

The Facebook bAbi 20 QA dataset is an example of a basic reading comprehension task. It has been solved with large training datasets (10,000 examples per task) but not with a smaller training dataset of 1,000 examples for each of the 20 categories of tasks. It involves learning to answer simple reasoning questions like these:

There are numerous other reading comprehension metrics that are in various ways harder than bAbi 20 QA. They are generally not solved, though progress is fairly promising.
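
On bAbi specifically, the usual convention (following the original paper) is that a task is "passed" at 95% accuracy or better, and the benchmark as a whole is only considered solved when all 20 tasks are passed. A small sketch of that aggregation, for illustration only:

def babi_summary(per_task_accuracy, threshold=0.95):
    # per_task_accuracy: dict mapping task id (1..20) to accuracy in [0, 1]
    passed = [t for t, acc in per_task_accuracy.items() if acc >= threshold]
    return {"mean_accuracy": sum(per_task_accuracy.values()) / len(per_task_accuracy),
            "tasks_passed": len(passed),
            "solved": len(passed) == len(per_task_accuracy)}

# A single task below 95% (e.g. path finding) is enough to leave the benchmark unsolved.
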

In [29]:
for m in reading_comprehension.metrics: m.graphed = False
plot = bAbi1k.graph(keep=True, title="bAbi 20 QA reading comprehension", llabel="1k training examples")
bAbi10k.target = None
bAbi10k.graph(reuse=plot, llabel="10k training examples", fcol="#00a0a0", pcol="#a000a0")
bAbi10k.target = bAbi1k.target

Another reading comprehension dataset that has received significant recent attention is the Stanford Question Answering Dataset (SQuAD). The literature reports both F1 scores and exact match (EM) scores, though these are closely correlated; both are graphed below.
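
Roughly speaking, exact match (EM) requires the predicted answer string to equal the gold answer, while F1 rewards partial token overlap. A simplified sketch of both (the official SQuAD script additionally strips articles and punctuation before comparing, which is omitted here):

from collections import Counter

def exact_match(prediction, gold):
    return float(prediction.strip().lower() == gold.strip().lower())

def answer_f1(prediction, gold):
    pred_tokens, gold_tokens = prediction.lower().split(), gold.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / float(len(pred_tokens)), overlap / float(len(gold_tokens))
    return 2 * precision * recall / (precision + recall)

print(exact_match("the camera", "camera"), answer_f1("the camera", "camera"))  # 0.0 0.666...
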

In [30]:
from data.language import *
plot = squad_f1.graph(keep=True, title="Stanford Question Answering Dataset (SQuAD)", tcol="g", llabel="F1 score")
squad_em.graph(reuse=plot, llabel="Exact Match (EM)", fcol="#00a0a0", pcol="#a000a0", tcol="#00a0a0")
In [31]:
for m in reading_comprehension.metrics:
    if not m.graphed: m.graph()
In [32]:
HTML(reading_comprehension.tables())
Out[32]:
CNN Comprehension test
DateAlgorithm% correctPaper / Source
2015-06-10Attentive reader63.0 Teaching Machines to Read and Comprehend
2015-06-10Impatient reader63.8 Teaching Machines to Read and Comprehend
2016-03-04AS reader (greedy)74.8 Text Understanding with the Attention Sum Reader Network
2016-03-04AS reader (avg)75.4 Text Understanding with the Attention Sum Reader Network
2016-06-05GA reader77.4 Gated-Attention Readers for Text Comprehension
2016-06-07EpiReader74.0 Natural Language Comprehension with the EpiReader
2016-06-07AIA75.7 Iterative Alternating Neural Attention for Machine Reading
2016-08-04AoA reader74.4 Attention-over-Attention Neural Networks for Reading Comprehension
2016-08-08Attentive+relabling+ensemble77.6 A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task
2016-09-17ReasoNet74.7 ReasoNet: Learning to Stop Reading in Machine Comprehension
2016-11-09AIA76.1 Iterative Alternating Neural Attention for Machine Reading
2016-12-01GA update L(w)77.9 Gated-Attention Readers for Text Comprehension
2017-03-07GA+MAGE (32)78.6 Linguistic Knowledge as Memory for Recurrent Neural Networks
Daily Mail Comprehension test
DateAlgorithm% correctPaper / Source
2015-06-10Impatient reader68.0 Teaching Machines to Read and Comprehend
2015-06-10Attentive reader69.0 Teaching Machines to Read and Comprehend
2016-03-04AS reader (avg)77.1 Text Understanding with the Attention Sum Reader Network
2016-03-04AS reader (greedy)77.7 Text Understanding with the Attention Sum Reader Network
2016-06-05GA reader78.1 Gated-Attention Readers for Text Comprehension
2016-08-08Attentive+relabling+ensemble79.2 A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task
2016-09-17ReasoNet76.6 ReasoNet: Learning to Stop Reading in Machine Comprehension
2016-12-01GA update L(w)80.9 Gated-Attention Readers for Text Comprehension
Reading comprehension MCTest-160-all
DateAlgorithm% correctPaper / Source
2013-10-01SW+D+RTE69.16 MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text
2015-07-26Narasimhan-model373.27 Machine Comprehension with Discourse Relations
2015-07-26Wang-et-al75.27 A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data
2016-03-29Parallel-Hierarchical74.58 A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data
Reading comprehension MCTest-500-all
DateAlgorithm% correctPaper / Source
2013-10-01SW+D+RTE63.33 MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text
2015-07-26Narasimhan-model363.75 Machine Comprehension with Discourse Relations
2015-07-26LSSVM67.83 Learning Answer-Entailing Structures for Machine Comprehension
2015-07-26Wang-et-al69.94 A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data
2016-03-29Parallel-Hierarchical71.0 A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data
Stanford Question Answering Dataset EM test
DateAlgorithmScorePaper / Source
2016-11-04Dynamic Coattention Networks (single model)66.233 Dynamic Coattention Networks For Question Answering
2016-11-04Dynamic Coattention Networks (ensemble)71.625 Dynamic Coattention Networks For Question Answering
2016-11-07Match-LSTM+Ans-Ptr67.901 Machine Comprehension Using Match-LSTM and Answer Pointer
2016-11-29BiDAF (single model)68.478 Bidirectional Attention Flow for Machine Comprehension
2016-12-13MPM (single model)70.387 Multi-Perspective Context Matching for Machine Comprehension
2016-12-13MPM (ensemble)73.765 Multi-Perspective Context Matching for Machine Comprehension
2016-12-29FastQA68.436 Making Neural QA as Simple as Possible but not Simpler
2016-12-29FastQAExt70.849 Making Neural QA as Simple as Possible but not Simpler
2017-02-24BiDAF (ensemble)73.744 Bidirectional Attention Flow for Machine Comprehension
2017-03-08r-net (single model)74.614 https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf
2017-03-08r-net (ensemble)76.922 https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf
2017-03-31Document Reader (single model)70.733 Reading Wikipedia to Answer Open-Domain Questions
2017-04-20SEDT+BiDAF (single model)68.478 Structural Embedding of Syntactic Trees for Machine Comprehension
2017-04-20SEDT+BiDAF (ensemble)73.723 Structural Embedding of Syntactic Trees for Machine Comprehension
2017-04-24Ruminating Reader (single model)70.639 Ruminating Reader: Reasoning with Gated Multi-Hop Attention
2017-05-08Mnemonic reader (single model)69.863 Mnemonic Reader for Machine Comprehension
2017-05-08Mnemonic reader (ensemble)73.754 Mnemonic Reader for Machine Comprehension
2017-05-31jNet (single model)70.607 Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering
2017-05-31RaSoR (single model)70.849 Learning Recurrent Span Representations for Extractive Question Answering
2017-05-31jNet (ensemble)73.01 Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering
2017-06-20ReasoNet ensemble73.4 ReasoNet: Learning to Stop Reading in Machine Comprehension
2017-07-28MEMEN75.37 MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension
2017-08-16DCN+ (ensemble)78.706 The Stanford Question Answering Dataset
2017-08-21RMR (ensemble)77.678 Mnemonic Reader for Machine Comprehension
2017-09-20AIR-FusionNet (ensemble)78.842 The Stanford Question Answering Dataset
Stanford Question Answering Dataset F1 test
DateAlgorithmScorePaper / Source
2016-11-04Dynamic Coattention Networks (single model)75.896 Dynamic Coattention Networks For Question Answering
2016-11-04Dynamic Coattention Networks (ensemble)80.383 Dynamic Coattention Networks For Question Answering
2016-11-07Match-LSTM+Ans-Ptr77.022 Machine Comprehension Using Match-LSTM and Answer Pointer
2016-11-29BiDAF (single model)77.971 Bidirectional Attention Flow for Machine Comprehension
2016-12-13MPM (single model)78.784 Multi-Perspective Context Matching for Machine Comprehension
2016-12-13MPM (ensemble)81.257 Multi-Perspective Context Matching for Machine Comprehension
2016-12-29FastQA77.07 Making Neural QA as Simple as Possible but not Simpler
2016-12-29FastQAExt78.857 Making Neural QA as Simple as Possible but not Simpler
2017-02-24BiDAF (ensemble)81.525 Bidirectional Attention Flow for Machine Comprehension
2017-03-08r-net (single model)82.458 https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf
2017-03-08r-net (ensemble)84.006 https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf
2017-03-31Document Reader (single model)79.353 Reading Wikipedia to Answer Open-Domain Questions
2017-04-20SEDT+BiDAF (single model)77.971 Structural Embedding of Syntactic Trees for Machine Comprehension
2017-04-20SEDT+BiDAF (ensemble)81.53 Structural Embedding of Syntactic Trees for Machine Comprehension
2017-04-24Ruminating Reader (single model)79.821 Ruminating Reader: Reasoning with Gated Multi-Hop Attention
2017-05-08Mnemonic reader (single model)79.207 Mnemonic Reader for Machine Comprehension
2017-05-08Mnemonic reader (ensemble)81.863 Mnemonic Reader for Machine Comprehension
2017-05-31RaSoR (single model)78.741 Learning Recurrent Span Representations for Extractive Question Answering
2017-05-31jNet (single model)79.456 Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering
2017-05-31jNet (ensemble)81.517 Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering
2017-06-20ReasoNet ensemble82.9 ReasoNet: Learning to Stop Reading in Machine Comprehension
2017-07-28MEMEN82.66 MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension
2017-08-16DCN+ (ensemble)85.619 The Stanford Question Answering Dataset
2017-08-21RMR (ensemble)84.888 Mnemonic Reader for Machine Comprehension
2017-09-20AIR-FusionNet (ensemble)85.936 The Stanford Question Answering Dataset
bAbi 20 QA (10k training examples)
DateAlgorithm% correctPaper / Source
2015-02-19MemNN-AM+NG+NL (1k + strong supervision)93.3 Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
2015-03-31MemN2N-PE+LS+RN93.4 End-To-End Memory Networks
2016-01-05DNC96.2 https://www.gwern.net/docs/2016-graves.pdf
2016-06-30DMN+97.2 Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes
2016-09-27SDNC97.1 Query-Reduction Networks for Question Answering
2016-12-09QRN99.7 Query-Reduction Networks for Question Answering
2016-12-12EntNet99.5 Tracking the World State with Recurrent Entity Networks
bAbi 20 QA (1k training examples)
DateAlgorithm% correctPaper / Source
2015-03-31MemN2N-PE+LS+RN86.1 End-To-End Memory Networks
2015-06-24DMN93.6 Ask Me Anything: Dynamic Memory Networks for Natural Language Processing
2016-12-09DMN+66.8 Query-Reduction Networks for Question Answering (source code) (algorithm from Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes)
2016-12-09QRN90.1 Query-Reduction Networks for Question Answering
2016-12-12EntNet89.1 Tracking the World State with Recurrent Entity Networks
2017-03-07GA+MAGE (16)91.3 Linguistic Knowledge as Memory for Recurrent Neural Networks
bAbi Children's Book comprehension CBtest CN
DateAlgorithm% correctPaper / Source
2016-03-04AS reader (greedy)67.5 Text Understanding with the Attention Sum Reader Network
2016-03-04AS reader (avg)68.9 Text Understanding with the Attention Sum Reader Network
2016-06-05GA reader69.4 Gated-Attention Readers for Text Comprehension
2016-06-07EpiReader67.4 Natural Language Comprehension with the EpiReader
2016-08-04AoA reader69.4 Attention-over-Attention Neural Networks for Reading Comprehension
2016-12-01GA +feature, fix L(w)70.7 Gated-Attention Readers for Text Comprehension
2016-12-01NSE71.9 Gated-Attention Readers for Text Comprehension (algorithm from Neural Semantic Encoders)
bAbi Children's Book comprehension CBtest NE
DateAlgorithm% correctPaper / Source
2016-03-04AS reader (avg)70.6 Text Understanding with the Attention Sum Reader Network
2016-03-04AS reader (greedy)71.0 Text Understanding with the Attention Sum Reader Network
2016-06-05GA reader71.9 Gated-Attention Readers for Text Comprehension
2016-06-07EpiReader69.7 Natural Language Comprehension with the EpiReader
2016-06-07AIA71.0 Iterative Alternating Neural Attention for Machine Reading
2016-06-07AIA72.0 Iterative Alternating Neural Attention for Machine Reading
2016-08-04AoA reader72.0 Attention-over-Attention Neural Networks for Reading Comprehension
2016-12-01NSE73.2 Gated-Attention Readers for Text Comprehension (algorithm from Neural Semantic Encoders)
2016-12-01GA +feature, fix L(w)74.9 Gated-Attention Readers for Text Comprehension

Scientific and Technical capabilities

Arguably, reading and understanding scientific, technical, engineering and medical documents would be taxonomically related to general reading comprehension, but these technical tasks are probably much more difficult, and will almost certainly require separate efforts to solve. So we classify them separately for now. We also classify some of these problems as superintelligent, because only a tiny fraction of humans can read STEM papers, and only a minuscule fraction of humans are capable of reasonably comprehending STEM papers across a large range of fields.

In [33]:
from data.stem import *
[Figure: example Magic The Gathering (MTG) and Hearthstone (HS) cards, and the corresponding MTG card implementation in Java]

Generating computer programs from specifications

A particularly interesting technical problem, which may be slightly harder than problems with very clear constraints like circuit design, is generating computer programs from natural language specifications (which will often contain ambiguities of various sorts). This problem is presently very much unsolved, though there is now at least one good metric / dataset for it: [Deepmind's "card2code" dataset](https://github.com/deepmind/card2code) of Magic the Gathering and Hearthstone cards, along with Java and Python implementations (respectively) of the logic on the cards. Shown below is a figure from [_Ling, et al. 2016_](https://arxiv.org/abs/1603.06744v1) with their Latent Predictor Networks generating part of the code output for a Hearthstone card:
In [34]:
card2code_hs_acc.graph()
In [35]:
HTML(card2code_hs_acc.table())

Answering Science Exam Questions

Science exam question answering is a multifaceted task that pushes the limits of artificial intelligence. As indicated by the example questions pictured, successful science exam QA requires natural language understanding, reasoning, situational modeling, and commonsense knowledge; a challenge problem for which information-retrieval methods alone are not sufficient to earn a "passing" grade.

The [AI2 Science Questions](http://data.allenai.org/ai2-science-questions/) dataset provided by the Allen Institute for Artificial Intelligence (AI2) is a freely available collection of 5,059 real science exam questions derived from a variety of regional and state science exams. Project Aristo at AI2 is focused on the task of science question answering – the Aristo system is composed of a suite of various knowledge extraction methods, diagram processing tools, and solvers. As a reference point, the system currently achieves the following scores on these sets of non-diagram multiple choice (NDMC) and diagram multiple choice (DMC) science questions at two different grade levels. Allen Institute staff [claim these states of the art](https://github.com/AI-metrics/AI-metrics/pull/60) for Aristo [Scores are listed as "Subset (Train/Dev/Test)"]:

  • Elementary NDMC (63.2/60.2/61.3)
  • Elementary DMC (41.8/41.3/36.3)
  • Middle School NDMC (55.5/57.6/57.9)
  • Middle School DMC (38.4/35.3/34.3)

Another science question answering dataset that has been studied in the literature is based specifically on New York Regents 4th grade science exam tests:

In [36]:
ny_4_science.graph()

Learning to Learn

Generalisation and Transfer Learning

ML systems are making strong progress at solving specific problems when given sufficient training data. But we know that humans are capable of transfer learning -- applying things they've learned in one context, with appropriate variation, to another context. Humans are also very general: rather than just being taught to perform specific tasks, a single agent is able to do a very wide range of tasks, learning new skills or drawing on old ones as the situation requires.

In [38]:
generalisation = Problem("Building systems that solve a wide range of diverse problems, rather than just specific ones")
generalisation.metric("Solve all other solved problems in this document, with a single system", solved=False)

transfer_learning = Problem("Transfer learning: apply relevant knowledge from a prior setting to a new slightly different one")
arcade_transfer = Problem("Transfer of learning within simple arcade game paradigms")

generalisation.add_subproblem(transfer_learning)
transfer_learning.add_subproblem(arcade_transfer)

# These will need to be specified a bit more clearly to be proper metrics, eg "play galaga well having trained on Xenon 2" or whatever
# the literature has settled on
# arcade_transfer.metric("Transfer learning of platform games")
# arcade_transfer.metric("Transfer learning of vertical shooter games")
# arcade_transfer.metric("Transfer from a few arcade games to all of them")

one_shot_learning = Problem("One shot learning: ingest important truths from a single example", ["agi", "world-modelling"])

uncertain_prediction = Problem("Correctly identify when an answer to a classification problem is uncertain")
uncertain_prediction.notes = "Humans can usually tell when they don't know something. Present ML classifiers do not have this ability."

interleaved_learning = Problem("Learn several tasks without undermining performance on the first task, avoiding catastrophic forgetting", url="https://arxiv.org/abs/1612.00796")

Safety and Security Problems

The notion of "safety" for AI and ML systems can encompass many things. In some cases it's about ensuring that the system meets various sorts of constraints, either in general or for specifically safety-critical purposes, such as correct detection of pedestrians for self driving cars.

"Adversarial Examples" and manipulation of ML classifiers

In [39]:
adversarial_examples = Problem("Resistance to adversarial examples", ["safety", "agi", "security"], url="https://arxiv.org/abs/1312.6199")

adversarial_examples.notes = """
We know that humans have significant resistance to adversarial examples.  Although methods like camouflage sometimes
work to fool us into thinking one thing is another, those methods require large, structured changes to a scene,
quite unlike the tiny, carefully crafted perturbations that can fool current ML classifiers.
"""

Safety of Reinforcement Learning Agents and similar systems

In [40]:
# This section is essentially on teaching ML systems ethics and morality. Amodei et al call this "scalable supervision".
scalable_supervision = Problem("Scalable supervision of a learning system", ["safety", "agi"], url="https://arxiv.org/abs/1606.06565")
cirl = Problem("Cooperative inverse reinforcement learning of objective functions", ["safety", "agi"], url="https://arxiv.org/abs/1606.03137")
cirl.notes = "This is tagged agi because most humans are able to learn ethics from their surrounding community"
# Co-operative inverse reinforcement learning might be equivalent to solving scalable supervision, or there might other subproblems here
scalable_supervision.add_subproblem(cirl)

safe_exploration = Problem("Safe exploration", ["safety", "agi", "world-modelling"], url="https://arxiv.org/abs/1606.06565")
safe_exploration.notes = """
Sometimes, even doing something once is catastrophic. In such situations, how can an RL agent or some other AI system
learn about the catastrophic consequences without even taking the action once? This is an ability that most humans acquire
at some point between childhood and adolescence.
"""
# safe exploration may be related to one shot learning, though it's probably too early to mark that so clearly.

The work by Saunders et al. (2017) is an example of attempting to deal with the safe exploration problem by human-in-the-loop supervision. Without this oversight, a reinforcement learning system may engage in "reward hacking" in some Atari games. For instance in the Atari 2600 Road Runner game, an RL agent may deliberately kill itself to stay on level 1, because it can get more points on that level than it can on level 2 (particularly when it has not yet learned to master level 2). Human oversight overcomes this problem:

In [41]:
# hiddencode
HTML("""
<video id="video" width="80%" height="45%" controls poster="images/road-runner-poster.jpg" onclick="this.paused?this.play():this.pause();">
   <source src="video/saunders-roadrunner.mp4" type="video/mp4">
</video>

<div style="text-align:right; margin-right:20%">
  Further videos from that project are <a href="https://www.youtube.com/playlist?list=PLjs9WCnnR7PCn_Kzs2-1afCsnsBENWqor">on YouTube</a>.
</div>
""")
Out[41]:
Further videos from that project are on YouTube.
In [42]:
avoiding_reward_hacking = Problem("Avoiding reward hacking", ["safety"], url="https://arxiv.org/abs/1606.06565")
avoiding_reward_hacking.notes = """
Humans have only partial resistance to reward hacking.
Addiction seems to be one failure to exhibit this resistance.
Avoiding learning something because it might make us feel bad, or even building elaborate systems of self-deception, are also sometimes
seen in humans. So this problem is not tagged "agi".
"""

avoiding_side_effects = Problem("Avoiding undesirable side effects", ["safety"], url="https://arxiv.org/abs/1606.06565")
avoiding_side_effects.notes = """
Many important constraints on good behaviour will not be explicitly
encoded in the goal specification, either because they are too hard to capture
or simply because there are so many of them and they are hard to enumerate.
"""

robustness_to_distributional_change = Problem("Function correctly in novel environments (robustness to distributional change)", ["safety", "agi"], url="https://arxiv.org/abs/1606.06565")

copy_bounding = Problem("Know how to prevent an autonomous AI agent from reproducing itself an unbounded number of times", ["safety"])

safety = Problem("Know how to build general AI agents that will behave as expected")
safety.add_subproblem(adversarial_examples)
safety.add_subproblem(scalable_supervision)
safety.add_subproblem(safe_exploration)
safety.add_subproblem(avoiding_reward_hacking)
safety.add_subproblem(avoiding_side_effects)
safety.add_subproblem(robustness_to_distributional_change)
safety.add_subproblem(copy_bounding)

Automated Hacking Systems

Automated tools are becoming increasingly effective both for offensive and defensive computer security purposes.

On the defensive side, fuzzers and static analysis tools have been used for some time by well-resourced software development teams to reduce the number of vulnerabilities in the code they ship.

Assisting both offense and defense, DARPA has recently started running the Cyber Grand Challenge contest to measure and improve the ability of agents to either break into systems or defend those same systems against vulnerabilities. It isn't necessarily clear how such initiatives would change the security of various systems.

This section includes some clear AI problems (like learning to find exploitable vulnerabilities in code) and some less pure AI problems, such as ensuring that defensive versions of this technology (whether in the form of fuzzers, IPSes, or other things) are deployed on all critical systems.

In [43]:
# It isn't totally clear whether having automated systems be good at finding bugs in and of itself will make the deployment
# of AI technologies safer or less safe, so we tag this both with "safety" and as a potentially "unsafe" development
bug_finding = Problem("Detect security-related bugs in codebases", ["safety", "security", "unsafe"])

defensive_deployment = Problem("Deploy automated defensive security tools to protect valuable systems")
defensive_deployment.notes = """
It is clearly important to ensure that the state of the art in defensive technology is deployed everywhere
that matters, including systems that perform important functions or have sensitive data on them (smartphones, for instance), and
systems that have significant computational resources. This "Problem" is therefore less a question of algorithmic
progress than of deployment practice.
"""

Pedestrian Detection

Detecting pedestrians from images or video is a specific image classification problem that has received a lot of attention because of its importance for self-driving vehicles. Many metrics in this space are based on the Caltech Pedestrians dataset and toolkit, though the KITTI Vision Benchmark goes beyond that to include cars and cyclists in addition to pedestrians. We may want to write scrapers for Caltech's published results and KITTI's live results table.
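
The metrics defined in the next cell report performance as a frontier of miss rate versus false positives per image (FPPI). One standard way to collapse that frontier into a single number is the log-average miss rate used by the Caltech benchmark: miss rate is sampled at FPPI values log-spaced between 10^-2 and 10^0 and averaged in log space. A sketch of that computation, roughly following Dollár et al. and assuming the curve points are already available (illustrative only, not code used by this notebook):

import numpy as np

def log_average_miss_rate(fppi, miss_rate, n_points=9):
    # fppi must be sorted ascending; miss_rate gives the curve value at each fppi.
    refs = np.logspace(-2.0, 0.0, n_points)
    sampled = np.interp(np.log(refs), np.log(fppi), miss_rate)
    # Average in log space (a geometric mean), guarding against zero miss rates.
    return np.exp(np.mean(np.log(np.maximum(sampled, 1e-10))))

# Lower is better; e.g. a curve hovering around a 10% miss rate scores roughly 0.10.
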

In [44]:
pedestrian_detection = Problem("Pedestrian, bicycle & obstacle detection", ["safety", "vision"])
image_classification.add_subproblem(pedestrian_detection)

# TODO: import data from these pedestrian datasets/metrics.
# Performance on them is a frontier of miss rate / false positive tradeoffs,
# so we'll need to choose how to handle that as a scale

# http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/rocs/UsaTestRocReasonable.pdf
pedestrian_detection.metric("Caltech Pedestrians USA", url="http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/")
# http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/rocs/InriaTestRocReasonable.pdf
pedestrian_detection.metric("INRIA persons", url="http://pascal.inrialpes.fr/data/human/")
# http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/rocs/ETHRocReasonable.pdf
pedestrian_detection.metric("ETH Pedestrian", url="http://www.vision.ee.ethz.ch/~aess/dataset/")
# http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/rocs/TudBrusselsRocReasonable.pdf
pedestrian_detection.metric("TUD-Brussels Pedestrian", url="http://www.d2.mpi-inf.mpg.de/tud-brussels")
# http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/rocs/DaimlerRocReasonable.pdf
pedestrian_detection.metric("Damiler Pedestrian", url="http://www.gavrila.net/Datasets/Daimler_Pedestrian_Benchmark_D/Daimler_Mono_Ped__Detection_Be/daimler_mono_ped__detection_be.html")
Out[44]:
Metric("Damiler Pedestrian")

Explainability and Interpretability

In [45]:
explainability = Problem("Modify arbitrary ML systems in order to be able to provide comprehensible human explanations of their decisions")

statistical_explainability = Problem("Provide mathematical or technical explanations of decisions from classifiers")
statistical_explainability.notes = """
Providing explanations with techniques such as Monte Carlo analysis may in general
be easier than providing robust explanations in natural language (since the latter may or may not
exist in all cases)
"""

explainability.add_subproblem(statistical_explainability)

Fairness and Debiasing

Biased decision making is a problem exhibited both by very simple machine learning classifiers as well as much more complicated ones. Large drivers of this problem include omitted-variable bias, reliance on inherently biased data sources for training data, attempts to make predictions from insufficient quantities of data, and deploying systems that create real-world incentives that change the behaviour they were measuring (see Goodhart's Law).

These problems are severe and widespread in the deployment of scoring and machine learning systems in contexts that include criminal justice, education policy, insurance and lending.

In [46]:
avoiding_bias = Problem("Build systems which can recognise and avoid biased decision making", ["safety"])

avoiding_bias.notes = """
Legally institutionalised protected categories represent only the most extreme and socially recognised
forms of biased decisionmaking. Attentive human decision makers are sometimes capable of recognising
and avoiding many more subtle biases. This problem tracks AI systems' ability to do likewise.
"""

avoid_classification_biases = Problem("Train ML classifiers in a manner that corrects for the impact of omitted-variable bias on certain groups", solved=True)
avoid_classification_biases.notes = '''
Several standards are available for avoiding classification biases.

They include holding false-positive / false adverse prediction rates constant across protected categories (which roughly maps 
to "equal opportunity"), holding both false-positive and false-negative rates equal ("demographic parity"), and ensuring
that the fraction of each protected group that receives a given prediction is constant across all groups 
(roughly equivalent to "affirmative action").'''

avoid_classification_biases.metric("Adjust prediction models to have constant false-positive rates", url="https://arxiv.org/abs/1610.02413", solved=True)
avoid_classification_biases.metric("Adjust prediction models tos have constant false-positive and -negative rates", url="http://www.jmlr.org/proceedings/papers/v28/zemel13.pdf", solved=True)
Out[46]:
Metric("Adjust prediction models tos have constant false-positive and -negative rates")

Privacy

Many of the interesting privacy problems that will arise from AI and machine learning will come from choices about the applications of the technology, rather than a lack of algorithmic progress within the field. But there are some exceptions, which we will track here.

In [47]:
private_training = Problem("Train machine learning systems on private user data, without transferring sensitive facts into the model")
private_training.metric("Federated Learning (distributed training with thresholded updates to models)", solved=True, url="https://arxiv.org/abs/1602.05629")

avoid_privacy_bias = Problem("Fairness in machine learning towards people with a preference for privacy")
avoid_privacy_bias.notes = """
People who care strongly about their own privacy take many measures to obfuscate their tracks through
technological society, including using fictitious names, email addresses, etc., in their routine dealings with
corporations, and installing software to block or send inaccurate data to online trackers. Like many other groups,
these people may be subject to unfairly adverse algorithmic decisionmaking. Treating them as a protected
group will be more difficult, because they are in many respects harder to identify.
"""
In [48]:
# hiddencode
def counts():
    print ("Included thus far:")
    print ("=================================")
    print (len(problems), "problems")
    print (len(metrics), "metrics", len([m for m in metrics.values() if m.solved]), "solved")
    print (len(measurements), "measurements")
    print (len([p for p in problems.values() if not p.metrics]), "problems which do not yet have any metrics (either not in this notebook, or none in the open literature)")
    print ("=================================\n")
    print ("Problems by Type:")
    print ("=================================")

    by_attr = {}
    solved_by_attr = {}
    for a in all_attributes:
        print (a, len([p for p in problems.values() if a in p.attributes]), )
        print ("solved:", len([p for p in problems.values() if p.solved and a in p.attributes]))

    print ("\nMetrics by Type:")
    print ("=================================")

    by_attr = {}
    solved_by_attr = {}
    for a in all_attributes:
        print (a, sum([len(p.metrics) for p in problems.values() if a in p.attributes]), )
        print ("solved:", sum([len([m for m in p.metrics if m.solved]) for p in problems.values() if a in p.attributes]))
    print ("=================================\n")
In [49]:
# hiddencode
def list_problems():
    for p in sorted(problems.values(), key=lambda x: x.attributes):
        if not p.superproblems:
            p.print_structure()
            print("")
In [50]:
# hiddencode
def venn_report():
    print("Sample of problems characterized thus far:")
    lang = set(p for p in problems.values() if "language" in p.attributes)
    world = set(p for p in problems.values() if "world-modelling" in p.attributes)
    vision = set(p for p in problems.values() if "vision" in p.attributes)

    from matplotlib_venn import venn3
    venn3((lang, world, vision), ('Language Problems', 'World-Modelling Problems', 'Vision Problems'))
    plt.show()
In [51]:
# hiddencode
def graphs():
    print("Graphs of progress:")
    for name, metric in metrics.items():
        if len(metric.measures) > 2 and not metric.graphed:
            print(name, "({0} measurements)".format(len(metric.measures)))
            metric.graph()
    plt.show()
                
graphs()
Graphs of progress:

Taxonomy and recorded progress to date

In [52]:
list_problems()
Problem(Train machine learning systems on private user data, without transferring sensitive facts into the model)
    Metric(Federated Learning (distributed training with thresholded updates to models))SOLVED

Problem(Correctly identify when an answer to a classification problem is uncertain)

Problem(Deploy automated defensive security tools to protect valuable systems)

Problem(Learn several tasks without undermining performance on the first task, avoiding catastrophic forgetting)

Problem(Train ML classifiers in a manner that corrects for the impact of omitted-variable bias on certain groups)
    Metric(Adjust prediction models to have constant false-positive rates)SOLVED
    Metric(Adjust prediction models to have constant false-positive and -negative rates)SOLVED

Problem(Fairness in machine learning towards people with a preference for privacy)

Problem(Building systems that solve a wide range of diverse problems, rather than just specific ones)
    Metric(Solve all other solved problems in this document, with a single system)?
    Problem(Transfer learning: apply relevant knowledge from a prior setting to a new slightly different one)
        Problem(Transfer of learning within simple arcade game paradigms)

Problem(Modify arbitrary ML systems in order to be able to provide comprehensible human explanations of their decisions)
    Problem(Provide mathematical or technical explanations of decisions from classifiers)

Problem(Know how to build general AI agents that will behave as expected)
    Problem(Resistance to adversarial examples)
    Problem(Scalable supervision of a learning system)
        Problem(Cooperative inverse reinforcement learning of objective functions)
    Problem(Safe exploration)
    Problem(Avoiding reward hacking)
    Problem(Avoiding undesirable side effects)
    Problem(Function correctly in novel environments (robustness to distributional change))
    Problem(Know how to prevent an autonomous AI agent from reproducing itself an unbounded number of times)

Problem(Abstract strategy games)
    Problem(Playing abstract games with extensive hints)
        Metric(Computer Chess)                                      SOLVED
        Metric(Computer Go)                                         SOLVED
    Problem(Superhuman mastery of arbitrary abstract strategy games)
        Metric(mastering chess)                                     ?
    Problem(Learning the rules of complex strategy games from examples)
        Metric(learning chess)                                      ?
        Metric(learning go)                                         ?
    Problem(Play an arbitrary abstract game, first learning the rules)

Problem(Translation between human languages)
    Metric(news-test-2014 En-Fr BLEU)                           not solved
    Metric(news-test-2014 En-De BLEU)                           not solved
    Metric(news-test-2016 En-Ro BLEU)                           not solved

Problem(Conduct arbitrary sustained, probing conversation)
    Problem(Turing test for casual conversation)
        Metric(The Loebner Prize scored selection answers)          not solved
    Problem(Language comprehension and question-answering)
        Metric(bAbi 20 QA (10k training examples))                  SOLVED
        Metric(bAbi 20 QA (1k training examples))                   not solved
        Metric(Reading comprehension MCTest-160-all)                ?
        Metric(Reading comprehension MCTest-500-all)                ?
        Metric(bAbi Children's Book comprehension CBtest NE)        not solved
        Metric(bAbi Children's Book comprehension CBtest CN)        not solved
        Metric(CNN Comprehension test)                              ?
        Metric(Daily Mail Comprehension test)                       ?
        Metric(Stanford Question Answering Dataset EM test)         not solved
        Metric(Stanford Question Answering Dataset F1 test)         not solved

Problem(Vision)
    Problem(Image classification)
        Metric(Imagenet Image Recognition)                          SOLVED
        Metric(MSRC-21 image semantic labelling (per-class))        ?
        Metric(MSRC-21 image semantic labelling (per-pixel))        ?
        Metric(CIFAR-100 Image Recognition)                         ?
        Metric(CIFAR-10 Image Recognition)                          SOLVED
        Metric(Street View House Numbers (SVHN))                    SOLVED
        Metric(MNIST handwritten digit recognition)                 SOLVED
        Metric(STL-10 Image Recognition)                            ?
        Metric(Leeds Sport Poses)                                   ?
        Problem(Image comprehension)
            Metric(COCO Visual Question Answering (VQA) real images 1.0 open ended)not solved
            Metric(COCO Visual Question Answering (VQA) real images 1.0 multiple choice)?
            Metric(COCO Visual Question Answering (VQA) abstract images 1.0 open ended)not solved
            Metric(COCO Visual Question Answering (VQA) abstract 1.0 multiple choice)?
            Metric(Toronto COCO-QA)                                     ?
            Metric(DAQUAR)                                              not solved
            Metric(Visual Genome (pairs))                               ?
            Metric(Visual Genome (subjects))                            ?
            Metric(Visual7W)                                            ?
            Metric(FM-IQA)                                              ?
            Metric(Visual Madlibs)                                      ?
            Metric(COCO Visual Question Answering (VQA) real images 2.0 open ended)?
        Problem(Pedestrian, bicycle & obstacle detection)
            Metric(Caltech Pedestrians USA)                             ?
            Metric(INRIA persons)                                       ?
            Metric(ETH Pedestrian)                                      ?
            Metric(TUD-Brussels Pedestrian)                             ?
            Metric(Daimler Pedestrian)                                  ?
    Problem(Recognise events in videos)
        Metric(YouTube-8M video labelling)                          ?

Problem(One shot learning: ingest important truths from a single example)

Problem(Detection of Instrumentals musical tracks)
    Metric(Precision of Instrumentals detection reached when tested on SATIN (Bayle et al. 2017))not solved

Problem(Speech Recognition)
    Metric(Word error rate on Switchboard trained against the Hub5'00 dataset)SOLVED
    Metric(librispeech WER testclean)                           SOLVED
    Metric(librispeech WER testother)                           SOLVED
    Metric(wsj WER eval92)                                      SOLVED
    Metric(wsj WER eval93)                                      SOLVED
    Metric(swb_hub_500 WER fullSWBCH)                           ?
    Metric(fisher WER)                                          ?
    Metric(chime clean)                                         ?
    Metric(chime real)                                          ?
    Metric(timit PER)                                           ?

Problem(Accurate modelling of human language.)
    Metric(Penn Treebank (Perplexity when parsing English sentences))?
    Metric(Hutter Prize (bits per character to encode English text))SOLVED
    Metric(LAMBADA prediction of words in discourse)            not solved

Problem(Given an arbitrary technical problem, solve it as well as a typical professional in that field)
    Problem(Writing software from specifications)
        Metric(Card2Code)                                           ?
    Problem(Solve vaguely or under-constrained technical problems)
        Problem(Read a scientific or technical paper, and comprehend its contents)
            Problem(Extract major numerical results or progress claims from a STEM paper)
                Metric(Automatically find new relevant ML results on arXiv) ?
        Problem(Write computer programs from specifications)
            Metric(Card2Code MTG accuracy)                              not solved
            Metric(Card2Code Hearthstone accuracy)                      not solved
            Problem(Parse and implement complex conditional expressions)
        Problem(Answering Science Exam Questions)
            Metric(NY Regents 4th Grade Science Exams)                  not solved
            Metric(Elementary Non-Diagram Multiple Choice (NDMC) Science Exam accuracy)not solved
            Metric(Elementary Diagram Multiple Choice (DMC) Science Exam accuracy)not solved
            Metric(Middle School Non-Diagram Multiple Choice (NDMC) Science Exam accuracy)not solved
            Metric(Middle School Diagram Multiple Choice (DMC) Science Exam accuracy)not solved
    Problem(Solve technical problems with clear constraints (proofs, circuit design, aerofoil design, etc))
        Problem(Given examples of proofs, find correct proofs of simple mathematical theorems)
            Metric(HolStep)                                             ?
        Problem(Given desired circuit characteristics, and many examples, design new circuits to spec)

Problem(Build systems which can recognise and avoid biased decision making)

Problem(Detect security-related bugs in codebases)

Problem(Be able to generate a complex scene, e.g. a baboon receiving their degree at convocation.)
    Problem(Drawing pictures)
        Metric(Generative models of CIFAR-10 images)                ?

Problem(Play real-time computer & video games)
    Problem(Games that require inventing novel language, forms of speech, or communication)
        Problem(Games that require both understanding and speaking a language)
            Metric(Starcraft)                                           ?
            Problem(Games that require language comprehension)
    Problem(Simple video games)
        Metric(Atari 2600 Alien)                                    not solved
        Metric(Atari 2600 Amidar)                                   SOLVED
        Metric(Atari 2600 Assault)                                  SOLVED
        Metric(Atari 2600 Asterix)                                  SOLVED
        Metric(Atari 2600 Asteroids)                                not solved
        Metric(Atari 2600 Atlantis)                                 SOLVED
        Metric(Atari 2600 Bank Heist)                               SOLVED
        Metric(Atari 2600 Battle Zone)                              not solved
        Metric(Atari 2600 Beam Rider)                               SOLVED
        Metric(Atari 2600 Berzerk)                                  SOLVED
        Metric(Atari 2600 Bowling)                                  not solved
        Metric(Atari 2600 Boxing)                                   SOLVED
        Metric(Atari 2600 Breakout)                                 SOLVED
        Metric(Atari 2600 Centipede)                                SOLVED
        Metric(Atari 2600 Chopper Command)                          SOLVED
        Metric(Atari 2600 Crazy Climber)                            SOLVED
        Metric(Atari 2600 Demon Attack)                             SOLVED
        Metric(Atari 2600 Double Dunk)                              SOLVED
        Metric(Atari 2600 Enduro)                                   SOLVED
        Metric(Atari 2600 Fishing Derby)                            SOLVED
        Metric(Atari 2600 Freeway)                                  SOLVED
        Metric(Atari 2600 Frostbite)                                SOLVED
        Metric(Atari 2600 Gopher)                                   SOLVED
        Metric(Atari 2600 Gravitar)                                 not solved
        Metric(Atari 2600 HERO)                                     SOLVED
        Metric(Atari 2600 Ice Hockey)                               SOLVED
        Metric(Atari 2600 James Bond)                               SOLVED
        Metric(Atari 2600 Kangaroo)                                 SOLVED
        Metric(Atari 2600 Krull)                                    SOLVED
        Metric(Atari 2600 Kung-Fu Master)                           SOLVED
        Metric(Atari 2600 Montezuma's Revenge)                      not solved
        Metric(Atari 2600 Ms. Pacman)                               not solved
        Metric(Atari 2600 Name This Game)                           SOLVED
        Metric(Atari 2600 Pong)                                     SOLVED
        Metric(Atari 2600 Private Eye)                              not solved
        Metric(Atari 2600 Q*Bert)                                   SOLVED
        Metric(Atari 2600 River Raid)                               SOLVED
        Metric(Atari 2600 Road Runner)                              SOLVED
        Metric(Atari 2600 Robotank)                                 SOLVED
        Metric(Atari 2600 Seaquest)                                 SOLVED
        Metric(Atari 2600 Space Invaders)                           SOLVED
        Metric(Atari 2600 Star Gunner)                              SOLVED
        Metric(Atari 2600 Tennis)                                   SOLVED
        Metric(Atari 2600 Time Pilot)                               SOLVED
        Metric(Atari 2600 Tutankham)                                SOLVED
        Metric(Atari 2600 Up and Down)                              SOLVED
        Metric(Atari 2600 Venture)                                  SOLVED
        Metric(Atari 2600 Video Pinball)                            SOLVED
        Metric(Atari 2600 Wizard of Wor)                            SOLVED
        Metric(Atari 2600 Zaxxon)                                   SOLVED
        Metric(Atari 2600 Phoenix)                                  SOLVED
        Metric(Atari 2600 Pit Fall)                                 not solved
        Metric(Atari 2600 Skiing)                                   not solved
        Metric(Atari 2600 Solaris)                                  not solved
        Metric(Atari 2600 Yars Revenge)                             SOLVED
        Metric(Atari 2600 Defender)                                 SOLVED
        Metric(Atari 2600 Pitfall!)                                 not solved
        Metric(Atari 2600 Surround)                                 SOLVED

Problems and Metrics by category

In [53]:
counts()
venn_report()
Included thus far:
=================================
58 problems
134 metrics 62 solved
1702 measurements
34 problems which do not yet have any metrics (either not in this notebook, or none in the open literature)
=================================

Problems by Type:
=================================
agi 27
solved: 2
language 12
solved: 1
communication 2
solved: 0
world-modelling 13
solved: 0
unsafe 1
solved: 0
qa 1
solved: 0
math 2
solved: 0
safety 11
solved: 0
languge 1
solved: 0
abstract-games 5
solved: 2
science 1
solved: 0
security 2
solved: 0
super 2
solved: 0
vision 6
solved: 0
realtime-games 2
solved: 0

Metrics by Type:
=================================
agi 112
solved: 57
language 41
solved: 7
communication 1
solved: 0
world-modelling 82
solved: 47
unsafe 0
solved: 0
qa 5
solved: 0
math 1
solved: 0
safety 5
solved: 0
languge 0
solved: 0
abstract-games 5
solved: 2
science 5
solved: 0
security 0
solved: 0
super 1
solved: 0
vision 27
solved: 4
realtime-games 58
solved: 46
=================================

Sample of problems characterized thus far:

How to contribute to this notebook

This notebook is an open source, community effort. It lives on Github at https://github.com/AI-metrics/AI-metrics. You can help by adding new metrics, data and problems to it! If you're feeling ambitious you can also improve its semantics or build new analyses into it. Here are some high level tips on how to do that.

0. The easiest way -- just hit the edit button

Next to every table of results (not yet next to the graphs) you'll find an "Add/edit data on Github" link. You can just click it, and you should get taken to Github's online editor, which should make it easy to add new results or fix existing ones and send us a pull request. For best results, make sure you're logged in to Github.

1. If you're comfortable with git and Jupyter Notebooks, or are happy to learn

If you're interested in making more extensive changes to the Notebook, and you've already worked a lot with git and IPython/Jupyter Notebooks, you can run and edit a copy locally. This is a fairly involved process (Jupyter Notebook and git interact in a somewhat complicated way), but here's a quick list of steps that should mostly work:

  1. Install Jupyter Notebook and git.
    • On an Ubuntu or Debian system, you can do:
      sudo apt-get install git
      sudo apt-get install ipython-notebook || sudo apt-get install jupyter-notebook || sudo apt-get install python-notebook
    • Make sure you have IPython Notebook version 3 or higher. If your OS doesn't provide it, you might need to enable backports, or use pip to install it.
  2. Install this notebook's Python dependencies:
    • On Ubuntu or Debian, do:
          sudo apt-get install python-{cssselect,lxml,matplotlib{,-venn},numpy,requests,seaborn}
    • On other systems, use your native OS packages, or use pip:
          pip install cssselect lxml matplotlib{,-venn} numpy requests seaborn
  3. Fork our repo on github: https://github.com/AI-metrics/AI-metrics#fork-destination-box
  4. Clone the repo on your machine, and cd into the directory it's using
  5. Configure your copy of git to use IPython Notebook merge filters to prevent conflicts when multiple people edit the Notebook simultaneously. You can do that with these two commands in the cloned repo:
    git config --file .gitconfig filter.clean_ipynb.clean $PWD/ipynb_drop_output
    git config --file .gitconfig filter.clean_ipynb.smudge cat
  6. Run Jupyter Notebook in the project directory (the command may be ipython notebook, jupyter notebook, jupyter-notebook, or python notebook depending on your system), then go to localhost:8888 and edit the Notebook to your heart's content

  7. Save and commit your work (git commit -a -m "DESCRIPTION OF WHAT YOU CHANGED")

  8. Push it to your remote repo
  9. Send us a pull request!

Notes on importing data

  • Each .measure() call is a data point of a specific algorithm on a specific metric/dataset. Thus one paper will often produce multiple measurements on multiple metrics. It's most important to enter results that were at or near the frontier of best performance on the date they were published. This isn't a strict requirement, though; it's nice to have a sense of the performance of the field, or of algorithms that are otherwise notable even if they aren't at the frontier for a specific problem. (A schematic example of a measurement entry appears after this list.)
  • When multiple revisions of a paper (typically on arXiv) have the same results on some metric, use the date of the first version (the CBTest results in this paper are an example)
  • When subsequent revisions of a paper improve on the original results (example), use the date and scores of the first results, or, if each revision is interesting or on the frontier of best performance, include each revision as a separate measurement
    • We didn't check this carefully for our first ~100 measurement data points :(. To denote that we have checked which revision of an arXiv preprint first published a result, cite the specific version (https://arxiv.org/abs/1606.01549v3 rather than https://arxiv.org/abs/1606.01549). That way, we can see which earlier entries should still be double-checked for this form of inaccuracy.
  • Where possible, use a clear short name or acronym for each algorithm. The full paper name can go in the papername field (and is auto-populated for some papers). When matplotlib 2.1 ships we may be able to get nice rollovers with metadata like this. Or perhaps we can switch to D3 to get that type of interactivity.
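
For concreteness, here is a rough sketch of what entering one paper's results into one of the data/ files might look like. The metric object, scores, and URLs below are invented for illustration, and the keyword arguments are only approximate; taxonomy.py and the existing data/ files are the authoritative reference for the exact .measure() signature.

    from datetime import date

    # Illustrative only: `example_metric` stands in for a Metric object defined
    # earlier in a data/ file, and the scores below are placeholders. The arXiv
    # link reuses the example ID from the notes above purely to show URL formatting;
    # cite the specific arXiv version that first reported each score.
    example_metric.measure(date(2016, 6, 5), 0.421, "ExampleNet",
                           url="https://arxiv.org/abs/1606.01549v1")
    example_metric.measure(date(2016, 11, 20), 0.387, "ExampleNet (improved)",
                           url="https://arxiv.org/abs/1606.01549v3")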

What to work on

  • If you know of ML datasets/metrics that aren't included yet, add them (a sketch of what a new entry can look like follows after this list)
  • If there are papers with interesting results for metrics that aren't included, add them
  • If you know of important problems that humans can solve, but that machine learning systems may or may not yet be able to, and they're missing from our taxonomy, you can propose them
  • Look at our Github issue list, perhaps starting with those tagged as good volunteer tasks.
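
As a rough illustration of the first two bullets, here is a sketch of what defining a new problem, metric, and measurement in one of the data/ files might look like. Every name, attribute tag, scale, score, and URL below is made up, and the constructor arguments are only approximate (for instance, error_rate is assumed to be one of the scales defined in scales.py); check taxonomy.py and the existing data/ entries for the exact signatures.

    from datetime import date

    # Hypothetical example: a new problem with one metric and one measurement.
    widget_recognition = Problem("Recognise widgets in photographs", ["vision", "agi"])
    widget_benchmark = widget_recognition.metric("Hypothetical widget-recognition benchmark",
                                                 url="https://example.org/widget-benchmark",
                                                 scale=error_rate)
    widget_benchmark.measure(date(2017, 3, 1), 0.25, "WidgetNet",
                             url="https://example.org/widgetnet-paper")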

Building on this data

If you want to use this data for some purpose beyond the scope of this Notebook, all of the raw data is exported as a JSON blob. This is not yet a stable API, but you can get the data at:

https://raw.githubusercontent.com/AI-metrics/AI-metrics/master/export-api/v01/progress.json
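
For instance, a minimal sketch of fetching the export and summarising it might look like the following. The field names (problems, metrics, measures, name) follow the attributes serialised by the export_json() cell at the end of this Notebook, and requests is just one convenient way to download the file.

    import requests

    URL = ("https://raw.githubusercontent.com/AI-metrics/AI-metrics/"
           "master/export-api/v01/progress.json")

    data = requests.get(URL).json()
    for problem in data["problems"]:
        for metric in problem.get("metrics", []):
            print("%s / %s: %d measurements"
                  % (problem.get("name"), metric.get("name"),
                     len(metric.get("measures", []))))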

License


Much of this Notebook is uncopyrightable data. The copyrightable portions of this Notebook that are written by EFF and other Github contributors are licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. Illustrations from datasets and text written by other parties remain copyrighted by their respective owners, if any, and may be subject to different licenses.

The source code is also dual-licensed under the GNU General Public License, version 2 or greater.

How to cite this document

In academic contexts, you can cite this document as: Peter Eckersley, Yomna Nasser _et al._, EFF AI Progress Measurement Project, (2017-) https://eff.org/ai/metrics, accessed on 2017-09-09, or the equivalent in the format you are working in.

If you would like to deep-link an exact version of the text of the Notebook for archival or historical purposes, you can do that using the Internet Archive or Github. In addition to keeping a record of changes, Github will render a specific version of the Notebook using URLs like this one: https://github.com/AI-metrics/AI-metrics/blob/008993c84188094ba804882f65815c7e1cfc4d0e/AI-progress-metrics.ipynb

In [54]:
# hiddencode

import json

def export_json(indent=False, default_name="export-api/v01/progress.json"):
    """Export all the data in here to a JSON file! Default name: export-api/v01/progress.json."""
    output = {'problems': []}
    for problem in problems.values():
        problem_data = {}
        for problem_attr in problem.__dict__:
            if problem_attr in ['subproblems', 'superproblems']:
                # Serialize related problems by name rather than as object references
                problem_data[problem_attr] = [p.name for p in getattr(problem, problem_attr)]
            elif problem_attr == 'metrics':
                problem_data['metrics'] = []
                for metric in problem.metrics:
                    metric_data = {'measures': []}
                    for metric_attr in metric.__dict__:
                        if metric_attr == 'scale':
                            # Scales aren't JSON-serializable; record their axis label instead
                            metric_data[metric_attr] = getattr(metric, metric_attr).axis_label
                        elif metric_attr == 'measures':
                            for measure in metric.measures:
                                measure_data = {attr: getattr(measure, attr) for attr in measure.__dict__}
                                metric_data['measures'].append(measure_data)
                        else:
                            metric_data[metric_attr] = getattr(metric, metric_attr)
                    problem_data['metrics'].append(metric_data)
            else:
                problem_data[problem_attr] = getattr(problem, problem_attr)
        output['problems'].append(problem_data)
    with open(default_name, 'w') as f:
        f.write(json.dumps(output, default=str, indent=4 if indent else None))

export_json(indent=True)
In [55]:
%%javascript
// # hiddencode
// Reveal the "edit locally" links when this Notebook is served from a local Jupyter instance
if (document.location.hostname == "localhost" && document.location.port !== "") {
    $(".local-edit").show(500);
}