Density Estimation

420 papers with code • 14 benchmarks • 14 datasets

The goal of Density Estimation is to accurately model the underlying probability density function of an observed data set whose density is unknown.

Source: Contrastive Predictive Coding Based Feature for Automatic Speaker Verification
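
As a concrete illustration of the task, a kernel density estimator is one of the simplest ways to recover a density from samples. The sketch below is plain Python with an arbitrary bandwidth, chosen for illustration only; it is not tied to any of the papers listed here.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a function estimating the density that generated `samples`.

    A minimal kernel density estimator: the estimate at x is the average
    of Gaussian kernels centred on each observed sample.
    """
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))

    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# Estimate the density of a small observed data set (made-up numbers).
data = [-1.2, -0.9, 0.1, 0.3, 1.1]
p = gaussian_kde(data, bandwidth=0.5)

# The estimate is a valid density: non-negative, and its integral over a
# wide grid is close to 1 (approximated here by a Riemann sum).
grid = [i * 0.01 for i in range(-1000, 1000)]
mass = sum(p(x) * 0.01 for x in grid)
```

The bandwidth controls the bias–variance trade-off of the estimate; the neural models below replace this fixed-form smoother with learned, far more expressive density families.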

Most implemented papers

Denoising Diffusion Probabilistic Models

hojonathanho/diffusion NeurIPS 2020

We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics.
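
The forward (noising) half of a diffusion probabilistic model has a simple closed form: x_t = sqrt(ᾱ_t)·x_0 + sqrt(1−ᾱ_t)·ε. A minimal numpy sketch assuming the linear beta schedule described in the DDPM paper (array and function names here are my own, not the repository's API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule: beta from 1e-4 to 0.02 over 1000 steps, as in DDPM.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)   # cumulative product of (1 - beta_t)

def q_sample(x0, t, eps):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = rng.normal(size=(8,))
eps = rng.normal(size=(8,))
x_early = q_sample(x0, 10, eps)      # early step: still close to the data
x_late = q_sample(x0, T - 1, eps)    # final step: essentially pure noise
```

Training then amounts to predicting ε from x_t, and sampling runs this corruption process in reverse.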

Density estimation using Real NVP

tensorflow/models 27 May 2016

Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning.
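
Real NVP builds its bijection from affine coupling layers: half the dimensions pass through unchanged and parameterise a scale and shift on the other half, so the Jacobian is triangular and its log-determinant is cheap. A minimal numpy sketch, with a tanh stand-in for the learned scale/shift networks (all parameters are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def coupling_forward(x, w, b):
    """One affine coupling layer (Real NVP style, illustrative parameters).

    The first half of the dimensions passes through unchanged and
    parameterises a scale and shift applied to the second half, so the
    Jacobian is triangular and its log-determinant is just sum(s).
    """
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s = np.tanh(x1 @ w + b)          # log-scale; stand-in for a neural net
    t = x1 @ w                       # shift; same stand-in
    y2 = x2 * np.exp(s) + t
    log_det = s.sum(axis=-1)         # exact log |det Jacobian|
    return np.concatenate([x1, y2], axis=-1), log_det

def coupling_inverse(y, w, b):
    """Exact inverse: recompute s, t from the untouched half and undo."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s = np.tanh(y1 @ w + b)
    t = y1 @ w
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)

w = rng.normal(size=(2, 2))
b = rng.normal(size=2)
x = rng.normal(size=(5, 4))
y, log_det = coupling_forward(x, w, b)
x_rec = coupling_inverse(y, w, b)    # round trip recovers x exactly
```

Stacking such layers (alternating which half is transformed) yields an expressive, exactly invertible model with a tractable log-likelihood.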

Glow: Generative Flow with Invertible 1x1 Convolutions

openai/glow NeurIPS 2018

Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis.
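
Glow's invertible 1x1 convolution is a per-pixel mixing of channels by a shared c-by-c matrix, contributing h·w·log|det W| to the change-of-variables log-likelihood. A hedged numpy sketch (shapes and names are my assumptions, not the openai/glow API):

```python
import numpy as np

rng = np.random.default_rng(1)

def invertible_1x1_conv(x, w):
    """Invertible 1x1 convolution (Glow style, illustrative).

    x has shape (h, w_, c); the 1x1 convolution multiplies every pixel's
    channel vector by the same c-by-c matrix W, and contributes
    h * w_ * log|det W| to the flow's log-likelihood.
    """
    h, w_, c = x.shape
    y = x @ w.T                      # per-pixel channel mixing
    log_det = h * w_ * np.log(abs(np.linalg.det(w)))
    return y, log_det

c = 3
w = rng.normal(size=(c, c))          # a random (almost surely invertible) W
x = rng.normal(size=(4, 4, c))
y, log_det = invertible_1x1_conv(x, w)
x_rec = y @ np.linalg.inv(w).T       # exact inverse: multiply by W^{-1}
```

The paper additionally parameterises W via an LU decomposition so the determinant costs O(c) rather than O(c^3); the dense form above is the conceptually simplest version.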

Importance Weighted Autoencoders

AntixK/PyTorch-VAE 1 Sep 2015

The variational autoencoder (VAE; Kingma & Welling, 2014) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network that approximates posterior inference.
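
The importance-weighted bound L_k = E[log (1/k) Σ_i p(x, z_i)/q(z_i|x)] recovers the ELBO at k = 1 and tightens as k grows. A toy numpy check on a conjugate Gaussian model where log p(x) is known in closed form (the model and the deliberately crude proposal are my own illustration, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_normal(x, mean, var):
    """Log-density of a univariate Gaussian."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

# Toy model with a closed-form marginal likelihood:
# z ~ N(0, 1), x | z ~ N(z, 1)  =>  p(x) = N(x; 0, 2).
x = 1.5
true_log_px = log_normal(x, 0.0, 2.0)

def iwae_bound(k, n_runs=2000):
    """Monte Carlo estimate of the k-sample importance-weighted bound.

    With proposal q(z | x) = N(x / 2, 1) (a deliberately crude encoder),
    L_k = E[log mean_i w_i] where w_i = p(x, z_i) / q(z_i | x).
    """
    z = rng.normal(loc=x / 2, scale=1.0, size=(n_runs, k))
    log_w = (log_normal(z, 0.0, 1.0)        # prior      p(z)
             + log_normal(x, z, 1.0)        # likelihood p(x | z)
             - log_normal(z, x / 2, 1.0))   # proposal   q(z | x)
    m = log_w.max(axis=1, keepdims=True)    # stable log-mean-exp
    return float((m.squeeze() + np.log(np.exp(log_w - m).mean(axis=1))).mean())

elbo = iwae_bound(1)    # k = 1 recovers the standard ELBO
l5 = iwae_bound(5)      # k = 5 gives a strictly tighter lower bound
```

Both estimates sit below the true log-likelihood, and the 5-sample bound sits closer to it, which is the paper's central observation.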

Masked Autoregressive Flow for Density Estimation

gpapamak/maf NeurIPS 2017

By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow.
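
Each MAF layer shifts and scales x_i by functions of x_{<i}, so density evaluation is one parallel pass while sampling inverts dimension by dimension. A 2-D sketch with hand-picked linear conditioners standing in for the MADE networks the paper uses:

```python
import numpy as np

def mu_alpha(x1):
    """Mean and log-scale for x2 given x1 (illustrative linear conditioners;
    in the paper these are outputs of a MADE network)."""
    return 0.5 * x1, 0.3 * x1

def maf_forward(x):
    """Map data x to base noise u; parallel, one pass (density direction)."""
    x1, x2 = x[..., 0], x[..., 1]
    mu, alpha = mu_alpha(x1)
    u1 = x1                          # first dimension has no predecessors
    u2 = (x2 - mu) * np.exp(-alpha)
    log_det = -alpha                 # log |det du/dx|, triangular Jacobian
    return np.stack([u1, u2], axis=-1), log_det

def maf_inverse(u):
    """Map noise u to data x; sequential over dimensions (sampling)."""
    u1, u2 = u[..., 0], u[..., 1]
    x1 = u1
    mu, alpha = mu_alpha(x1)
    x2 = u2 * np.exp(alpha) + mu
    return np.stack([x1, x2], axis=-1)

def log_prob(x):
    """log p(x) = log N(u; 0, I) + log |det du/dx| (change of variables)."""
    u, log_det = maf_forward(x)
    base = -0.5 * (u ** 2).sum(axis=-1) - np.log(2 * np.pi)
    return base + log_det

rng = np.random.default_rng(3)
x = rng.normal(size=(10, 2))
u, _ = maf_forward(x)
x_rec = maf_inverse(u)               # round trip recovers x exactly

# The flow defines a normalised density: a Riemann sum of exp(log_prob)
# over a wide 2-D grid is close to 1.
g = np.linspace(-6.0, 6.0, 241)
xx, yy = np.meshgrid(g, g)
pts = np.stack([xx, yy], axis=-1)
dx = g[1] - g[0]
mass = float(np.exp(log_prob(pts)).sum() * dx * dx)
```

The asymmetry above is the paper's key trade-off: MAF evaluates densities in one pass but samples sequentially, the reverse of inverse autoregressive flow.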

MADE: Masked Autoencoder for Distribution Estimation

mgermain/MADE 12 Feb 2015

There has been a lot of recent interest in designing neural network models to estimate a distribution from a set of examples.
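
MADE's key mechanism is degree-based masking of an autoencoder's weights so that output i depends only on inputs before i, making the outputs valid autoregressive conditionals. A small sketch of the mask construction (the degree assignment is simplified from the paper, which also supports deeper nets and order permutations):

```python
import numpy as np

def made_masks(d, hidden):
    """Build masks for a one-hidden-layer MADE over d inputs (illustrative).

    Each hidden unit gets a degree m in {1, ..., d-1}; the input mask allows
    a connection when m(hidden) >= m(input), the output mask when
    m(output) > m(hidden). Output i then depends only on inputs 1..i-1.
    """
    m_in = np.arange(1, d + 1)                  # input degrees 1..d
    rng = np.random.default_rng(0)
    m_h = rng.integers(1, d, size=hidden)       # hidden degrees 1..d-1
    mask_in = (m_h[:, None] >= m_in[None, :]).astype(int)   # hidden x d
    mask_out = (m_in[:, None] > m_h[None, :]).astype(int)   # d x hidden
    return mask_in, mask_out

d, hidden = 4, 8
mask_in, mask_out = made_masks(d, hidden)

# Connectivity from input j to output i through the masked layers is given
# by the nonzero entries of mask_out @ mask_in; the autoregressive property
# requires this matrix to be strictly lower triangular.
conn = mask_out @ mask_in
```

Because the masks are applied elementwise to ordinary weight matrices, a single forward pass yields all d conditionals at once, which is what makes MADE usable as the conditioner inside MAF above.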

Conditional Image Generation with PixelCNN Decoders

openai/pixel-cnn NeurIPS 2016

This work explores conditional image generation with a new image density model based on the PixelCNN architecture.

Progressive Distillation for Fast Sampling of Diffusion Models

google-research/google-research ICLR 2022

We present a method to distill a trained deterministic diffusion sampler that uses many steps into a new diffusion model taking half as many sampling steps.

Score-Based Generative Modeling through Stochastic Differential Equations

yang-song/score_sde ICLR 2021

Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high-fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.

PointConv: Deep Convolutional Networks on 3D Point Clouds

DylanWusee/pointconv CVPR 2019

In addition, our experiments converting CIFAR-10 into a point cloud show that networks built on PointConv can match the performance of convolutional networks on 2D images of similar structure.