Tech Note

Note on Variants of Diffusion Schedulers: DDPM, DDIM, PNDM

Motivation Despite the popularity and power of diffusion models, their theoretical formulations are not unified. Because multiple groups have derived these models from different backgrounds, several formulations coexist: SDE, ODE, Markov chain, non-Markov chain, etc.

Jan 16, 2023

Note on HiPPO: Recurrent Memory with Optimal Polynomial Projection

Motivation The hidden state in an RNN represents a form of memory of the past. For a sequence, a natural way to represent its history is to project it onto an orthonormal basis. Depending on how we want to weigh the past, we can define different measures on the time axis and build the basis with respect to that measure. We can then keep track of the projection coefficients on this basis as new data points arrive.
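Roughly, in generic notation (which may differ from the note's), the memory is the vector of projection coefficients of the history $f$ under a time-dependent measure $\mu^{(t)}$ and orthonormal basis $\{g_n^{(t)}\}$, and HiPPO's result is that these coefficients can be updated online through a linear ODE:

$$
c_n(t)=\int f(s)\,g_n^{(t)}(s)\,d\mu^{(t)}(s),\qquad \frac{d}{dt}c(t)=A(t)\,c(t)+B(t)\,f(t),
$$

where the matrices $A(t)$ and $B(t)$ depend on the chosen measure.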

Jul 25, 2022

Note on S4: Efficiently Modeling Long Sequences with Structured State Spaces

Motivation The S4 model is rising in the sequence-modelling field. It dominates long-sequence modelling over RNNs, LSTMs, and Transformers. It is both mathematically elegant and useful, and it is trending, so why not write about it.

Jul 17, 2022

Note on Equivalent Score Matching Objective

Motivation Consider a distribution $p(x)$; we can "convolve" it with a kernel $p(\tilde{x}\mid x)=q(\tilde{x}-x)$. Denote the marginal distribution of $\tilde{x}$ by $p_\sigma(\tilde{x})$. We want to model the score of this convolved distribution, $\nabla\log p_\sigma(\tilde{x})$, and relate it to that of the original distribution.
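In this notation, the equivalence the title refers to is (as a sketch, up to a constant independent of the model $s_\theta$) the denoising score matching identity:

$$
\mathbb{E}_{p_\sigma(\tilde{x})}\!\left[\|s_\theta(\tilde{x})-\nabla_{\tilde{x}}\log p_\sigma(\tilde{x})\|^2\right]
=\mathbb{E}_{p(x)\,q(\tilde{x}-x)}\!\left[\|s_\theta(\tilde{x})-\nabla_{\tilde{x}}\log q(\tilde{x}-x)\|^2\right]+\text{const},
$$

so matching the conditional (denoising) score is equivalent to matching the marginal score of the convolved distribution.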

Jun 20, 2022

Note on EM algorithm and likelihood lower bound

Motivation How can we understand the EM algorithm from a theoretical perspective? This post interprets EM as a form of alternating ascent on a lower bound of the likelihood. The Key Trick of EM The key trick to remember is the use of Jensen's inequality on the logarithm, which lets us swap the expectation and the logarithm to obtain a lower bound on the likelihood. Generally, given a positive function $q(z)$ that sums to $1$ (a probability density), we have the following inequality,
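Concretely, this is the standard evidence lower bound (stated here for a discrete latent $z$; the note's symbols may differ):

$$
\log p(x)=\log\sum_z q(z)\,\frac{p(x,z)}{q(z)}\;\ge\;\sum_z q(z)\,\log\frac{p(x,z)}{q(z)},
$$

with equality when $q(z)=p(z\mid x)$, which is what the E-step chooses.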

May 13, 2022

Note on Diffusion Generative Models

Motivation Recently, a line of research emerged in generative image modelling: diffusion models, which showed performance competitive with GANs [^1]. More recently, larger-scale versions gave rise to the groundbreaking model DALL-E 2 and its precursor GLIDE.

May 9, 2022

Note on Kernel PCA

Motivation Simply put, the "kernel trick" is the observation that some algorithms can be formulated so that the data appear only through inner products. Because of this, we can substitute the inner product with a fancier kernel function, i.e., an inner product in some other space. This post covers one use of the kernel trick; another is kernel (ridge) regression.
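As a concrete instance of the substitution (with an illustrative kernel, not necessarily the one used in the note): wherever the algorithm needs $x_i^\top x_j$, we replace it with an entry of the Gram matrix,

$$
K_{ij}=k(x_i,x_j)=\langle\phi(x_i),\phi(x_j)\rangle,\qquad\text{e.g. }k(x,y)=\exp\!\left(-\frac{\|x-y\|^2}{2\sigma^2}\right),
$$

so the feature map $\phi$ never has to be computed explicitly.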

Mar 21, 2022

Note on Kernel Ridge Regression

Motivation Understand the use of kernels in regression problems. For their use in unsupervised learning / dimensionality reduction, see the note on Kernel PCA. Kernel in Classification Kernels are usually introduced in SVM classification problems. The rationale is that a linearly non-separable dataset can become separable in a high-dimensional feature space via a mapping $\phi:\mathcal X\to\mathcal F$.
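For the regression case the note develops, the standard kernelized closed form is (stated here in common notation; the note's symbols may differ):

$$
\hat{f}(x)=\sum_{i=1}^{n}\alpha_i\,k(x_i,x),\qquad \alpha=(K+\lambda I)^{-1}y,
$$

where $K_{ij}=k(x_i,x_j)$ is the Gram matrix over the training points and $\lambda$ is the ridge penalty.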

Dec 17, 2021

Note on Modern Hopfield Network and Transformers

Motivation There has been a resurgence of interest in investigating and developing Hopfield networks in recent years. This development is exciting in that it connects classic models in physics and machine learning to modern techniques like Transformers.

Nov 15, 2021

Note on Hopfield Network

Rationale The Hopfield network can be viewed as an energy-based model: all of its properties can be derived from the energy function. A general RNN has many complex behaviors, but imposing symmetric connections rules them out: no oscillation is possible with a symmetric weight matrix.
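In the usual binary formulation (a standard statement, with notation that may differ from the note's), the energy and asynchronous update are

$$
E(s)=-\tfrac{1}{2}\sum_{i\ne j}w_{ij}\,s_i s_j,\qquad s_i\leftarrow\operatorname{sign}\Big(\sum_j w_{ij}s_j\Big),
$$

and with symmetric weights $w_{ij}=w_{ji}$ (and $w_{ii}=0$) each asynchronous flip can only decrease $E$, so the dynamics settle into fixed points rather than oscillate.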

Nov 15, 2021