Statistical Learning

Note on S4: Efficiently Modeling Long Sequences with Structured State Spaces

Motivation The S4 sequence model is on the rise in the sequence-modelling field. It dominates RNNs, LSTMs, and Transformers on long-sequence modelling. It’s both mathematically elegant and useful, and it’s trending, so why not write about it.

Jul 17, 2022

Note on Equivalent Score Matching Objective

Motivation Consider a distribution $p(x)$; we can “convolve” it with a kernel $p(\tilde{x}\mid x)=q(\tilde{x}-x)$. The marginal distribution of $\tilde{x}$ is denoted $p_\sigma(\tilde{x})$. We want to model the score of this convolved distribution, $\nabla\log p_\sigma(\tilde{x})$, as well as that of the original distribution.
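Spelled out, with $q$ the noise density (presumably Gaussian with scale $\sigma$, given the subscript):

$$
p_\sigma(\tilde{x})=\int q(\tilde{x}-x)\,p(x)\,dx,
\qquad
\nabla_{\tilde{x}}\log p_\sigma(\tilde{x})
=\mathbb{E}_{p(x\mid \tilde{x})}\!\left[\nabla_{\tilde{x}}\log q(\tilde{x}-x)\right].
$$

The conditional-expectation form on the right is the standard route to an equivalent denoising-style objective.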

Jun 20, 2022

Note on EM algorithm and likelihood lower bound

Motivation How can we understand the EM algorithm from a theoretical perspective? This post interprets EM as alternating ascent on a lower bound of the likelihood. The Key Trick of EM The key trick to remember is the use of Jensen’s inequality on the logarithm, which lets us swap the expectation and the logarithm to obtain a lower bound on the likelihood. In general, given a positive function $q(z)$ that sums to $1$ (a probability density), we have the following inequality,
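presumably the standard Jensen bound, writing $z$ for the latent variable and $p(x,z)$ for the joint:

$$
\log p(x)=\log \sum_z q(z)\,\frac{p(x,z)}{q(z)}\;\ge\;\sum_z q(z)\log\frac{p(x,z)}{q(z)}.
$$

Equality holds when $q(z)=p(z\mid x)$, which is exactly what the E-step chooses.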

May 13, 2022

Note on Diffusion Generative Models

Motivation Recently, a line of research on generative image models has emerged: diffusion models, which have shown performance competitive with GANs [^1]. More recently, larger-scale versions gave rise to the groundbreaking model DALL-E 2 and its precursor GLIDE.

May 9, 2022

Probabilistic Models of Neural Population

Motivation Here we summarize a few common probabilistic models of neural populations. Adapted from reading notes and class presentations for Neuro QC316, taught by Jan Drugowitsch. LNP, GLM These are the simplest models of neurons.
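As a concrete illustration, here is a minimal LNP simulation in Python; the filter, the exponential nonlinearity, and the sizes are all made-up choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear-Nonlinear-Poisson (LNP) sketch: filter -> nonlinearity -> Poisson spikes.
T, D = 1000, 20                            # time bins, stimulus dimensionality
stimulus = rng.standard_normal((T, D))
k = rng.standard_normal(D) / np.sqrt(D)    # hypothetical linear receptive field

drive = stimulus @ k                       # L: project stimulus onto the filter
rate = np.exp(drive - 1.0)                 # N: pointwise nonlinearity -> firing rate
spikes = rng.poisson(rate)                 # P: Poisson spike count in each bin
```

A GLM generalizes this by letting the drive include spike-history and coupling terms while keeping the same likelihood structure.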

Apr 16, 2022

Note on Kernel PCA

Motivation Simply put, the “kernel trick” is the observation that, in some algorithms, the data enter only through inner products. Because of this, we can substitute the inner product with some fancier kernel function, i.e. an inner product in some other space. This post is about one usage of the kernel trick; another usage is Kernel (ridge) Regression, covered in a separate note.
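A minimal sketch of that substitution, using an RBF kernel as the “fancier” inner product (the data and bandwidth are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))                  # toy data

# RBF kernel matrix: inner products in an implicit feature space.
gamma = 0.5                                        # illustrative bandwidth
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq_dists)

# Center in feature space, using only the Gram matrix.
n = len(X)
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one

# Principal components in feature space = top eigenvectors of the centered K.
eigvals, eigvecs = np.linalg.eigh(Kc)              # ascending order
idx = np.argsort(eigvals)[::-1][:2]                # top two components
scores = eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0))
```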

Mar 21, 2022

Note on Kernel Ridge Regression

Motivation Understand the use of kernels in regression problems. For their usage in unsupervised learning / dimensionality reduction, see the note on Kernel PCA. Kernel in Classification Kernels are usually introduced in SVM classification problems. The rationale is that a linearly non-separable dataset can become separable in a high-dimensional feature space via a mapping $\phi:\mathcal X\to\mathcal F$.
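A minimal kernel ridge regression sketch in the same spirit (RBF kernel; data and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))               # toy 1-D inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

lam = 1e-2                                         # illustrative ridge strength
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual coefficients

X_test = np.linspace(-3, 3, 200)[:, None]
y_pred = rbf(X_test, X) @ alpha                    # f(x*) = sum_i alpha_i k(x*, x_i)
```

Note that the model never touches $\phi$ explicitly; everything goes through the kernel.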

Dec 17, 2021

Note on Modern Hopfield Network and Transformers

Motivation There has been a resurgence of interest in investigating and developing Hopfield networks in recent years. This development is quite exciting in that it connects classic models in physics and machine learning to modern techniques like Transformers.

Nov 15, 2021

Note on Hopfield Network

Rationale The Hopfield network can be viewed as an energy-based model: all of its properties can be derived from the energy function. A general RNN has many complex behaviors, but making the connections symmetric prohibits them! No oscillation is possible with a symmetric weight matrix.
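A minimal sketch of why symmetry matters: with symmetric weights and zero diagonal, each asynchronous update cannot increase the energy $E(s)=-\tfrac{1}{2}s^\top W s$, so the dynamics settle into fixed points rather than oscillate (sizes and weights below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 32
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                      # symmetric connections
np.fill_diagonal(W, 0)                 # no self-connections

def energy(s, W):
    return -0.5 * s @ W @ s

s = rng.choice([-1, 1], size=n)
e = energy(s, W)
for _ in range(5 * n):                 # asynchronous updates
    i = rng.integers(n)
    s[i] = 1 if W[i] @ s >= 0 else -1  # align unit i with its local field
    assert energy(s, W) <= e + 1e-9    # the energy never goes up
    e = energy(s, W)
```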

Nov 15, 2021

Note on Word2Vec

Motivation Word2Vec is a very famous method that I have heard about since my freshman year in college (yeah, it came out in 2013). Recently, a reviewer reminded us of the similarity between the “analogies” learnt by the vector representations of words and the vector analogies in the image space of GANs or VAEs.

Nov 27, 2020