Note on Computer Vision Lecture Notes from CS559. TOC {:toc} Lec01: Image Formation In principle, digital images are formed by measuring energy (counting photons) over an array. But several pre-processing steps make the result interesting and relevant to later processing.
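As a rough illustration (a minimal sketch, not taken from the lecture), photon counting at each pixel can be modeled as Poisson noise around an idealized irradiance map; the scene values and noise levels below are made-up assumptions:

```python
import numpy as np

# Minimal sketch: image formation as photon counting over a pixel array.
# `irradiance` is a hypothetical scene in expected photons per pixel per exposure.
rng = np.random.default_rng(0)

irradiance = np.full((64, 64), 100.0)               # flat background, ~100 photons/pixel
irradiance[16:48, 16:48] = 400.0                    # a brighter square patch

photon_counts = rng.poisson(irradiance)             # shot noise from counting photons
read_noise = rng.normal(0.0, 2.0, irradiance.shape) # assumed sensor read noise
raw_image = np.clip(photon_counts + read_noise, 0, None)

# One typical pre-processing step: normalize to [0, 1] before further processing.
image = raw_image / raw_image.max()
print(image.shape, image.min(), image.max())
```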
Note on Behavioral Study History of Behavioral Science Two historical trends of study combine into the modern study of comparative cognition / behavioral science. Comparative Psychology More in the context of psychology: Anthropocentric
Note on Network Communication TOC {:toc} General Introduction A network connects devices to transfer data / information. LAN and WAN LAN: a localized network of machines connected in the same area. WAN: wide area; the Internet is the largest WAN! The two types are less distinct now; they have been blurred by cellular technology and wireless networks.
To see or not to see, that is the question! If asked what you are seeing right now, you could answer without thinking: the screen in front of me, of course. Someone with a bit of training in physiology or neuroscience might instead say: what we see is the image that falls on our retina after radiation in the optical band of this electromagnetic world is refracted by the optical elements of the eye. Yet we do not always see what lands on our retina. With a little reflection we can all recall staring out the window on a summer afternoon, lost in thought, without actually seeing the scenery outside; zoning out toward the lectern without registering what was written on the blackboard; a baseball rocketing off the bat likewise sweeps across the retina, but whether it produces a percept only the batter knows. In short, just as "see" (perceiving) differs from "look" (the act of looking) in English, what we see is by no means all the information carried by the photons landing on our retina1: you cannot see invisible light, nor can you see polarization. So what determines what we can see and what the world we see looks like? Off the top of our heads we might say that when we look attentively we see (the blackboard or the window, say), and when we do not, the stimulus is simply neglected (Neglect). That is indeed true, and it leads us into the territory of visual attention (Visual Attention). In this post, however, we will set attention aside and instead look at some more basic situations related to visual consciousness. When we do not see First, let us call the natural environment and the photons landing on the retina physical stimuli, and the world we experience perception2. Cases where stimulus and perception do not fully match are in fact everywhere; here are a few examples:
TOC {:toc} Problem Setting The original problem of non-negative matrix factorization is simple: if the dissimilarity $D(A\|HW)$ between the original matrix and the reconstructed one is the squared L2 (Frobenius) distance, then $$ \operatorname*{argmin}_{H,W} \|A-HW\|_F^2, \\ s.t.\ W\geq0,\ H\geq0 $$The non-negative constraints apply element-wise.
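One standard way to solve this objective is Lee & Seung's multiplicative update rule, which keeps $H$ and $W$ non-negative by construction. The sketch below is a generic illustration of that rule (function names and toy data are my own, not from this note):

```python
import numpy as np

def nmf_multiplicative(A, k, n_iter=200, eps=1e-9, seed=0):
    """Sketch of Lee & Seung multiplicative updates for min ||A - HW||_F^2
    with H >= 0, W >= 0 element-wise (A: m x n, H: m x k, W: k x n)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    H = rng.random((m, k))
    W = rng.random((k, n))
    for _ in range(n_iter):
        # Multiplicative updates: entries stay non-negative since all factors are non-negative.
        W *= (H.T @ A) / (H.T @ H @ W + eps)
        H *= (A @ W.T) / (H @ W @ W.T + eps)
    return H, W

# Toy usage: factor a random non-negative matrix and check the residual.
A = np.abs(np.random.default_rng(1).random((20, 30)))
H, W = nmf_multiplicative(A, k=5)
print(np.linalg.norm(A - H @ W))
```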
Note on Selective Attention From Christofer Koch 2013 Lecture Visual Attention and Consciousness Goldstein, Chapter Attention In natural language, attention refers to a family of abilities: Vigilance / overall attention Selective attention: processing something at the cost of other things Distributed attention Automaticity (action / perception tasks that do not take capacity) Selective attention is different from general attention (arousal).
Note on Computation by Biologically Plausible Learning From a lecture by Cengiz Penleven, 2019 Philosophy Neural dynamics can be a substrate of computation. Neural dynamics and plasticity dynamics can both perform optimization, and biological constraints are a source of constraints on the variables.
TOC {:toc} Constrained CMA-ES Algorithm Target CMA-ES was originally designed for unconstrained optimization. To adapt it to constrained optimization we have to handle the boundary in some way. So how could it handle this geometric boundary?
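One common, generic strategy is to repair each sampled candidate back into the feasible box and add a penalty proportional to the repair distance, so the optimizer can treat the problem as unconstrained. The sketch below shows that repair-plus-penalty idea only; it is my own illustration with assumed names and weights, not necessarily the specific constrained CMA-ES variant this note goes on to derive:

```python
import numpy as np

def boundary_penalized(f, lower, upper, weight=1e3):
    """Sketch: wrap an objective with box-constraint repair plus a quadratic penalty.

    Any black-box optimizer (e.g. CMA-ES) can then minimize the wrapped objective
    as if it were unconstrained: infeasible samples are projected onto the box and
    charged for how far they had to be moved.
    """
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)

    def wrapped(x):
        x = np.asarray(x, float)
        x_feasible = np.clip(x, lower, upper)            # repair: project onto the box
        penalty = weight * np.sum((x - x_feasible) ** 2) # charge for the repair distance
        return f(x_feasible) + penalty

    return wrapped

# Usage sketch: a sphere objective whose unconstrained optimum lies outside the unit box.
sphere = lambda x: float(np.sum((x - 2.0) ** 2))
obj = boundary_penalized(sphere, lower=[-1, -1], upper=[1, 1])
print(obj(np.array([1.5, 0.0])))  # infeasible point is repaired and penalized
```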
TOC {:toc} I have recently been reading1, and these are my notes. Objective of Algorithm The Belief Propagation algorithm aims to estimate marginal probabilities in graphical models such as Markov random fields and Bayes networks, and to find the most probable states. This general algorithm goes by many names, such as sum-product, max-product, min-sum, and Message Passing, and it belongs to the broader family of Message Passing algorithms. At the same time, the algorithm can be seen as a general framework or philosophy, so in models with different structures it has many famous special cases, each with its own name (e.g. the forward-backward algorithm, the Kalman Filter, etc.). For statistical learning problems we usually distinguish between models and algorithms: a model makes some assumptions, abstracts some aspect of reality, and sets up the structure of the problem, while an algorithm solves the problem (often by recasting it as an optimization problem). The Belief Propagation algorithm introduced in this post belongs to the latter, but to understand it we first need to understand the model it operates on, namely probabilistic graphical models. Graphical Models: What relates graphs to probability? Anyone encountering probabilistic graphical models for the first time (like me) will ask: what do probability and graphs have to do with each other? We know that a graph is an intuitive way of representing pairwise relationships between objects, usually consisting of vertices and edges $(\mathcal V, \mathcal E)$. In a probabilistic graphical model, vertices usually represent random variables, and edges represent relationships between random variables.
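As a small, concrete taste of the sum-product idea on the simplest structure, the sketch below runs belief propagation on a chain-shaped pairwise MRF, where passing messages forward and backward recovers the exact marginals; the potentials and variable names are hypothetical toy choices, not from the reading:

```python
import numpy as np

def chain_bp_marginals(unary, pairwise):
    """Sum-product belief propagation on a chain-shaped pairwise MRF.

    unary:    list of n length-k arrays, unary[i][s] = phi_i(x_i = s)
    pairwise: (k, k) array, pairwise[s, t] = psi(x_i = s, x_{i+1} = t),
              assumed identical on every edge in this toy example
    Returns the exact marginal p(x_i) for every node.
    """
    n, k = len(unary), len(unary[0])

    # Forward messages: fwd[i] is the message sent from node i-1 into node i.
    fwd = [np.ones(k) for _ in range(n)]
    for i in range(1, n):
        fwd[i] = pairwise.T @ (unary[i - 1] * fwd[i - 1])

    # Backward messages: bwd[i] is the message sent from node i+1 into node i.
    bwd = [np.ones(k) for _ in range(n)]
    for i in range(n - 2, -1, -1):
        bwd[i] = pairwise @ (unary[i + 1] * bwd[i + 1])

    # Beliefs: local evidence times incoming messages, normalized.
    return [(b := unary[i] * fwd[i] * bwd[i]) / b.sum() for i in range(n)]

# Toy usage: 4 binary nodes with a pairwise potential favoring equal neighbors.
unary = [np.array([0.9, 0.1]), np.array([0.5, 0.5]),
         np.array([0.5, 0.5]), np.array([0.2, 0.8])]
pairwise = np.array([[2.0, 1.0], [1.0, 2.0]])
for i, m in enumerate(chain_bp_marginals(unary, pairwise)):
    print(f"p(x_{i}) =", m)
```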
Multi-Indexing Just like in Excel, you can have multi-index columns for a table. Level and Columns reset_index transforms existing indices into columns set_index transforms columns into indices set_index(..., append=True) adds the column as a new index level; with append=False (the default) it discards all existing levels and keeps only the new one. The naming is really intuitive: set_index vs reset_index
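A small pandas example of the round trip described above (the column names and values are made up for illustration):

```python
import pandas as pd

# Toy frame to demonstrate set_index / reset_index with hypothetical columns.
df = pd.DataFrame({
    "year": [2020, 2020, 2021, 2021],
    "city": ["NYC", "LA", "NYC", "LA"],
    "sales": [10, 20, 30, 40],
})

indexed = df.set_index("year")                   # 'year' column becomes the index
multi = indexed.set_index("city", append=True)   # append=True: add 'city' as a second level
replaced = indexed.set_index("city")             # append=False (default): drop 'year', keep only 'city'

flat = multi.reset_index()                       # move all index levels back to columns
print(multi.index.names)       # ['year', 'city']
print(flat.columns.tolist())   # ['year', 'city', 'sales']
```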