Table of Contents
* Feature maps
<https://www.cnblogs.com/nickchen121/p/10923353.html#feature-maps>
* Why not Linear
<https://www.cnblogs.com/nickchen121/p/10923353.html#why-not-linear>
* 335k or 1.3MB
<https://www.cnblogs.com/nickchen121/p/10923353.html#k-or-1.3mb>
* em... <https://www.cnblogs.com/nickchen121/p/10923353.html#em...>
* Receptive Field
<https://www.cnblogs.com/nickchen121/p/10923353.html#receptive-field>
* Fully connected
<https://www.cnblogs.com/nickchen121/p/10923353.html#fully-connnected>
* Partial connected
<https://www.cnblogs.com/nickchen121/p/10923353.html#partial-connected>
* Locally connected
<https://www.cnblogs.com/nickchen121/p/10923353.html#locally-connected>
* Rethink Linear layer
<https://www.cnblogs.com/nickchen121/p/10923353.html#rethink-linear-layer>
* Fully VS Locally
<https://www.cnblogs.com/nickchen121/p/10923353.html#fully-vs-lovally>
* Weight sharing
<https://www.cnblogs.com/nickchen121/p/10923353.html#weight-sharing>
* Why call Convolution?
<https://www.cnblogs.com/nickchen121/p/10923353.html#why-call-convolution>
* 2D Convolution
<https://www.cnblogs.com/nickchen121/p/10923353.html#d-convolution>
* Convolution in Computer Vision
<https://www.cnblogs.com/nickchen121/p/10923353.html#convolution-in-computer-vision>
* CNN on feature maps
<https://www.cnblogs.com/nickchen121/p/10923353.html#cnn-on-feature-maps>
Feature maps
* Single channel
* RGB three channels
* RGB three-channel composite
* Convolution feature map of the digit "2"
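The bullets above treat an image as a stack of channel planes. A minimal numpy sketch of the tensor shapes involved (array names are illustrative, not from the post):

```python
import numpy as np

gray = np.zeros((1, 28, 28))         # single channel, e.g. an MNIST digit "2"
rgb = np.zeros((3, 28, 28))          # three separate R, G, B planes
composite = rgb.transpose(1, 2, 0)   # merged H x W x 3 image for display

print(gray.shape, rgb.shape, composite.shape)
```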
Why not Linear
* 4 Layers: [784, 256, 256, 256, 10]
335k or 1.3MB
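The "335k or 1.3MB" in the heading follows directly from the layer sizes [784, 256, 256, 256, 10]: counting weights plus biases gives about 335k parameters, and at 4 bytes per float32 that is roughly 1.3 MB. A quick check:

```python
layers = [784, 256, 256, 256, 10]

# weights (a*b) plus biases (b) for each consecutive pair of layers
params = sum(a * b + b for a, b in zip(layers, layers[1:]))

print(params)                # 335114 -> the "335k"
print(params * 4 / 2**20)    # ~1.28 MiB at 4 bytes/float32 -> the "1.3MB"
```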
em...
* 486 PC + AT&T DSP32C
* 256KB
* 66 MHz
* Batch X
* Gradient Cache
* etc.
Receptive Field
Fully connected
Partial connected
Locally connected
Rethink Linear layer
Fully VS Locally
Weight sharing
* Convolution of a rank-3 tensor
* 6 Layers
* ~60k parameters
* 4 layers, 335k
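The parameter counts above contrast a shared-kernel conv net (~60k) with the 4-layer MLP (335k). Weight sharing is why the gap exists: one kernel slid across the image reuses the same few weights at every position. A rough sketch of the idea (numbers are illustrative, assuming a 28×28 input and a 3×3 kernel):

```python
k = 3
# One 3x3 kernel (+1 bias) is reused at every spatial position,
# so the layer's parameter count is independent of image size.
shared = k * k + 1

# A fully connected layer mapping the same 28x28 input to the same
# 26x26 "valid" output map would need a weight for every input-output pair.
full = (28 * 28) * (26 * 26)

print(shared, full)  # 10 vs 529984
```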
Why call Convolution?
2D Convolution
\[ y(t) = x(t)*h(t) = \int_{-\infty}^{\infty}x(\tau)h(t-\tau)d\tau \]
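The discrete 2D analogue of the integral above replaces \(\int d\tau\) with a sum over kernel offsets, with the kernel flipped just as \(h(t-\tau)\) is. A self-contained numpy sketch (valid mode, loop-based for clarity; the function name is my own):

```python
import numpy as np

def conv2d(x, h):
    """True 2D convolution, valid mode: sum of x-window times flipped kernel."""
    h = h[::-1, ::-1]  # convolution flips the kernel; cross-correlation skips this
    H, W = x.shape
    kh, kw = h.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * h)
    return out

x = np.arange(9, dtype=float).reshape(3, 3)
h = np.ones((2, 2))       # symmetric kernel: convolution == correlation here
print(conv2d(x, h))       # each output is the sum of a 2x2 window of x
```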
Convolution in Computer Vision
* Blurring
* Edge detection
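The two effects above come from the choice of kernel, not the convolution machinery itself. A sketch with two classic 3×3 kernels (kernels and helper are standard examples, not taken from the post):

```python
import numpy as np

box_blur = np.full((3, 3), 1 / 9)    # averages each pixel with its neighbours
edge = np.array([[-1., -1., -1.],
                 [-1.,  8., -1.],
                 [-1., -1., -1.]])   # responds to intensity changes only

def filter2d(img, k):
    """Apply kernel k to img (valid cross-correlation)."""
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

img = np.zeros((5, 5))
img[:, 2:] = 1.0                     # vertical step edge
print(filter2d(img, edge))           # nonzero only around the step, 0 in flat regions
```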
CNN on feature maps