Step 1 – Choose input & presets
Load an image (scaled to the 227×227 AlexNet input size), then explore the layers and the math.
What is AlexNet?
- **Paper:** Krizhevsky, Sutskever, and Hinton (2012), "ImageNet Classification with Deep Convolutional Neural Networks"
- **Key ideas:** large kernels early (11×11, stride 4), ReLU nonlinearity, Local Response Normalization (LRN), 3×3 stride-2 max pooling, a deep convolutional stack, big fully connected layers, and dropout in the FC layers
- **Input:** a 227×227×3 image (the paper says 224×224, but 227 makes the Conv1 arithmetic work out)
- **Output:** 1000-way ImageNet classification
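For reference, here is a minimal sketch of the paper-style architecture. PyTorch is our assumption (the page itself ships no code), and we use the single-GPU layout, whereas the 2012 model split channels across two GPUs:

```python
import torch
import torch.nn as nn

# Paper-style AlexNet: Conv1 11x11 s4, LRN, 3x3 s2 max pooling,
# five conv layers, three FC layers with dropout.
class AlexNet(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4),    # Conv1: 227 -> 55
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),          # Pool1: 55 -> 27
            nn.Conv2d(96, 256, kernel_size=5, padding=2),   # Conv2: 27 -> 27
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),          # Pool2: 27 -> 13
            nn.Conv2d(256, 384, kernel_size=3, padding=1),  # Conv3
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),  # Conv4
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),  # Conv5
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),          # Pool5: 13 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = AlexNet()
print(model(torch.randn(1, 3, 227, 227)).shape)  # torch.Size([1, 1000])
```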
Step 2 – Shapes, Params, FLOPs, Receptive Field
These are computed live from the classic AlexNet hyperparameters; a sketch of the arithmetic appears after the table.
| Layer | Kernel / Stride / Pad | In (H×W×C) | Out (H×W×C) | Params | FLOPs (≈) | RF |
|---|---|---|---|---|---|---|
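Each column follows a simple recurrence: spatial size is $H_{out} = \lfloor (H_{in} + 2p - k)/s \rfloor + 1$; conv params are $(k^2 C_{in} + 1) C_{out}$ (the $+1$ is the bias); FLOPs $\approx 2 k^2 C_{in} C_{out} H_{out} W_{out}$ (one multiply and one add per weight per output position); and the receptive field grows as $RF_l = RF_{l-1} + (k-1) j_{l-1}$ with jump $j_l = j_{l-1} s$. A plain-Python sketch (the `Layer` and `LAYERS` names are ours, not the page's):

```python
from dataclasses import dataclass
from math import floor

@dataclass
class Layer:
    name: str
    k: int      # kernel size (square)
    s: int      # stride
    p: int      # padding
    c_out: int  # output channels (unchanged across pooling)
    conv: bool  # True for conv, False for max-pool

# Classic AlexNet feature extractor on a 227x227x3 input.
LAYERS = [
    Layer("Conv1", 11, 4, 0,  96, True),
    Layer("Pool1",  3, 2, 0,  96, False),
    Layer("Conv2",  5, 1, 2, 256, True),
    Layer("Pool2",  3, 2, 0, 256, False),
    Layer("Conv3",  3, 1, 1, 384, True),
    Layer("Conv4",  3, 1, 1, 384, True),
    Layer("Conv5",  3, 1, 1, 256, True),
    Layer("Pool5",  3, 2, 0, 256, False),
]

h, c_in = 227, 3
rf, jump = 1, 1  # receptive field and effective stride seen at the input
for L in LAYERS:
    h = floor((h + 2 * L.p - L.k) / L.s) + 1   # output spatial size
    rf += (L.k - 1) * jump                     # RF grows by (k-1) * jump
    jump *= L.s
    params = (L.k * L.k * c_in + 1) * L.c_out if L.conv else 0
    flops = 2 * L.k * L.k * c_in * L.c_out * h * h if L.conv else 0
    print(f"{L.name}: out {h}x{h}x{L.c_out}, params {params:,}, "
          f"FLOPs ~{flops:,}, RF {rf}")
    c_in = L.c_out
```

Running it reproduces the familiar numbers, e.g. Conv1 gives 55×55×96 with (11·11·3+1)·96 = 34,944 params and RF 11, and the receptive field reaches 195 pixels after Pool5.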
Step 3 – Toy forward pass (Conv1 → ReLU → Pool1)
For speed, we apply a small set of handcrafted Conv1-like filters to illustrate the feature maps; a sketch of the pipeline follows.
Note: the real AlexNet Conv1 uses 96 learned 11×11×3 filters at stride 4; we show only a tiny handcrafted subset on a downscaled buffer.
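A NumPy sketch of the same toy pipeline, under our own assumptions: a random stand-in image and two hand-built edge filters, with `conv2d` and `maxpool` as hypothetical helpers rather than the page's API:

```python
import numpy as np

def conv2d(img, kernel, stride):
    """Valid cross-correlation of an HxWx3 image with a kxkx3 kernel."""
    k = kernel.shape[0]
    H = (img.shape[0] - k) // stride + 1
    W = (img.shape[1] - k) // stride + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = img[i*stride:i*stride+k, j*stride:j*stride+k, :]
            out[i, j] = np.sum(patch * kernel)
    return out

def maxpool(x, k=3, s=2):
    """3x3 stride-2 max pooling, as in AlexNet's Pool1."""
    H = (x.shape[0] - k) // s + 1
    W = (x.shape[1] - k) // s + 1
    return np.array([[x[i*s:i*s+k, j*s:j*s+k].max() for j in range(W)]
                     for i in range(H)])

rng = np.random.default_rng(0)
img = rng.random((227, 227, 3)).astype(np.float32)  # stand-in for the loaded image

# Two handcrafted 11x11x3 "Conv1-like" filters: vertical and horizontal edges.
vert = np.zeros((11, 11, 3))
vert[:, :5, :], vert[:, 6:, :] = -1.0, 1.0
horz = vert.transpose(1, 0, 2)

for name, f in [("vertical-edge", vert), ("horizontal-edge", horz)]:
    fmap = np.maximum(conv2d(img, f, stride=4), 0.0)  # Conv1 -> ReLU: 55x55
    pooled = maxpool(fmap)                            # Pool1: 27x27
    print(name, fmap.shape, "->", pooled.shape)
```

The shapes match the table above: a 227×227 input under an 11×11 stride-4 kernel gives 55×55, and 3×3 stride-2 pooling reduces that to 27×27.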