Convolution Operation: The Foundation of CNNs
Interactive guide to convolution in CNNs: visualize sliding windows, kernels, stride, padding, and feature detection with step-by-step demos.
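To make the sliding-window idea concrete alongside the interactive demo, here is a minimal NumPy sketch of 2D convolution; the 5x5 input, Sobel-style kernel, stride, and padding values are illustrative assumptions rather than values from the guide.

```python
import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    """Naive 2D convolution (strictly cross-correlation, as in most DL libraries)."""
    if padding:
        image = np.pad(image, padding)
    kh, kw = kernel.shape
    ih, iw = image.shape
    # Output size follows (n - k + 2p) / s + 1 (padding already applied above).
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(window * kernel)   # elementwise multiply, then sum
    return out

image = np.arange(25, dtype=float).reshape(5, 5)                     # toy 5x5 input
sobel_x = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])    # edge-detecting kernel
print(conv2d(image, sobel_x, stride=1, padding=1).shape)             # (5, 5) with "same" padding
```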
Core concepts and building blocks of deep neural networks.
Understand cross-entropy loss for classification: interactive demos of binary and multi-class CE, the -log(p) curve, softmax gradients, and focal loss.
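A small worked example of the -log(p) behaviour described above, assuming a toy 3-class logit vector that is not taken from the demo:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                    # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits, target_idx):
    """Multi-class CE: -log of the probability assigned to the true class."""
    p = softmax(logits)
    return -np.log(p[target_idx])

logits = np.array([2.0, 0.5, -1.0])    # toy 3-class logits
print(cross_entropy(logits, 0))        # small loss: true class already likely
print(cross_entropy(logits, 2))        # large loss: -log(p) blows up as p -> 0

# Gradient of CE with respect to the logits is simply softmax(z) - one_hot(target).
grad = softmax(logits) - np.eye(3)[0]
print(grad)
```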
Understand dilated (atrous) convolutions: how dilation rates expand receptive fields exponentially without extra parameters and how to avoid gridding artifacts.
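The exponential receptive-field growth can be checked with a few lines of arithmetic; the kernel size of 3 and the dilation schedule 1, 2, 4, 8 below are assumed for illustration:

```python
# Effective kernel size of a dilated convolution: k_eff = k + (k - 1) * (d - 1).
# Stacking layers with dilation rates 1, 2, 4, ... grows the receptive field
# rapidly while the parameter count per layer stays at k*k.
def receptive_field(kernel=3, dilations=(1, 2, 4, 8)):
    rf = 1
    for d in dilations:
        k_eff = kernel + (kernel - 1) * (d - 1)
        rf += k_eff - 1            # each stride-1 layer adds k_eff - 1
    return rf

print(receptive_field())           # 31 pixels of context from four 3x3 layers
```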
Learn how Feature Pyramid Networks build multi-scale feature representations through top-down pathways and lateral connections for robust object detection.
Understand receptive fields in CNNs — how convolutional layers expand their field of view, the gap between theoretical and effective receptive fields, and strategies for controlling RF growth.
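A short sketch of the standard theoretical receptive-field recursion; the three-layer stack used here is a made-up example, not a network from the guide:

```python
# Theoretical receptive field of a stack of conv layers (kernel k, stride s):
#   rf_l = rf_{l-1} + (k - 1) * jump_{l-1},   jump_l = jump_{l-1} * s
def stack_receptive_field(layers):
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Illustrative stack: three 3x3 convs, the middle one with stride 2.
print(stack_receptive_field([(3, 1), (3, 2), (3, 1)]))   # -> 9 input pixels
```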
Explore VAE latent space in deep learning. Learn variational autoencoder encoding, decoding, interpolation, and the reparameterization trick.
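A minimal sketch of the reparameterization trick plus a latent-space interpolation, assuming toy 2-dimensional latent codes:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """z = mu + sigma * eps: the randomness lives in eps, so mu and log_var
    stay on a differentiable path during training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Linear interpolation between two latent codes, as used for latent-space walks.
z1 = reparameterize(np.zeros(2), np.zeros(2))
z2 = reparameterize(np.ones(2), np.zeros(2))
for t in np.linspace(0, 1, 5):
    print((1 - t) * z1 + t * z2)
```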
Understand contrastive loss for representation learning: interactive demos of InfoNCE, triplet loss, and embedding space clustering with temperature tuning.
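The following is a rough NumPy rendering of InfoNCE with cosine similarity and a temperature parameter; the embedding dimension, number of negatives, and temperature value are assumptions for illustration:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: softmax cross-entropy where the positive pair plays the 'true class'."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits = sims / temperature          # lower temperature sharpens the distribution
    logits -= logits.max()
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[0])                 # the positive sits at index 0

rng = np.random.default_rng(0)
a = rng.standard_normal(8)
pos = a + 0.1 * rng.standard_normal(8)   # augmented "view" of the same sample
negs = [rng.standard_normal(8) for _ in range(4)]
print(info_nce(a, pos, negs, temperature=0.1))
```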
Understand dropout regularization: how randomly silencing neurons prevents overfitting, the inverted dropout trick, and when to use each dropout variant.
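A compact sketch of the inverted dropout trick mentioned above; the drop probability of 0.5 is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def inverted_dropout(x, p_drop=0.5, training=True):
    """Zero out activations with probability p_drop and scale survivors by
    1/(1 - p_drop), so expected activations match and inference needs no rescaling."""
    if not training:
        return x
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask

x = np.ones(10)
print(inverted_dropout(x, 0.5))                    # roughly half zeros, survivors scaled to 2.0
print(inverted_dropout(x, 0.5, training=False))    # identity at test time
```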
Learn focal loss for deep learning: down-weight easy examples, focus on hard ones. Interactive demos of gamma, alpha balancing, and RetinaNet.
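The focal loss formula can be tried directly; gamma = 2 and alpha = 0.25 below follow the common RetinaNet defaults, and the example probabilities are made up:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL = -alpha_t * (1 - p_t)^gamma * log(p_t)."""
    p_t = p if y == 1 else 1 - p
    alpha_t = alpha if y == 1 else 1 - alpha
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# An easy positive (p = 0.95) is down-weighted far more than a hard one (p = 0.30).
print(focal_loss(0.95, 1))   # tiny contribution
print(focal_loss(0.30, 1))   # dominates the batch loss
```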
Learn He (Kaiming) initialization for ReLU neural networks: understand why ReLU needs special weight initialization, visualize variance flow, and see dead neurons in action.
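A quick variance-flow check under He initialization, assuming a toy 10-layer ReLU stack of width 256 (not a configuration from the guide):

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(fan_in, fan_out):
    """He/Kaiming init: std = sqrt(2 / fan_in) compensates for ReLU zeroing half its inputs."""
    return rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / fan_in)

# Track pre-activation variance through a deep ReLU stack.
h = rng.standard_normal((1000, 256))
for _ in range(10):
    h = np.maximum(0.0, h) @ he_init(256, 256)   # ReLU, then the next linear layer
print(h.var())   # stays near 1 instead of shrinking layer by layer
```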
Learn KL divergence for machine learning: measure distribution differences in VAEs, knowledge distillation, and variational inference with interactive visualizations.
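Both the discrete KL formula and the closed-form Gaussian KL used in VAEs fit in a few lines; the distributions below are illustrative:

```python
import numpy as np

def kl_divergence(p, q):
    """KL(P || Q) = sum_i p_i * log(p_i / q_i); asymmetric, >= 0, and 0 iff P == Q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(np.where(p > 0, p * np.log(p / q), 0.0))

p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]
print(kl_divergence(p, q), kl_divergence(q, p))   # note the asymmetry

# Closed form used in VAEs: KL(N(mu, sigma^2) || N(0, 1)) per latent dimension.
mu, log_var = 0.5, np.log(0.8)
print(0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var))
```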
Interactive guide to MSE vs MAE for regression: explore outlier sensitivity, gradient behavior, and Huber loss with visualizations.
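A small comparison of the three losses on a residual vector with one outlier; the numbers are invented for illustration:

```python
import numpy as np

def mse(e):
    return np.mean(e ** 2)

def mae(e):
    return np.mean(np.abs(e))

def huber(e, delta=1.0):
    """Quadratic near zero, linear in the tails: one large residual cannot dominate."""
    quad = np.minimum(np.abs(e), delta)
    lin = np.abs(e) - quad
    return np.mean(0.5 * quad ** 2 + delta * lin)

errors = np.array([0.1, -0.2, 0.3, 8.0])   # one outlier residual
print(mse(errors), mae(errors), huber(errors))
# MSE is dragged up by the squared outlier; MAE and Huber stay moderate.
```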
Learn Xavier (Glorot) initialization: how it balances forward signals and backward gradients to enable stable deep network training with tanh and sigmoid.
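A sketch of Xavier/Glorot initialization with a tanh stack, assuming a toy width of 128; it mirrors the He example above but for saturating activations:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    """Glorot/Xavier: Var(W) = 2 / (fan_in + fan_out), balancing forward activations
    and backward gradients for activations that are roughly linear near zero."""
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.standard_normal((fan_in, fan_out)) * std

h = rng.standard_normal((1000, 128))
for _ in range(10):
    h = np.tanh(h @ xavier_init(128, 128))
print(h.std())   # activation scale settles in a healthy range rather than exploding or collapsing
```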
Learn adaptive tiling in vision transformers: dynamically partition images based on visual complexity to reduce token counts by up to 80% while preserving detail where it matters.
Explore emergent abilities in large language models: sudden capabilities that appear at scale thresholds, phase transitions, and the mirage debate, with interactive visualizations.
Master prompt engineering for large language models: from basic composition to Chain-of-Thought, few-shot, and advanced techniques with interactive visualizations.
Deep dive into how different prompt components influence model behavior across transformer layers, from surface patterns to abstract reasoning.
Explore neural scaling laws in deep learning: power law relationships between model size, data, and compute that predict AI performance, with interactive visualizations.
Learn visual complexity analysis in deep learning: how neural networks measure entropy, edges, and saliency for adaptive image processing.
Learn how gradients propagate through deep neural networks during backpropagation. Understand vanishing and exploding gradient problems with interactive visualizations.
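The vanishing-gradient effect can be reproduced with a scalar chain of sigmoid layers; the depth of 20 and weight of 0.8 are arbitrary illustrative choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Backprop multiplies one local derivative per layer; with sigmoid that factor is at
# most 0.25, so the product shrinks geometrically with depth.
depth = 20
x, w = 0.5, 0.8
grad = 1.0
for _ in range(depth):
    a = sigmoid(w * x)
    grad *= a * (1 - a) * w        # chain rule: d(sigmoid)/dz * dz/d(input)
    x = a
print(grad)                         # on the order of 1e-14: almost no signal reaches early layers
```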
Understand the NAdam optimizer that fuses Adam adaptive learning rates with Nesterov look-ahead momentum for faster, smoother convergence in deep learning.
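A simplified single-parameter NAdam step, omitting the momentum-decay schedule of the original formulation; the toy quadratic objective and hyperparameters below are assumptions for illustration:

```python
import numpy as np

def nadam_step(theta, grad, m, v, t, lr=2e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One simplified NAdam update: Adam's bias-corrected moments plus a Nesterov
    look-ahead term applied to the current gradient."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    look_ahead = b1 * m_hat + (1 - b1) * grad / (1 - b1 ** t)
    theta = theta - lr * look_ahead / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy quadratic: minimize f(x) = x^2 starting from x = 3.
theta, m, v = 3.0, 0.0, 0.0
for t in range(1, 501):
    theta, m, v = nadam_step(theta, 2 * theta, m, v, t, lr=0.05)
print(theta)   # close to the optimum at 0
```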
Learn layer normalization for transformers and sequence models: how normalizing across features enables batch-independent training with interactive visualizations.
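A minimal layer-norm sketch over the feature axis, with a made-up batch of 4 tokens and 8 features:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize each sample over its own feature dimension, so the statistics never
    depend on who else is in the batch (works identically at batch size 1)."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.random.default_rng(0).standard_normal((4, 8)) * 5 + 3    # 4 tokens, 8 features
y = layer_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(y.mean(axis=-1).round(6), y.std(axis=-1).round(3))        # each row has mean ~0, std ~1
```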
Understand internal covariate shift in deep learning: why layer input distributions change during training, how it slows convergence, and how batch normalization fixes it.
Learn batch normalization in deep learning: how normalizing layer inputs accelerates training, improves gradient flow, and acts as regularization with interactive visualizations.
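A training-mode batch-norm sketch (the running statistics used at inference are left out); the badly scaled toy inputs are invented to show the effect:

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch dimension, then rescale with learnable
    gamma/beta; running means and variances (omitted here) are what inference uses."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.default_rng(0).standard_normal((32, 4)) * 10 + 50   # badly scaled inputs
y = batch_norm_train(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))            # per-feature mean ~0, std ~1
```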
Learn how skip connections and residual learning enable training of very deep neural networks. Understand the ResNet revolution with interactive visualizations.
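A bare-bones residual block in NumPy, with small random weight matrices standing in for the block's layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, w1, w2):
    """y = F(x) + x: the block only has to learn the residual F, and the identity
    path gives gradients a direct route back to earlier layers."""
    h = np.maximum(0.0, x @ w1)       # a linear layer + ReLU stands in for F
    return x + h @ w2                 # add the shortcut before the next activation

x = rng.standard_normal((2, 16))
w1 = rng.standard_normal((16, 16)) * 0.1
w2 = rng.standard_normal((16, 16)) * 0.1
print(residual_block(x, w1, w2).shape)   # (2, 16): same shape, so the addition is valid
```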