---
library_name: "PyTorch"
tags:
- cnn
- lenet
- cifar10
- image-classification
datasets:
- uoft-cs/cifar10
language:
- en
metrics:
- accuracy
---

# CIFAR10 LeNet5 Variation 2: GELU + Dropout Layer

This repository contains our second variation of the original LeNet5 architecture, adapted for CIFAR-10. The model consists of two convolutional layers followed by two fully connected layers, a dropout layer (p=0.5), and a final fully connected layer. It uses GELU activations, extending variation 1, and Kaiming uniform initialization. It is trained with a batch size of 32 using the Adam optimizer (learning rate 0.001) and CrossEntropyLoss. In our experiments, this model achieved a test loss of 0.0316 and a top-1 accuracy of 64.71% on CIFAR-10.

## Model Details

- **Architecture:** 2 convolutional layers, 2 fully connected layers, 1 dropout layer, 1 final fully connected layer.
- **Activations:** GELU.
- **Weight Initialization:** Kaiming uniform.
- **Optimizer:** Adam (lr=0.001).
- **Loss Function:** CrossEntropyLoss.
- **Dataset:** CIFAR-10.

## Usage

Load this model in PyTorch to fine-tune or evaluate on CIFAR-10 using your training and evaluation scripts.

---

Feel free to update this model card with further training details, benchmarks, or usage examples.
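As a starting point, here is a minimal PyTorch sketch of a network matching the description above. The activation (GELU), dropout (p=0.5), layer counts, and Kaiming uniform initialization come from this card; the specific channel counts and kernel sizes are assumptions following the classic LeNet-5 layout, and the class name `LeNet5GELU` is hypothetical.

```python
import torch
import torch.nn as nn


class LeNet5GELU(nn.Module):
    """Sketch of the described variation: 2 conv layers, 2 FC layers,
    a dropout layer (p=0.5), and a final FC layer, with GELU activations.
    Conv/FC dimensions assume the classic LeNet-5 layout on 3x32x32 input."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5),   # 3x32x32 -> 6x28x28
            nn.GELU(),
            nn.MaxPool2d(2),                  # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),  # -> 16x10x10
            nn.GELU(),
            nn.MaxPool2d(2),                  # -> 16x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.GELU(),
            nn.Linear(120, 84),
            nn.GELU(),
            nn.Dropout(p=0.5),
            nn.Linear(84, num_classes),       # final fully connected layer
        )
        # Kaiming uniform initialization, as stated in the model details
        for m in self.modules():
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.kaiming_uniform_(m.weight)
                if m.bias is not None:
                    nn.init.zeros_(m.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


model = LeNet5GELU()
logits = model(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 10])
```

Training would then pair this with `torch.optim.Adam(model.parameters(), lr=0.001)` and `nn.CrossEntropyLoss()`, as noted in the model details.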