
DISCOVER PROJECT
Deep Learning
From Classic Datasets to Modern CNNs – Deep Learning Experiments in Vision and Classification
Deep learning lab series exploring neural networks from simple perceptrons on tabular data to convolutional architectures on CIFAR and MNIST image datasets. The work progresses from scikit-learn baselines to Keras/TensorFlow implementations, covering model design, training, evaluation, and comparison of architectures on increasingly complex tasks.
PYTHON - NUMPY - PANDAS - SCIKIT-LEARN - TENSORFLOW / KERAS - MATPLOTLIB / SEABORN - MNIST - LOGISTIC REGRESSION - MLPS - CNNS

Project Overview
On structured data, I implemented logistic regression and multilayer perceptrons for the Iris dataset, including one‑hot encoding, 75/25 train–test splits, and systematic comparison of training vs. test accuracy. I then reimplemented a similar perceptron in Keras, controlling the architecture (hidden units, activations) and the loss/optimizer choices, and extended this design to MNIST by flattening the 28×28 images, normalizing pixel values, and adapting the output layer to the 10 digit classes. On images, I built a custom CNN for CIFAR‑10 with multiple convolutional–pooling blocks and dropout, achieving strong performance, then implemented a VGG16‑inspired network and a more efficient BatchNorm‑based CNN, training and evaluating each while comparing accuracy and overfitting behavior.
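The structured-data baseline described above can be sketched as follows. This is a minimal illustration using scikit-learn's bundled Iris dataset and a 75/25 split; the hyperparameters (hidden layer size, iteration counts, random seed) are illustrative choices, not the project's exact settings.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load Iris and make a stratified 75/25 train-test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Logistic regression baseline.
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Shallow MLP; hidden_layer_sizes is an illustrative choice.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

# Compare training vs. test accuracy for each model.
for name, clf in [("logreg", logreg), ("mlp", mlp)]:
    print(f"{name}: train={clf.score(X_train, y_train):.3f} "
          f"test={clf.score(X_test, y_test):.3f}")
```

Comparing the train and test scores side by side makes over- or underfitting immediately visible, which is the point of the systematic comparison mentioned above.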
The objective of this deep learning work was to build an end‑to‑end understanding of neural networks by starting with classical classifiers (logistic regression, shallow MLPs) and gradually moving to deeper architectures tailored for images. The focus was on implementing and adapting architectures such as custom CNNs and VGG16‑like networks to CIFAR‑10/100 and MNIST, analyzing their training dynamics, generalization performance, and trade-offs between depth, capacity, and efficiency.
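A compact CNN in the spirit of the CIFAR‑10 experiments might look like the sketch below: stacked convolution–pooling blocks with BatchNorm and dropout, ending in a 10‑way softmax. Filter counts, dropout rates, and the dense layer width are illustrative assumptions, not the project's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(num_classes: int = 10) -> tf.keras.Model:
    """Small CIFAR-10-style CNN: conv-pool blocks + BatchNorm + dropout."""
    model = models.Sequential([
        layers.Input(shape=(32, 32, 3)),
        # Block 1: two 3x3 convolutions, then downsample.
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        # Block 2: wider filters at lower resolution.
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        # Classifier head.
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The same block pattern scales naturally: repeating deeper stacks of 3×3 convolutions before each pooling step yields a VGG16‑like network, while the BatchNorm layers let a shallower variant train faster at comparable accuracy, which is the depth/capacity/efficiency trade-off studied here.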




