This project trains a Multi-Layer Perceptron (MLP) on the MNIST dataset and evaluates its performance against Random Forest (RF) and Logistic Regression baselines. The analysis reports F1-scores and confusion matrices for each model, and uses t-SNE visualizations to inspect the embeddings learned by the MLP's hidden layers.
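
Below is a minimal sketch of such a pipeline, assuming a scikit-learn stack; the hidden-layer sizes, iteration counts, train/test split, and subsample size are illustrative placeholders rather than the project's actual configuration.

```python
# Minimal sketch, assuming a scikit-learn stack. Hyperparameters and the
# embedding-extraction step are illustrative assumptions, not the project's
# actual configuration.
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, confusion_matrix
from sklearn.manifold import TSNE

# Load MNIST (70k 28x28 digits flattened to 784 features) and scale to [0, 1].
X, y = fetch_openml("mnist_784", version=1, as_frame=False, return_X_y=True)
X = X / 255.0
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Train the MLP and the two baseline models, then report macro F1 and the
# confusion matrix for each.
models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=30, random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name, "macro F1:", f1_score(y_test, pred, average="macro"))
    print(confusion_matrix(y_test, pred))

# Recover first-hidden-layer activations of the trained MLP (ReLU is the
# default activation) and project a subsample to 2-D with t-SNE.
mlp = models["MLP"]
hidden = np.maximum(X_test @ mlp.coefs_[0] + mlp.intercepts_[0], 0.0)
subset = np.random.RandomState(0).choice(len(hidden), 2000, replace=False)
embedding_2d = TSNE(n_components=2, random_state=0).fit_transform(hidden[subset])
print("t-SNE embedding shape:", embedding_2d.shape)  # (2000, 2)
```

The hidden activations are reconstructed manually from `coefs_` and `intercepts_` because `MLPClassifier` does not expose intermediate layers directly; t-SNE is run on a subsample only to keep the projection tractable.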