Deep learning brain segmentation
Overview: This project focuses on automatically identifying and outlining brain structures in zebrafish imaging data using deep learning. Instead of a researcher manually tracing each region by hand, the model learns patterns in the images and produces consistent segmentation masks. The goal is to make early-stage analysis faster, reduce human error, and give laboratories a practical tool they can plug into existing workflows without needing a dedicated machine-learning team.
Tools used: Python, PyTorch, NumPy, and OpenCV.
Approach: I began by organizing the raw image and mask files into a clean dataset, then applied standard preprocessing steps such as intensity normalization, resizing, and simple data augmentation so the model would see enough variation during training. I implemented a U-Net-style architecture in PyTorch, because it is widely used for medical and biological segmentation tasks and balances accuracy with training time. During training, I monitored both loss curves and intersection-over-union (IoU) scores on a held-out validation set, adjusting learning-rate schedules, batch sizes, and augmentation strength when the model started to overfit. After training, I ran a set of qualitative checks by overlaying predicted masks on the original images so I could visually confirm that boundaries were smooth, stable, and aligned with the ground-truth labels.
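The preprocessing described above can be sketched in plain NumPy. This is a minimal illustration, not the project's exact code: the normalization range, target size, and flip-based augmentation are assumptions, and a real pipeline would likely use OpenCV's resize instead of the index-map trick shown here.

```python
import numpy as np

def preprocess(image, target_size=(256, 256)):
    """Normalize intensities to [0, 1] and resize (nearest-neighbour via index maps)."""
    img = image.astype(np.float32)
    lo, hi = img.min(), img.max()
    img = (img - lo) / (hi - lo + 1e-8)  # intensity normalization
    # nearest-neighbour resize without an OpenCV dependency
    rows = np.linspace(0, img.shape[0] - 1, target_size[0]).round().astype(int)
    cols = np.linspace(0, img.shape[1] - 1, target_size[1]).round().astype(int)
    return img[np.ix_(rows, cols)]

def augment(image, mask, rng):
    """Simple augmentation: a random horizontal flip applied jointly to image and mask."""
    if rng.random() < 0.5:
        return image[:, ::-1], mask[:, ::-1]
    return image, mask
```

Applying the same flip to both image and mask is the important detail: any geometric augmentation must transform the label identically, or the mask no longer lines up with the anatomy.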
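The U-Net idea itself, an encoder, a bottleneck, and a decoder joined by skip connections, can be shown with a deliberately tiny one-level version. The channel counts and depth here are illustrative assumptions; the trained model would be deeper, but the wiring is the same.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """A one-level U-Net sketch: encoder, bottleneck, decoder with one skip connection."""
    def __init__(self, in_ch=1, n_classes=1):
        super().__init__()
        self.enc = conv_block(in_ch, 16)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)  # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.pool(e))
        d = self.dec(torch.cat([e, self.up(b)], dim=1))  # skip connection
        return self.head(d)  # raw logits; apply sigmoid at inference time
```

The skip connection is what makes U-Net suit segmentation: it feeds high-resolution encoder features directly to the decoder, so boundaries stay sharp despite the downsampling.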
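The IoU score monitored on the validation set is straightforward to compute; a hedged sketch for the binary case, with the 0.5 threshold as an assumed default:

```python
import numpy as np

def iou(pred, target, threshold=0.5, eps=1e-8):
    """Intersection-over-union between a predicted probability map and a binary mask."""
    p = pred >= threshold  # binarize the prediction
    t = target.astype(bool)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return float(inter / (union + eps))
```

For example, a prediction covering two pixels where only one overlaps a one-pixel ground-truth mask gives an IoU of 0.5: one pixel of intersection over two of union.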
Impact: The final pipeline turns a folder of zebrafish brain images into ready-to-use segmentation masks with a single script, which significantly reduces the time a researcher spends on manual annotation. Because the data loading, preprocessing, model training, and evaluation steps are all scripted, results can be reproduced on new machines or extended with different model variants. This makes it easier for teams to compare experiments fairly and to share their process with collaborators who may not have deep experience in machine learning.
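The folder-in, masks-out entry point could look like the sketch below. The flag names, default checkpoint name, and `_mask.png` naming scheme are all hypothetical, chosen only to illustrate the shape of such a script:

```python
import argparse
from pathlib import Path

def mask_path(image_path, out_dir):
    """Map an input image path to its output mask path (hypothetical naming scheme)."""
    return Path(out_dir) / (Path(image_path).stem + "_mask.png")

def build_parser():
    """Command-line interface for a batch segmentation script (illustrative flags)."""
    p = argparse.ArgumentParser(description="Segment a folder of zebrafish brain images")
    p.add_argument("input_dir", help="folder of raw images")
    p.add_argument("output_dir", help="folder to write predicted masks")
    p.add_argument("--weights", default="unet.pt", help="trained model checkpoint")
    return p
```

Keeping the path logic in a small pure function like `mask_path` makes the batch step easy to unit-test without loading the model at all.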