I am an incoming deep learning PhD student at the University of Pennsylvania. My research focuses on making AI/ML systems safer, more trustworthy, and more scalable; reducing inference costs in language models while maintaining accuracy and trust; and effectively deploying language models in mission-critical domains.
I am currently in my final semester at Virginia Tech, from which I will graduate in three years with highest honors. Previously, I led R&D for Oaklet, a healthtech startup. Before that, I was a Data Science Intern at Hitachi Vantara Federal in Washington, D.C., where I focused on machine learning for the federal government.
@inproceedings{platt2025catching,
  title     = {Catching {UX} Flaws in Code: Leveraging {LLMs} to Identify Usability Flaws at the Development Stage},
  author    = {Platt, Nolan and Luchs, E. and Nizamani, Sehrish},
  booktitle = {Proceedings of the 2025 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)},
  pages     = {152--158},
  year      = {2025},
  publisher = {IEEE},
  doi       = {10.1109/VL-HCC65237.2025.00024},
}
IEEE FLLM
Multi-Model Synthetic Training for Mission-Critical Small Language Models
Nolan Platt and Pragyansmita Nayak
In Proceedings of the Third International Conference on Foundation and Large Language Models (IEEE FLLM), 2025
@inproceedings{platt2025multimodel,
  title     = {Multi-Model Synthetic Training for Mission-Critical Small Language Models},
  author    = {Platt, Nolan and Nayak, Pragyansmita},
  booktitle = {Proceedings of the Third International Conference on Foundation and Large Language Models (IEEE FLLM)},
  year      = {2025},
  publisher = {IEEE},
}
Outside of research, I enjoy running, biking, skiing, and scuba diving. A brief summary of recent and upcoming (planned) races: