How can we make AI/ML systems safer, more trustworthy, and more scalable? How can we reduce the cost of inference in language models while maintaining accuracy and trust? And how can we use language models effectively in mission-critical domains?
My name is Nolan Platt, and I seek to answer the above questions rigorously and defensibly through my research. I am a senior at Virginia Tech studying Computer Science.
Previously, I was a Data Science Intern at Hitachi Vantara Federal in Washington, D.C., where I focused on machine learning for the federal government.
@inproceedings{platt2025catching,
  title     = {Catching {UX} Flaws in Code: Leveraging {LLMs} to Identify Usability Flaws at the Development Stage},
  author    = {Platt, Nolan and Luchs, E. and Nizamani, Sehrish},
  booktitle = {Proceedings of the 2025 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)},
  pages     = {152--158},
  year      = {2025},
  publisher = {IEEE},
  doi       = {10.1109/VL-HCC65237.2025.00024}
}
IEEE FLLM
Multi-Model Synthetic Training for Mission-Critical Small Language Models
Nolan Platt and Pragyansmita Nayak
In Proceedings of the Third International Conference on Foundation and Large Language Models (IEEE FLLM), 2025
@inproceedings{platt2025multimodel,
  title     = {Multi-Model Synthetic Training for Mission-Critical Small Language Models},
  author    = {Platt, Nolan and Nayak, Pragyansmita},
  booktitle = {Proceedings of the Third International Conference on Foundation and Large Language Models (IEEE FLLM)},
  year      = {2025},
  publisher = {IEEE}
}
Outside of research, I enjoy ultra-distance running, biking, skiing, and scuba diving. A brief synopsis of recent and planned races: