About Me
I’m a Senior Applied Scientist at Amazon AGI, working on large-language-model (LLM) training that blends speech and audio toward more natural, interactive intelligence.
Earlier in my time at Amazon, I worked on efficient speech-processing models for Alexa devices. I led research on neural-network efficiency, developing sub-8-bit quantization-aware training and sparsification methods (including structured 2:4 sparsity) that reduced model size and latency while improving accuracy in production systems.
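If you're curious what 2:4 structured sparsity looks like, here is a minimal NumPy sketch (purely illustrative, not the production implementation): in every group of four consecutive weights, the two smallest-magnitude values are zeroed, a pattern that hardware with sparse tensor support can exploit.

```python
import numpy as np

def apply_2_4_sparsity(weights: np.ndarray) -> np.ndarray:
    """Zero the two smallest-magnitude entries in each group of four weights.

    Illustrative sketch only; assumes the weight count is divisible by 4.
    """
    groups = weights.reshape(-1, 4)
    # Per group, indices of the two largest-magnitude entries (the survivors).
    keep = np.argsort(np.abs(groups), axis=1)[:, 2:]
    mask = np.zeros_like(groups, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    return (groups * mask).reshape(weights.shape)

w = np.random.randn(2, 8).astype(np.float32)
print(apply_2_4_sparsity(w))  # every run of four weights now has exactly two zeros
```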
This work has shipped in Echo products used by many customers.
I’ve published at AI and speech venues including ACL, EMNLP, Interspeech, ICASSP, and IEEE SLT, and I’ve co-authored patents.
I completed my Ph.D. in Computer Science and Cognitive Science at Indiana University, where I worked on neural waveform coding inspired by human learning.
Outside of work, I play sports indoors and outdoors and spend time in nature; these are just as important to me, if not more so, in approaching the meaning of life.