Anurag Katakkar

I am a Deep Learning Software Engineer at NVIDIA, where I work on the TensorRT team. My work currently focuses on testing and integrating cuTensor, an Einstein-notation-based linear algebra library, into TensorRT's backend to support automotive applications.

Previously, I graduated from the Language Technologies Institute, School of Computer Science at Carnegie Mellon University with a Master of Science in Computational Data Science. I was advised by Alan W Black and Eric Nyberg.

At CMU, I had the opportunity to explore varied lines of research in Machine Learning and Natural Language Processing, including Speech Synthesis, Robustness, Information Retrieval, and Explainability.

My primary research at CMU focused on Speech Synthesis and Language Modeling in the speech domain, which was also the topic of my capstone. The focus of this research is to bring attention (no pun intended) to pure speech language models, ground them in linguistic sub-word units, and highlight how speech language models are still relatively poorly understood compared to their cousins in the text domain. If this interests you, read our paper, which takes a "first principles" approach to language modeling and describes our experiments and findings in more detail.

I am also interested in studying the robustness of machine learning models in out-of-domain settings, and in exploring new techniques such as Feature Feedback that could help make models more robust. In my second paper, we present empirical evidence that training with feature feedback improves the out-of-domain performance of large pretrained language models like BERT on some tasks, such as Sentiment Analysis.

For a complete list of projects and experience, please see my CV (pdf).

News

July 7, 2021

I've started working as a Deep Learning Software Engineer on TensorRT at NVIDIA!

May 23, 2021

Graduated from CMU with a Master's in Computational Data Science!