Anirudh (Ani) Ajith

NLP Researcher @ Princeton University


Hello! I’m a second-year Master’s student at Princeton University advised by Dr. Karthik Narasimhan. I’ve also been fortunate to collaborate with Dr. Danqi Chen during my time here. Prior to this, I graduated from IIT Madras where I worked with Dr. Mitesh Khapra and Dr. Pratyush Kumar as a part of AI4Bharat.

I am interested in large language models, natural language processing, and deep learning. I look forward to continuing my research career by focusing on two key directions:

  1. expanding the scope of problems that LLMs and systems built around them can solve by addressing their key contemporary limitations, and
  2. developing principled techniques for ensuring that real-world deployment of LLM-based systems can take place in socially responsible ways.

I am particularly excited by language agents, tool use in LLMs, and the potential for these advances to enable concerted System 2 problem solving with LLMs. I aim to contribute to the evolution of LLMs so that they can perform sophisticated functions such as complex planning, mathematical proof-writing, advanced programming, and open-ended problem solving, perhaps even becoming partners in scientific research. I would like to be part of the effort to push LLMs beyond objects of purely academic interest and toward realizing the transformative potential of AI to solve real-world challenges.

Feel free to check out my publications, projects, internships, and full resume!

news

Mar 14, 2024 InstructEval: Systematic Evaluation of Instruction Selection Methods is accepted to Findings of NAACL 2024!
Jan 15, 2024 Detecting Pretraining Data from Large Language Models is accepted at ICLR 2024!
Oct 27, 2023 Our paper InstructEval: Systematic Evaluation of Instruction Selection Methods earns a Spotlight acceptance at the R0-FoMo Workshop @ NeurIPS 2023! Also, our Detecting Pretraining Data from Large Language Models is accepted for an Oral presentation at the Regulatable ML Workshop @ NeurIPS 2023!
Oct 07, 2023 Our paper Adapting Language Models to Compress Contexts is accepted at EMNLP 2023!
Jun 05, 2023 I’m interning with Dr. Danish Pruthi at IISc, Bangalore this summer! We plan to explore the effects of watermarking LLMs.

selected publications

  1. Detecting Pretraining Data from Large Language Models
    Weijia Shi*, Anirudh Ajith*, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer
    International Conference on Learning Representations (ICLR) 2024
  2. InstructEval: Systematic Evaluation of Instruction Selection Methods
    Anirudh Ajith, Chris Pan, Mengzhou Xia, Ameet Deshpande, and Karthik Narasimhan
    Findings of the North American Chapter of the Association for Computational Linguistics (NAACL) 2024
  3. Performance Trade-offs of Watermarking Large Language Models
    Anirudh Ajith, Sameer Singh, and Danish Pruthi
    arXiv preprint
  4. Adapting Language Models to Compress Contexts
    Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and Danqi Chen
    Empirical Methods in Natural Language Processing (EMNLP) 2023