Debugging life one bug at a time @ BIT Mesra
I'm passionate about applying machine learning to solve real-world problems, with a strong focus on Deep Learning, Computer Vision, and NLP. I specialize in building efficient AI systems through model optimization, parameter-efficient fine-tuning, and production-ready ML pipelines.
Occasionally debugging neural networks, datasets, and my own assumptions.
I promise I respond faster than my models train.
Founding Member – ML & Product @ HireBuddy (Live) | May 2025 – Nov 2025
- Built and deployed an AI-powered resume-job matching system from 0 to 1 using RAG pipelines and Transformers, improving shortlisting accuracy by 35% and processing 10K+ resumes/day
- Designed end-to-end NLP pipelines for job-fit scoring, candidate ranking, and automated feedback generation using LangChain, achieving 92% model performance
- Deployed ML models via Flask + GCP with Docker containerization; integrated REST APIs with recruiter-facing product, increasing user engagement by 40%
- Collaborated with founders on ML-product strategy, translating hiring workflows into data-driven features
The model reviewed more resumes in a day than I probably will in my lifetime.
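As a rough illustration of how embedding-based resume-job matching works (a hypothetical sketch, not HireBuddy's actual pipeline): represent both the job posting and each resume as vectors, then rank resumes by cosine similarity. Here a toy bag-of-words counter stands in for the transformer encoder a real RAG system would use.

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding'; a stand-in for a transformer encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    # Counter returns 0 for missing words, so this works on sparse vectors
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_resumes(job_text, resumes):
    """Score every resume against the job and sort best-first."""
    job_vec = embed(job_text)
    scored = [(cosine(job_vec, embed(r)), r) for r in resumes]
    return sorted(scored, reverse=True)

job = "python machine learning engineer with nlp experience"
resumes = [
    "java backend developer spring boot",
    "python nlp engineer transformers machine learning",
]
best_score, best_resume = rank_resumes(job, resumes)[0]
print(best_resume)  # the NLP/Python resume ranks first
```

Swapping `embed` for a sentence-transformer and the list for a vector store is what turns this sketch into a retrieval pipeline.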
Machine Learning Engineer @ IIIT Delhi Hackathon (Finalist) | Aug 2024
- Trained ResNet50 classifier achieving 94% accuracy; reduced training time by 30% via mixed-precision training and optimized data loading
Turns out standing on the shoulders of pre-trained giants is a solid strategy.
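Why mixed precision needs loss scaling, shown numerically in NumPy (an illustration of the general technique, not the hackathon training code): small gradients underflow to zero when cast to float16, so frameworks scale the loss up before the backward pass and unscale the gradients in float32 afterwards.

```python
import numpy as np

# A gradient this small underflows to exactly zero in float16,
# which would silently stall training for that parameter.
tiny_grad = 1e-8
underflowed = np.float16(tiny_grad)  # becomes 0.0

# Loss scaling: multiply before the float16 cast, divide back in float32.
scale = 1024.0
scaled = np.float16(tiny_grad * scale)   # survives the cast
recovered = np.float32(scaled) / scale   # close to the true gradient

print(underflowed, recovered)
```

In PyTorch this bookkeeping is what `torch.cuda.amp.GradScaler` automates around an `autocast` context.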
ML & Backend Developer @ Smart India Hackathon (Finalist) | Oct 2024
- Built ML-powered hospital queue system reducing predicted patient wait time by 28%
- Deployed Flask APIs on Google Cloud Platform with 99.5% uptime
Built this during the hackathon, survived on coffee and optimism.
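A hypothetical, stripped-down version of the prediction step in a queue wait-time system (not the actual SIH model): fit an average per-patient service time from historical data with a least-squares slope through the origin, then predict wait as queue length times that rate.

```python
def fit_service_time(queue_lengths, waits):
    """Least-squares slope through the origin: wait ~ k * queue_length."""
    num = sum(q * w for q, w in zip(queue_lengths, waits))
    den = sum(q * q for q in queue_lengths)
    return num / den

def predict_wait(k, queue_length):
    """Estimated wait in minutes for a given queue length."""
    return k * queue_length

# Historical observations: queue lengths and measured waits (minutes)
k = fit_service_time([2, 4, 6], [10.0, 20.0, 30.0])
print(predict_wait(k, 5))  # -> 25.0
```

A real system would layer in features like time of day and department load, but the serving shape (fit offline, predict per request behind a Flask route) stays the same.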
Pruned U-Net for Biomedical Image Segmentation | IIT Kharagpur · Mar 2025
Built a structured magnitude pruning pipeline for U-Net achieving 97.3% parameter reduction and 92% FLOP reduction while maintaining IoU > 0.95 on MoNuSeg dataset. Fully reproducible via config.
Tech: Python, TensorFlow, CNN, Model Optimization
97.3% smaller, 100% as accurate. Marie Kondo would be proud.
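The core of structured magnitude pruning, sketched in NumPy (the project itself used TensorFlow): rank whole convolutional filters by L1 norm and keep only the strongest, so the layer genuinely shrinks instead of just holding zeroed weights.

```python
import numpy as np

def prune_filters(weights, keep_ratio):
    """weights: (out_channels, ...) kernel tensor; keep top filters by L1 norm."""
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    # Indices of the strongest filters, re-sorted to preserve channel order
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))  # a toy conv layer with 8 filters
pruned, kept = prune_filters(w, keep_ratio=0.25)
print(pruned.shape)  # 2 of 8 filters remain
```

Because entire filters are removed, the following layer's input channels shrink too, which is where the real FLOP savings come from.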
AttentionIsALLICode: GPT-style Transformer LM | Oct 2025
Implemented a GPT-style Transformer language model from scratch in PyTorch: multi-head self-attention, positional encoding, residual connections; modular pipeline with configurable depth and context length.
Tech: Python, PyTorch, Transformers
Built to answer "But how does attention actually work?" Spoiler: it's all matrix multiplication.
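For the curious, here is single-head scaled dot-product attention in plain NumPy — the operation behind the quip. Multi-head attention runs this h times on sliced projections and concatenates the results; this is a minimal sketch, not the project's PyTorch implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # (T, T) similarities
    weights = softmax(scores)                       # each row sums to 1
    return weights @ V                              # weighted mix of values

T, d = 4, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(T, d)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Three matrix multiplications and a softmax — spoiler confirmed.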
PEGASUS + LoRA: Parameter-Efficient Fine-Tuning | Feb 2026
Applied LoRA adapters on PEGASUS via HuggingFace PEFT; reduced trainable parameters from 767M to 1.57M (99.8% reduction) and achieved 27× faster training vs. full fine-tuning. Served via FastAPI endpoint with Docker packaging.
Tech: Python, PyTorch, HuggingFace, PEFT, LoRA, FastAPI, Docker
Because training 767M parameters is expensive, and I'm not Google.
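The LoRA trick in one NumPy snippet: freeze the big pretrained weight W and learn only a low-rank update B @ A. The layer sizes below are illustrative, not PEGASUS's real shapes, and this sketches the math rather than the PEFT library's implementation.

```python
import numpy as np

d_out, d_in, r, alpha = 1024, 1024, 8, 16

W = np.zeros((d_out, d_in))          # frozen pretrained weight (never trained)
A = np.random.randn(r, d_in) * 0.01  # trainable rank-r factor
B = np.zeros((d_out, r))             # trainable; zero init keeps the update off

def lora_forward(x):
    # Base path plus scaled low-rank update; because B == 0 at init,
    # the adapted layer exactly matches the frozen model at step 0.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

full_params = W.size
lora_params = A.size + B.size
print(lora_params / full_params)  # ~1.6% of the layer's parameters
```

Scale that ratio across every attention projection in a 767M-parameter model and the 99.8% reduction stops sounding like magic.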
Python | C++ | SQL | PyTorch | TensorFlow | Scikit-learn
NumPy | Pandas | HuggingFace | OpenCV | Flask | GCP
Docker | Git | Jupyter | Linux | VS Code | GitHub
- Finalist - Smart India Hackathon 2024
- Finalist - IIIT Delhi Hackathon 2024
- Codeforces - Pupil (1300+ rating)
- 200+ problems solved across competitive programming platforms
Currently grinding LeetCode and Codeforces. Send encouragement (or better yet, test cases).
Open to Summer 2026 research internships and collaboration opportunities in ML/AI
P.S. If you're reading this, my SEO worked or you're really bored. Either way, hi!