
Debugging life one bug at a time @ BIT Mesra

👨‍💻 About Me

I'm passionate about applying machine learning to solve real-world problems, with a strong focus on Deep Learning, Computer Vision, and NLP. I specialize in building efficient AI systems through model optimization, parameter-efficient fine-tuning, and production-ready ML pipelines.

Occasionally debugging neural networks, datasets, and my own assumptions.


📫 Connect With Me

I promise I respond faster than my model's training time 🚀


💼 Experience

🏢 Founding Member — ML & Product @ HireBuddy (Live) | May 2025 – Nov 2025

  • Built and deployed an AI-powered resume–job matching system from 0→1 using RAG pipelines and Transformers, improving shortlisting accuracy by 35% and processing 10K+ resumes/day
  • Designed end-to-end NLP pipelines for job-fit scoring, candidate ranking, and automated feedback generation using LangChain, achieving 92% model performance
  • Deployed ML models via Flask + GCP with Docker containerization; integrated REST APIs with the recruiter-facing product, increasing user engagement by 40%
  • Collaborated with founders on ML-product strategy, translating hiring workflows into data-driven features

The model reviewed more resumes in a day than I probably will in my lifetime.
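The production system relied on RAG pipelines and Transformer embeddings, but the core ranking idea can be sketched with a toy bag-of-words cosine similarity. This is an illustrative sketch, not the HireBuddy code; the function names and scoring scheme are my own:

```python
from collections import Counter
from math import sqrt

def cosine_match(resume: str, job: str) -> float:
    """Toy job-fit score: cosine similarity over bag-of-words counts."""
    a, b = Counter(resume.lower().split()), Counter(job.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_resumes(resumes: list[str], job: str) -> list[str]:
    """Rank resumes by descending job-fit score."""
    return sorted(resumes, key=lambda r: cosine_match(r, job), reverse=True)
```

A real pipeline swaps the word counts for dense Transformer embeddings and retrieves candidates from an index before scoring, but the ranking step is the same shape.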


🤖 Machine Learning Engineer @ IIIT Delhi Hackathon (Finalist) | Aug 2024

  • Trained ResNet50 classifier achieving 94% accuracy; reduced training time by 30% via mixed-precision training and optimized data loading

Turns out standing on the shoulders of pre-trained giants is a solid strategy.
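Mixed-precision training runs most math in float16 while keeping float32 master weights; the catch is that small gradients underflow to zero in float16, which is why frameworks apply loss scaling. A minimal NumPy demonstration of the underflow and the fix (the gradient values and scale factor here are illustrative):

```python
import numpy as np

# A tiny gradient below float16's smallest subnormal (~5.96e-8) vanishes.
grad_fp32 = np.array([1e-8, 3e-5], dtype=np.float32)

naive_fp16 = grad_fp32.astype(np.float16)  # first entry flushes to 0.0

# Loss scaling: multiply up before the fp16 cast, divide back down in fp32.
scale = 65536.0
scaled_fp16 = (grad_fp32 * scale).astype(np.float16)
recovered = scaled_fp16.astype(np.float32) / scale  # small gradient survives
```

In PyTorch this is what `torch.cuda.amp.GradScaler` automates, alongside the float16 forward/backward pass from `autocast`.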


💻 ML & Backend Developer @ Smart India Hackathon (Finalist) | Oct 2024

  • Built ML-powered hospital queue system reducing predicted patient wait time by 28%
  • Deployed Flask APIs on Google Cloud Platform with 99.5% uptime

Built this during the hackathon, survived on coffee and optimism.
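The baseline idea behind queue wait-time prediction: queue position times a smoothed estimate of per-patient service time. The actual system used a trained ML model; this exponential-moving-average sketch (hypothetical function, illustrative alpha) just shows the starting point:

```python
def predicted_wait(queue_position: int, service_times: list[float], alpha: float = 0.3) -> float:
    """Estimate wait in minutes: position * EMA of recent service times.

    Recent visits weigh more, so the estimate adapts when the clinic slows down.
    """
    ema = service_times[0]
    for t in service_times[1:]:
        ema = alpha * t + (1 - alpha) * ema
    return queue_position * ema
```

An ML model improves on this by conditioning on department, time of day, and staffing, but this baseline is what it has to beat.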


🚀 Featured Projects

🧬 Pruned U-Net for Biomedical Image Segmentation | IIT Kharagpur · Mar 2025

Built a structured magnitude pruning pipeline for U-Net achieving 97.3% parameter reduction and 92% FLOP reduction while maintaining IoU > 0.95 on MoNuSeg dataset. Fully reproducible via config.

Tech: Python, TensorFlow, CNN, Model Optimization

97.3% smaller, 100% as accurate. Marie Kondo would be proud.
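Structured magnitude pruning removes whole filters with the smallest weight norms, so the network genuinely shrinks instead of just becoming sparse. A minimal NumPy sketch of the filter-selection step (function name and keep-ratio are illustrative, not the project's pipeline):

```python
import numpy as np

def prune_filters(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Structured magnitude pruning: keep only the output filters with the
    largest L1 norms. weights shape: (out_channels, in_channels, kh, kw)."""
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    k = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.sort(np.argsort(norms)[-k:])  # indices of the k strongest filters
    return weights[keep]
```

In a full pipeline the matching input channels of the next layer are pruned too, then the slimmed network is fine-tuned to recover accuracy.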


🤖 Transformer Language Model From Scratch (AttentionIsALLICode)

Implemented a GPT-style Transformer language model from scratch in PyTorch — multi-head self-attention, positional encoding, residual connections; modular pipeline with configurable depth and context length.

Tech: Python, PyTorch, Transformers

Built to answer "But how does attention actually work?" Spoiler: it's all matrix multiplication.
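The matrix multiplication in question: scaled dot-product attention is softmax(QKᵀ/√d_k)·V. A minimal single-head NumPy version, without the masking or learned projections of the full model:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.

    Q, K, V have shape (seq_len, d_k); each output row is a weighted
    average of V's rows, weighted by query-key similarity.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V
```

Multi-head attention runs several of these in parallel on learned linear projections of the input and concatenates the results.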


📝 LoRA Fine-Tuning of PEGASUS for Summarization (pegasus-lora-efficient-summarization)

Applied LoRA adapters to PEGASUS via HuggingFace PEFT; reduced trainable parameters 767M → 1.57M (99.8%) and achieved 27× faster training vs. full fine-tuning. Served via a FastAPI endpoint with Docker packaging.

Tech: Python, PyTorch, HuggingFace, PEFT, LoRA, FastAPI, Docker

Because training 767M parameters is expensive, and I'm not Google.
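LoRA freezes the pretrained weight W and trains only a low-rank update B·A, cutting trainable parameters from d_out·d_in to r·(d_out + d_in). A NumPy sketch with illustrative dimensions (r=8, alpha=16 chosen for the example; the real adapters sit inside PEGASUS's attention layers via PEFT):

```python
import numpy as np

d_out, d_in, r = 768, 768, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init
                                           # so training starts at the base model

def lora_forward(x: np.ndarray, alpha: float = 16.0) -> np.ndarray:
    """Adapted layer: frozen W plus a low-rank update scaled by alpha/r."""
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size          # trainable count if fully fine-tuned
lora_params = A.size + B.size # trainable count with LoRA (~2% of full here)
```

Because B is initialized to zero, the adapted layer starts out identical to the frozen one; only gradients through A and B ever change the behavior.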


πŸ› οΈ Languages and Tools

Python · C++ · SQL · PyTorch · TensorFlow · Scikit-learn · NumPy · Pandas · HuggingFace · OpenCV · Flask · GCP · Docker · Git · Jupyter · Linux · VS Code · GitHub

πŸ† Achievements

  • πŸ₯‡ Finalist - Smart India Hackathon 2024
  • πŸ₯‡ Finalist - IIIT Delhi Hackathon 2024
  • πŸ’» Codeforces - Pupil (1300+ rating)
  • 🎯 200+ problems solved across competitive programming platforms

Currently grinding LeetCode and Codeforces. Send encouragement (or better yet, test cases).



⚑ Open to Summer 2026 research internships and collaboration opportunities in ML/AI

P.S. If you're reading this, my SEO worked or you're really bored. Either way, hi!

📌 Pinned Repositories

  1. TryingtobeingNikhil — It's me
  2. AttentionIsALLICode — 🤖 Complete Transformer architecture from scratch in PyTorch: multi-head attention, positional encoding, encoder-decoder | "Attention Is All You Need" paper implementation (Python)
  3. Pruned-UNet-Biomedical-Segmentation — 🏥 97.3% parameter reduction in U-Net for biomedical segmentation @ IIT Kharagpur | Pruning, quantization, knowledge distillation | IoU > 0.95 on MoNuSeg (Python)
  4. pegasus-lora-efficient-summarization — Step-by-step implementation of the PEGASUS summarization model with LoRA fine-tuning, including complete notebooks, training scripts, and ROUGE-based evaluation (Jupyter Notebook)
  5. Vectorless-RAGs — A lightweight, reasoning-based RAG system that uses LLM-guided tree traversal instead of a traditional vector database and embeddings; clean, interpretable, and highly efficient (Python)