LangChain or LlamaIndex? Those are just integrations. These days, my work spans the whole pipeline: fine-tuning an LLM for a specific use case, deploying it as a system, and integrating all these tools.
I started my ML journey back in late 2020, when I saw the connection between my high school math topics (calculus, linear algebra, and so on) and how neural networks work. Eighteen-year-old me started with Andrew Ng's famous deep learning course. My early days were very much focused on how each part of deep learning works, for example:
- How a neuron works
- How convolution as an operation works
- How exactly backpropagation works
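The first and third items above can be sketched in a few lines. This is a toy, hand-rolled example (not from any course material): one sigmoid neuron's forward pass, its weight gradient via the chain rule, and a numerical check that the backprop math is right.

```python
import math

def neuron(w, b, x):
    """Forward pass of one neuron: sigmoid(w*x + b)."""
    z = w * x + b
    return 1.0 / (1.0 + math.exp(-z))

def grad_w(w, b, x):
    """Backprop for the weight via the chain rule:
    d(out)/dw = sigmoid'(z) * x, where sigmoid'(z) = out * (1 - out)."""
    out = neuron(w, b, x)
    return out * (1.0 - out) * x

# Sanity-check the analytic gradient against a central finite difference.
w, b, x = 0.5, -0.2, 1.5
eps = 1e-6
numeric = (neuron(w + eps, b, x) - neuron(w - eps, b, x)) / (2 * eps)
analytic = grad_w(w, b, x)
print(abs(numeric - analytic) < 1e-8)  # the two gradients agree closely
```

Scaling this idea up to layers of neurons, with the chain rule applied layer by layer, is all backpropagation is.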
I even wrote my first blog on Medium describing the nuts and bolts of neural networks and their different types. Then, in mid-2021, I learned about Graph Machine Learning. Fascinated, I studied it from the ground up, just as I had done with general ML and deep learning. That led to a research internship on Graph Variational Autoencoders for recommendation systems, and to my second research paper.
Fast forward, and I slowly started hearing this buzzword called "deployment". Initially, I thought it just meant uploading my model file to the cloud, but it was much more than that. I began learning about ML system design and how the backend and ML work in sync: how we can serve ML apps by creating APIs, dockerizing them, and then deploying them to cloud services.
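The "dockerizing" step usually boils down to a short Dockerfile. This is an illustrative sketch only: `app.py`, `model.pkl`, the FastAPI/uvicorn stack, and port 8000 are all hypothetical names standing in for whatever the project actually uses.

```dockerfile
# Minimal container for a Python model-serving API.
# app.py is assumed to define a FastAPI application named "app"
# that loads model.pkl and exposes a /predict endpoint.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py model.pkl ./
# Serve the API on port 8000 inside the container.
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```

From there, `docker build` and `docker run -p 8000:8000` give you the same serving environment locally and on any cloud service that runs containers.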
Then I got into the Major League Hacking Prep Program, where I learned a great deal about remote work and open source collaboration. Just after that, I got my industrial internship at a startup called voxela.ai, where I learned about computer vision and object recognition and how they are deployed at enterprise scale. I also learned about quantization and how to convert models to TensorRT for fast serving.
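The core idea behind quantization (the kind of transformation TensorRT's INT8 mode applies, though its actual pipeline involves calibration and much more) can be shown in plain Python: map floats to small integers with a scale and zero point, and accept a bounded rounding error. This is a from-scratch sketch, not TensorRT code.

```python
def quantize(weights, bits=8):
    """Affine quantization: map floats onto ints in [0, 2**bits - 1]."""
    qmax = 2 ** bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax if hi != lo else 1.0
    zero_point = round(-lo / scale)
    q = [max(0, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the integer codes."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, -0.4, 0.0, 0.7, 1.5]
q, s, zp = quantize(weights)
recovered = dequantize(q, s, zp)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
print(max_err <= s / 2 + 1e-9)  # rounding error is at most half a step
```

Storing 8-bit (or 4-bit) codes instead of 32-bit floats is where the memory and speed savings come from; the price is that bounded rounding error on every weight.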
Fast forward to the rise of Large Language Models after the advent of ChatGPT. Suddenly people were building tools overnight and going viral. I always believed much of it was a bubble, and I felt vindicated when I learned from experts that the real work is building long-term, reliable, and efficient products with LLMs.
Currently, I am working as a data science engineer intern at CorridorPlatforms. Being very interested in LLMs, I have explored the full LLM lifecycle: fine-tuning an LLM with 4-bit quantization, compressing it further to run on a C++ backend and serving it, doing prompt engineering for better performance, using knowledge bases to provide better context for document Q&A, and applying different LLM evaluation strategies for better-governed LLM systems. This journey has been remarkable for me.
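The knowledge-base step above boils down to: retrieve the most relevant document and paste it into the prompt as context. Production systems do this with embeddings and a vector store; the toy sketch below uses simple word overlap just to show the shape of the idea, with made-up documents and function names.

```python
def score(question, doc):
    """Toy relevance score: fraction of question words found in the doc."""
    q_words = set(question.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / len(q_words)

def retrieve(question, docs, k=1):
    """Pick the top-k documents to use as context in the LLM prompt."""
    return sorted(docs, key=lambda d: score(question, d), reverse=True)[:k]

docs = [
    "The model is fine-tuned with 4-bit quantization to cut memory use.",
    "Prompt engineering shapes the instructions sent to the model.",
    "The knowledge base stores documents for question answering.",
]
context = retrieve("how does the knowledge base answer documents", docs)[0]
prompt = f"Answer using only this context:\n{context}\n\nQ: where are documents stored?"
print("knowledge base" in context.lower())
```

Swap the overlap score for cosine similarity over embeddings and the document list for a vector database, and this becomes the standard retrieval-augmented generation loop.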
I believe that nothing is permanent; we are in an era where things change very fast. However, it's not ONLY LLMs or ONLY diffusion models, and we can never say classical ML is going away. Real ML is a balanced combination of everything: making the right choice in the right place and connecting the pieces so that everything works in sync, seamlessly.
I look forward to working with organizations, applying what I've learned, and continuing to learn more.