Hire junior freelance Generative AI developers

Generative AI developers looking for their next gig. From juniors to seniors and everyone in between, you'll find them all here.




LangChain and LlamaIndex are just integrations. From fine-tuning an LLM for a specific use case to deploying it as a system and wiring all these tools together, that is what I do these days.

Hello everyone! I started my ML journey back in late 2020, when I saw the connection between my high school math topics (calculus, linear algebra, etc.) and how neural networks work. At 18, I began with Andrew Ng's famous deep learning course. My early days were very much focused on how each part of deep learning works: how a neuron works, how convolution works as an operation, how exactly backpropagation works, and so on. I even wrote my first blog post on Medium describing the nuts and bolts of neural networks and their different types.

In mid-2021, I learned about graph machine learning. Fascinated, I studied it the same way I had studied general ML and deep learning, and I ended up doing a research internship on graph variational autoencoders for recommendation systems, which led to my second research paper.

Fast forward, and I slowly started hearing the buzzword "deployment". Initially, I thought it just meant uploading my model file to the cloud, but it was much more than that. I began learning about ML system design and how the backend and ML work in sync: how we can serve ML apps by creating APIs, dockerizing them, and then deploying them on cloud services. I then joined the Major League Hacking Prep Program, where I learned a great deal about remote work and open-source collaboration. Just after that, I got an industrial internship at a startup called voxela.ai, where I learned a lot about computer vision and object recognition and how it is deployed at enterprise scale. I also learned about quantization and how to convert models to TensorRT for fast serving.

Then came the rise of large language models after the advent of ChatGPT, with people building tools overnight and going viral. I always believed it was a bubble, and I was proved right when I learned from experts that the real work is building long-term, reliable, and efficient products with LLMs.

Currently, I am working as a data science engineer intern at CorridorPlatforms. Being very interested in LLMs, I have explored the full LLM lifecycle: fine-tuning an LLM with 4-bit quantization, compressing it further to run on a C++ backend and serving it, doing prompt engineering for better performance, using knowledge bases to provide better context for document Q&A, and applying different LLM evaluation strategies for better-governed LLM systems.

This journey has been remarkable for me. I believe that nothing is permanent; we are in an era where things change very fast. However, it's not ONLY LLMs or ONLY diffusion models, and classical ML is not going away. Real ML is a balanced combination of everything: making the right choice in the right place and connecting the pieces so that everything works in sync and seamlessly. I look forward to helping organizations apply my learnings, and to learning more. Thanks!
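The knowledge-base step mentioned above (retrieving relevant context for document Q&A) can be sketched minimally. This is not the author's implementation; it is an illustrative toy where a bag-of-words count stands in for a real embedding model, and the document snippets are made up:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": lowercase token counts. A real system would use
    # a sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents, k=2):
    # Rank knowledge-base chunks by similarity to the question and
    # return the top-k to splice into the LLM prompt as context.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Fine-tuning adapts a pretrained LLM to a specific task.",
    "4-bit quantization shrinks model weights to reduce memory use.",
    "Dockerizing an API packages it with its dependencies.",
]
context = retrieve("How does quantization reduce memory", docs, k=1)
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: ..."
```

The same retrieve-then-prompt shape underlies most document Q&A pipelines; only the embedding and the vector store get more sophisticated in production.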


Software Engineer | Determined, persistent, lifelong learner

I am motivated to work in one of the most groundbreaking industries of our time: to contribute to using AI for the betterment of society, and to work against any potential alignment threat, in whatever capacity I'm entrusted, to the utmost of my abilities and without compromise. I am driven by the transformative potential of AI and automation and how they can drive meaningful change.

Since February 2023, I have been independently working on https://librechat.ai, an enhanced ChatGPT clone with a multi-user login system, a choice of multiple AI providers, and AI agency through plugins, which now has over 1,000 stars and 280 forks on GitHub. In May, I shared my progress on Reddit on reverse-engineering the ChatGPT plugins functionality, before OpenAI functions were released and before many chat interfaces employed similar techniques. Through LibreChat, I used the OpenAI API to give the AI agency with tools, weaving into conversations text-to-image generation with DALL-E and Stable Diffusion, as well as Wolfram, search engines, and web scraping, as selected by the user. The post received over 100 upvotes and 20,000 views: https://www.reddit.com/r/GPT3/comments/13ggcv5/reverseengineeringchatgpt_plugins/. Since then, I've extended the same reverse engineering to OpenAI functions. I have working proofs of concept in both Python and JavaScript that mimic ChatGPT's handling of OpenAPI specs, as well as vector retrieval for expanded and improved context from documents. As I've already shared the clone project, I would like to share my quick MVP of a Python FastAPI service that provides some of these functionalities: https://github.com/danny-avila/ai-services.

As a scheduling manager who has learned to program, I have been able to create my own tools.
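The plugin/function-calling pattern described above boils down to one loop: the model emits a structured tool call, and the client parses it and dispatches to a registered function. A minimal sketch of that dispatch step, in the style of OpenAI function calling (the tool names and stub bodies here are hypothetical, not LibreChat's actual code):

```python
import json

# Registry of callable tools. A decorator keeps registration next to
# each tool's definition.
TOOLS = {}

def tool(fn):
    TOOLS[fn.__name__] = fn
    return fn

@tool
def search_web(query: str) -> str:
    return f"results for {query!r}"  # stub; a real tool would hit a search API

@tool
def generate_image(prompt: str) -> str:
    return f"image generated from {prompt!r}"  # stub; e.g. DALL-E or Stable Diffusion

def dispatch(model_output: str) -> str:
    # Parse the model's tool-call JSON and invoke the matching function
    # with its arguments. The result would then be fed back to the model.
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Example of what a model's function-call payload might look like:
result = dispatch('{"name": "search_web", "arguments": {"query": "wolfram"}}')
```

In a full agent loop this runs repeatedly: the tool result is appended to the conversation, and the model decides whether to call another tool or answer the user.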
I've developed automatic assignment and balancing tools, which I shared a video of on LinkedIn, showing over 400 real shifts for a single pay period being automated and balanced within a few minutes. This task would take me hours, if not days, with my previous tools and paid software solution. The pricing and limitations of scheduling in the current SaaS market inspired me to take this on, as my previous pre-planning and analysis methods had reached their limits. My goal is to provide a user-friendly interface for the next manager and make the backend even more flexible and robust. I'm currently studying machine learning and linear programming to advance the optimization. You can see the video I shared here: https://www.linkedin.com/posts/danny-avilaprogramming-automation-scheduling-activity-7036836068393906176-TKCC?utmsource=share&utmmedium=memberdesktop.

I documented my progress, from beginning to end, on designing and scaling a legacy API into micro-services serving up to 1,000 requests per second, with less than 20 ms response time per request and a 0% error rate at this throughput under realistic test scenarios: https://gist.github.com/danny-avila/1387fef054da77737e1ce4d04172afe4. To achieve this, I used Postgres, Express, NGINX, Redis, and AWS EC2 instances, along with Pandas and NumPy for the ETL of the legacy data. I made sure to index my data and to craft my schema and queries carefully for performance, flexibility, and maintainability. I also ran stress tests with loader.io and the k6 suite to measure performance along the way.

I hope to be considered on the merit of my independent open-source work and learning, and of how I've helped people use AI tools effectively, in many capacities. At my current employment, automating much of my workflow, which involves managing a schedule of over 300 people, has helped our operations immensely. This kind of work, which multiplies in value as people engage with or even indirectly benefit from the tools created, is incredibly fulfilling for me, even when I receive no compensation or recognition for it.
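The core of shift balancing like the tool described above can be sketched with a simple greedy heuristic: always hand the next shift to the worker with the fewest assigned hours. This is an illustrative toy with made-up data, not the author's scheduler; a production version would add constraints such as availability and qualifications, e.g. via the linear programming mentioned above:

```python
import heapq

def balance_shifts(shifts, workers):
    # Greedy load balancing: repeatedly assign the longest remaining
    # shift to the least-loaded worker. A min-heap keyed on assigned
    # hours makes each assignment O(log n).
    heap = [(0.0, w) for w in workers]
    heapq.heapify(heap)
    assignment = {w: [] for w in workers}
    for shift, hours in sorted(shifts, key=lambda s: -s[1]):
        total, worker = heapq.heappop(heap)
        assignment[worker].append(shift)
        heapq.heappush(heap, (total + hours, worker))
    return assignment

shifts = [("Mon AM", 8), ("Mon PM", 8), ("Tue AM", 4), ("Tue PM", 4)]
plan = balance_shifts(shifts, ["Ana", "Ben"])
# Each worker ends up with 12 hours.
```

Sorting shifts longest-first is the classic greedy trick (LPT scheduling): placing the big blocks before the small ones keeps the final loads close to even.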

Sign up now to see more profiles.

Get access to our growing pool of AI developers.