Learning Journey with LangChain

Written by Klemens Graf

In July, I embarked on building a generative AI application with RAG functionality. Despite not being a software engineer by trade (my experience was limited to occasional website creation), I approached this challenge, too, with the "can-do" attitude that has served me well in achieving ambitious goals in the past.

My Background

My strength lies in IT architecture, both in the cloud and on-premises. In May, I successfully passed the Certified Kubernetes Administrator exam. Utilizing Linux and implementing open-source solutions have always been at the top of my priority list, whether at work or in my homelab.

However, I yearned for more – particularly to contribute to open-source projects. With high ambitions, I sought more coding tasks at work. My Python knowledge proved helpful in extracting data from APIs and performing simple data analyses. While exploring various services in Vertex AI, I stumbled upon new possibilities that would shape my recent journey.

The Birth of an Idea

If you think RAG – short for Retrieval Augmented Generation – was a familiar term to me a few months ago, you'd be mistaken. I started from scratch, without personal connections or a head start. My initial steps were taken in a Jupyter notebook as a side project. It was during this exploration that I first encountered LangChain – at the time, merely a term without deeper meaning to me.

In the following days, I researched beginner-friendly courses on generative AI and its ecosystem. I came across several courses on LangChain, which prompted me to delve deeper into its functionality and potential applications.

Learning Journey

After thorough research, I began with a carefully selected course on Udemy. Within a few days, I had created a working prototype with a Streamlit user interface and a ReAct chain. If these terms are unfamiliar to you, I strongly encourage you to embark on your own learning journey and to keep an eye out for my upcoming blog post on LangGraph.
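To give a feel for what a ReAct chain does, here is a toy sketch of the Reason + Act loop that such an agent runs under the hood. Everything in it is illustrative: the "model" is a hard-coded script standing in for real LLM calls, and the tool is a stub rather than a live lookup.

```python
# Toy ReAct loop: the model alternates Thought/Action steps, the runtime
# executes the requested tool and feeds the result back as an Observation,
# until the model emits a Final Answer. SCRIPT stands in for a real LLM.

def wikipedia_tool(query: str) -> str:
    """Stub standing in for a real Wikipedia lookup tool."""
    return "LangChain is a framework for building LLM applications."

TOOLS = {"wikipedia": wikipedia_tool}

# Scripted model turns; a real LLM would generate these one by one.
SCRIPT = [
    "Thought: I should look this up.\nAction: wikipedia\nAction Input: LangChain",
    "Thought: I have enough information.\n"
    "Final Answer: LangChain is a framework for building LLM applications.",
]

def react_loop(question: str) -> str:
    transcript = f"Question: {question}"
    for step in SCRIPT:  # a real agent loops until a Final Answer appears
        transcript += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        # Parse the tool call and append its result as an Observation.
        action = step.split("Action:", 1)[1].split("\n", 1)[0].strip()
        action_input = step.split("Action Input:", 1)[1].strip()
        transcript += f"\nObservation: {TOOLS[action](action_input)}"
    return transcript

print(react_loop("What is LangChain?"))
# → LangChain is a framework for building LLM applications.
```

In the real prototype, LangChain handles this parsing loop for you; the value of seeing it spelled out is realizing how little magic there is in an "agent".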

To make proprietary content on various topics within my organization retrievable, I used Pinecone as the vector database. The ReAct chain incorporated Wikipedia, Google, and a Python REPL for calculations. My supervisor was impressed by the prototype, despite its less-than-optimal performance.
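The retrieval half of RAG boils down to one idea: embed the query, find the most similar stored chunks, and stuff them into the prompt. The sketch below uses a tiny in-memory "vector store" and a fake bag-of-words embedding purely for illustration; in the actual app, an embedding model from Vertex AI produces the vectors and Pinecone stores and searches them.

```python
# Minimal RAG retrieval sketch. embed() is a fake bag-of-words "embedding";
# DOCUMENTS stands in for chunks that would live in Pinecone.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Fake embedding: lower-cased word counts instead of a dense vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

DOCUMENTS = [
    "Kubernetes clusters are administered with kubectl.",
    "LangChain chains combine prompts, models, and tools.",
    "Streamlit turns Python scripts into web apps.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# The retrieved context is then placed into the LLM prompt:
context = retrieve("Which tool administers Kubernetes clusters?")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
```

Swapping the toy pieces for real ones (dense embeddings, Pinecone's query API) changes the scale, not the shape, of this flow.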

By the end of July, I had deployed the application on Google Cloud Run and connected it to the required Vertex AI and Pinecone services. Implementing CI/CD with GitHub Actions was a top priority: as the sole developer on this project for the foreseeable future, I couldn't afford to spend time on manual builds and deployments after every significant change.
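A workflow for this kind of setup can be quite short. The fragment below is a sketch, not my actual pipeline: the service name, region, and secret name are placeholders, and it assumes a service account key stored as a repository secret.

```yaml
# .github/workflows/deploy.yml — illustrative sketch; service, region,
# and the GCP_SA_KEY secret are placeholders you would replace.
name: Deploy to Cloud Run
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - uses: google-github-actions/deploy-cloudrun@v2
        with:
          service: my-rag-app
          region: europe-west1
          source: .
```

With source-based deploys like this, Cloud Run builds the container for you, so a push to main is all it takes to ship a change.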

Looking Ahead

Stay tuned for the next post on this topic, where I'll cover the fundamental changes to the front end and back end, including the migration to LangGraph. This evolution promises to bring enhanced performance and scalability to the project, opening up new possibilities for AI-driven applications in our organization.