Artificial Intelligence

Exploring LangGraph


Written by Klemens Graf

Prototyping with the LangChain framework and Streamlit was initially straightforward, and these technologies were game-changers for me at the beginning. However, the experience was bittersweet: Streamlit's performance was suboptimal, which hurt the user experience. Consequently, I started evaluating options for future implementations of the application.
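To give a sense of how little code such a prototype needs, here is a minimal sketch of a Streamlit chat loop around a LangChain chat model. It is illustrative only: the model name and provider are placeholders, not what I actually used.

```python
# Minimal Streamlit chat prototype around a LangChain chat model.
# Hypothetical sketch: the model name and provider are placeholders.
import streamlit as st
from langchain.chat_models import init_chat_model
from langchain_core.messages import AIMessage, HumanMessage

llm = init_chat_model("claude-3-5-sonnet-latest", model_provider="anthropic")  # placeholder

st.title("Chat prototype")

# Streamlit reruns the whole script on every interaction,
# so the conversation history has to live in session_state.
if "messages" not in st.session_state:
    st.session_state.messages = []

for msg in st.session_state.messages:
    role = "user" if isinstance(msg, HumanMessage) else "assistant"
    st.chat_message(role).write(msg.content)

if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append(HumanMessage(content=prompt))
    st.chat_message("user").write(prompt)
    reply = llm.invoke(st.session_state.messages)
    st.session_state.messages.append(AIMessage(content=reply.content))
    st.chat_message("assistant").write(reply.content)
```

The simplicity is the appeal; the downside is that every interaction reruns the whole script, which is part of why the performance felt suboptimal.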

Architectural Considerations

I faced a decision regarding the architectural approach. One option was a monolithic approach: using the LangChain JS framework and building everything with Next.js. Alternatively, I could split the application into a front-end and a back-end, with Next.js on the front-end and the Python LangChain framework on the back-end.

Initially, I decided to implement the monolithic approach, planning to split the application later during the proof-of-concept phase. However, my research proved inadequate in this scenario. Due to regulatory requirements, we cannot use the ChatGPT APIs and must rely on a public cloud provider with more enterprise-friendly contracts and data-privacy measures. I was using Claude 3.5 Sonnet from the Vertex AI Model Garden, since Gemini didn't perform well in my evaluations. Unfortunately, I encountered an issue: the JS framework of LangChain lacked a module equivalent to Python's ChatAnthropicVertex. This was a significant setback.
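For context, this is roughly what the Python side looks like when talking to Claude through the Vertex AI Model Garden. The project ID, region, and model version below are placeholders, not my actual configuration.

```python
# Rough sketch of using Claude 3.5 Sonnet via the Vertex AI Model Garden
# with LangChain's Python integration. Project, region, and model version
# are placeholders.
from langchain_google_vertexai.model_garden import ChatAnthropicVertex

llm = ChatAnthropicVertex(
    model_name="claude-3-5-sonnet@20240620",  # placeholder model version
    project="my-gcp-project",                 # placeholder GCP project ID
    location="us-east5",                      # a region where the model is offered
    temperature=0.2,
)

response = llm.invoke("Give me a one-sentence summary of LangChain.")
print(response.content)
```

It was this kind of integration that had no counterpart on the JavaScript side at the time, which is what forced the architectural rethink.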

Transitioning to Multiple Services

Consequently, I opted for a front-end and back-end approach, which has proved to be one of the better decisions so far. Developing the two services didn't take long, although I must admit I'm writing more tutorial-style code than sophisticated, production-ready code. Connecting the services was more challenging: using FastAPI and Pydantic to validate input types took some time before I fully grasped the principles.
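The pattern itself is simple once it clicks. Here is a minimal sketch, not the actual back-end code: a Pydantic model declares the request shape, and FastAPI validates every incoming body against it before the handler runs. The route and field names are illustrative.

```python
# Minimal FastAPI + Pydantic sketch of a typed back-end endpoint.
# Hypothetical shape; route and field names are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI()


class ChatRequest(BaseModel):
    session_id: str
    message: str = Field(min_length=1)


class ChatResponse(BaseModel):
    reply: str


@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest) -> ChatResponse:
    # FastAPI has already validated the JSON body against ChatRequest here;
    # a malformed request never reaches this function and gets a 422 instead.
    reply = f"Echo: {request.message}"  # placeholder for the real LLM call
    return ChatResponse(reply=reply)
```

Once the request and response models are in place, the front-end and back-end effectively share a contract, which made connecting the two services much less error-prone.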

Looking back, it's remarkable to see how far I've come, and this is still just the beginning.

This marked another breakthrough for me and the entire project, two months into development.

LangGraph Enters the Picture

I named my services based on their primary functions: Chatter (front-end) and Rester (back-end). After deploying the new iteration of the project, I identified some imperfections in the tool calling. While researching a fix, I discovered LangGraph, a new addition to my AI-related toolkit.

After studying the documentation and watching tutorials, I created something, but its performance wasn't as good as I had hoped. LangGraph takes a different approach to the kind of agentic AI application I'm building, modeling it as a graph of nodes and edges rather than a simple chain. This was the first real setback in my journey so far. I was stuck for a day or two until I came across a newly uploaded video in which LangChain introduced their academy. Studying the course material over the next few days was once again transformative. It covered not only agent concepts but also provided a deep dive into state management and exciting concepts like human-in-the-loop workflows. Afterward, refactoring my back-end codebase was straightforward and yielded better results than I had anticipated.
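To illustrate the kind of structure the course material teaches, here is a stripped-down sketch of an agent graph with one model node and one tool node. The tool, the model setup, and the names are placeholders, not my actual graph.

```python
# Stripped-down LangGraph agent loop: a model node, a tool node, and a
# conditional edge between them. Hypothetical sketch; tool and model are placeholders.
from typing import Annotated
from typing_extensions import TypedDict

from langchain_core.messages import AnyMessage
from langchain_core.tools import tool
from langchain_google_vertexai.model_garden import ChatAnthropicVertex
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


@tool
def lookup_note(topic: str) -> str:
    """Return a short note for a given topic (placeholder tool)."""
    return f"No note found for {topic}."


tools = [lookup_note]
# Placeholder model, configured as in the earlier Vertex AI sketch.
llm = ChatAnthropicVertex(model_name="claude-3-5-sonnet@20240620",
                          project="my-gcp-project", location="us-east5")
llm_with_tools = llm.bind_tools(tools)


class State(TypedDict):
    # add_messages appends new messages to the state instead of overwriting it
    messages: Annotated[list[AnyMessage], add_messages]


def call_model(state: State) -> dict:
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


builder = StateGraph(State)
builder.add_node("model", call_model)
builder.add_node("tools", ToolNode(tools))

builder.add_edge(START, "model")
# If the last model message requested a tool call, run the tool node; otherwise end.
builder.add_conditional_edges("model", tools_condition)
builder.add_edge("tools", "model")

graph = builder.compile()
```

Keeping the conversation in an explicit state object, rather than buried inside a chain, is what made the refactoring feel so much cleaner.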

The Journey Continues

I've made significant progress, and I continue to improve the application daily. Streaming output to the front-end greatly enhances the user experience, but features like managing multiple chats, persisting them in a database, and uploading documents for embedding into a vector database are crucial for my project. Next, I'll be focusing on observability and responsible AI development.
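As a rough idea of how streaming can be wired up, here is a sketch that exposes a compiled LangGraph graph through a FastAPI streaming endpoint. The route, the payload shape, and the `graph` object (compiled as in the earlier sketch) are assumptions, not my actual implementation.

```python
# Hypothetical sketch of streaming tokens from a compiled LangGraph graph
# through FastAPI; route and payload are illustrative, and `graph` is
# assumed to be the compiled graph from the earlier sketch.
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()


class StreamRequest(BaseModel):
    message: str


@app.post("/chat/stream")
async def chat_stream(request: StreamRequest) -> StreamingResponse:
    async def token_stream():
        # stream_mode="messages" yields (message_chunk, metadata) pairs
        # as the underlying model produces tokens.
        async for chunk, _metadata in graph.astream(
            {"messages": [("user", request.message)]},
            stream_mode="messages",
        ):
            if isinstance(chunk.content, str) and chunk.content:
                yield chunk.content

    return StreamingResponse(token_stream(), media_type="text/plain")
```

On the front-end, the response body can then be read incrementally so tokens appear as they arrive instead of after the full answer is generated.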

Are you interested in a specific AI topic or how I implemented the mentioned features? Reach out on Reddit to share your thoughts, ideas, and perspectives.