A.I. and Web Crawling
I wanted to try out Amazon Bedrock by building something where I could hold the context of a webpage and ask an LLM questions about it.
But with a challenge: To develop all of this in a week (around 1-2 hrs daily).
Thought about a use case for laws: how an individual could reference a law document and ask questions about it. There were plenty of PDF-chat apps, so I figured a web version would be great, one where you could reference multiple publicly available law docs and have your questions answered. The REST backend would be Python (using FastAPI) and the frontend React + Vite, since this was a quick project with no additional routes or server-rendered pages, just a chat interface and a way to add webpages as context. (Next.js would've been overkill.)
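Since there are really only two interactions, the backend stays tiny. Here's a minimal sketch of what it could look like; the route names (`/context`, `/chat`) and payload fields are my own illustrative assumptions, not the project's actual API:

```python
# Minimal FastAPI skeleton for the two things the backend has to do: accept a
# URL to use as context, and answer chat messages against it.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AddUrlRequest(BaseModel):
    session_id: str
    url: str

class ChatRequest(BaseModel):
    session_id: str
    message: str

@app.post("/context")
def add_context(req: AddUrlRequest):
    # Scrape the page and append it to this session's context (see later sketch).
    return {"status": "added", "url": req.url}

@app.post("/chat")
def chat(req: ChatRequest):
    # Look up the session's history, call the model, store the new turns.
    return {"reply": "..."}
```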
First off, I needed to choose a foundation model suited to my use case, i.e. one with a large context limit. I could've gone with a RAG implementation and a smaller-context model, but that wasn't the objective. The largest context window I could find was Cohere's Command R. Turns out there was no native chat API support for this model (or so I thought), so to implement the chat I decided on LangChain. To store the chat history, DynamoDB was the fastest to implement. I could've stored the history in the browser's localStorage, but that wouldn't be feasible if the history gets huge. For the UI, I used shadcn/ui (the best thing ever).
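A rough sketch of that wiring, assuming LangChain's DynamoDB-backed message history and its Bedrock chat wrapper; the table name, session id, and model id below are placeholders, not the project's actual values:

```python
# Sketch: DynamoDB-backed chat history feeding a Bedrock chat model via LangChain.
from langchain_aws import ChatBedrock
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(
    table_name="ChatHistory",      # hypothetical table name
    session_id="temp-user-123",    # temporary per-session id (see next section)
)

llm = ChatBedrock(model_id="cohere.command-r-v1:0", region_name="us-east-1")

history.add_user_message("What does Section 3 of this act cover?")
response = llm.invoke(history.messages)   # pass the stored turns to the model
history.add_ai_message(response.content)  # persist the reply for the next turn
```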
Other than these specifics, I wanted the chat data to be temporary, i.e. the chat history won't persist once you exit the current session (reload, close tab, etc.). To implement this, I generate a temporary user id for the client on every page-load event, and the DynamoDB data is stored under that id. How do I tackle the stale, unreachable data left behind in DynamoDB? Using the TTL (Time To Live) attribute on the table.
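Roughly, the idea looks like this; the table and attribute names are assumptions, and the project writes items through LangChain's history class rather than raw boto3:

```python
# Sketch: each page load gets a throwaway user id, and every item stored in
# DynamoDB carries a TTL attribute so stale sessions expire on their own.
import time
import uuid

import boto3

dynamodb = boto3.client("dynamodb")

def new_session_id() -> str:
    """Generate a temporary user/session id on every page load."""
    return uuid.uuid4().hex

def put_message(session_id: str, message: str, ttl_hours: int = 24) -> None:
    expire_at = int(time.time()) + ttl_hours * 3600
    dynamodb.put_item(
        TableName="ChatHistory",                       # hypothetical table
        Item={
            "SessionId": {"S": session_id},
            "Timestamp": {"N": str(int(time.time()))},
            "Message": {"S": message},
            "expireAt": {"N": str(expire_at)},          # TTL attribute (epoch seconds)
        },
    )

# TTL has to be enabled once on the table for the attribute to take effect:
# dynamodb.update_time_to_live(
#     TableName="ChatHistory",
#     TimeToLiveSpecification={"Enabled": True, "AttributeName": "expireAt"},
# )
```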
Connecting LangChain and Command R was tedious. The errors suggested that Command R's chat system wasn't properly implemented in LangChain. While trying to fix these, a surprise came up: the AWS docs show that Command R does have a chat history parameter in its API. Great. For webpage referencing, whenever the user adds a URL, Python's beautifulsoup4 grabs the textual data from the page, cleans it, and adds it to the chat history (context) in a format that makes the LLM treat it as a referenced document rather than a user message.
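A sketch of how those two pieces could fit together, based on the Bedrock/Cohere docs rather than the project's exact code; the document wrapper, the SYSTEM role, and the model id are my assumptions:

```python
# Sketch: BeautifulSoup pulls the page text, the cleaned text is injected into
# the context as a referenced document, and the request to Command R on Bedrock
# passes prior turns through its native chat_history field.
import json

import boto3
import requests
from bs4 import BeautifulSoup

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def fetch_page_text(url: str) -> str:
    """Fetch a page and reduce it to plain text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()  # drop non-content noise
    return " ".join(soup.get_text(separator=" ").split())

def ask(question: str, url: str, chat_history: list[dict]) -> str:
    # Present the scraped page as a referenced document, not as a user message.
    # (Using a SYSTEM turn for this is an assumption on my part.)
    context = f"[Referenced document from {url}]\n{fetch_page_text(url)}"
    body = {
        "message": question,
        "chat_history": chat_history + [{"role": "SYSTEM", "message": context}],
    }
    response = bedrock.invoke_model(
        modelId="cohere.command-r-v1:0",
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())["text"]

# chat_history entries follow Cohere's {"role": "USER" | "CHATBOT", "message": ...} shape.
history = [{"role": "USER", "message": "Hi"}, {"role": "CHATBOT", "message": "Hello!"}]
print(ask("What does Section 3 cover?", "https://example.com/some-act", history))
```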
The week was almost over (a day left) and the final boss was staring at me: deployment. Well, not really a final boss, because I had done the hard part of it during NoBurnCloud. I wonder why deploying frontends (Vercel, Netlify) is so much easier than deploying secure backends (DigitalOcean, AWS EC2). Setting up SSL is a time-consuming, repetitive process, considering I have to set up a server block to configure an NGINX reverse proxy and install certbot (through pip).
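For reference, the repetitive bit looks roughly like this; the domain and upstream port are placeholders, and the certbot commands are the standard ones rather than anything project-specific:

```nginx
# Sketch of an NGINX server block reverse-proxying to the FastAPI app.
# Certbot later rewrites this block for HTTPS.
server {
    listen 80;
    server_name api.example.com;           # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:8000;  # uvicorn/FastAPI upstream
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# Then, on the server:
#   pip install certbot certbot-nginx
#   certbot --nginx -d api.example.com
```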
Need a tool to automate this for me.
Check out the finished project here: MajorLaw