🚀 Getting Started with LangChain: Your Gateway to Building LLM-Powered Applications


🦜⛓️ The Logo
A parrot is considered a great pet for various reasons, of course, but also for its ability to copy humans and speak their language. But do they actually understand it? They can repeat sentences; they may even have their own favorite phrases that they keep saying. People might find it astonishing, but at the end of the day, it's a parrot: it cannot truly converse or connect the way humans do.
Large Language Models are also called stochastic parrots: they're great at mimicking human language patterns, generating responses that sound intelligent. But they don't actually "understand" the words they generate. They predict the next best word based on patterns, not meaning.
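To see what "predicting the next word from patterns" means in miniature, here is a toy bigram predictor in plain Python. It is purely illustrative (real LLMs work very differently under the hood), but it captures the idea of choosing the next word from observed patterns rather than meaning:

```python
from collections import Counter, defaultdict

corpus = "polly wants a cracker polly wants a nap".split()

# Count which word follows each word in the tiny corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word; no notion of meaning."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("wants"))  # "a" follows "wants" in every observed example
```

Scale that idea up by many orders of magnitude and you get something that sounds fluent while still only continuing patterns.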
But when we talk about building powerful applications, we want more than just word prediction. We want the application to understand, analyze, and even take actions. We need some kind of connection. Chains?
We now understand the logo and the name: LangChain 🦜⛓️
Now, let's get started.
✨ Introduction
Ever stumbled upon LangChain, opened the docs, and thought, "Whoa, this looks powerful... but also a lot"?
Same here.
Hi, I'm Aqsa, a curious developer and AI enthusiast who's spent the last couple of months diving deep into the LangChain ecosystem. And let me tell you, it's a game-changer if you're building with large language models. But it also comes with its share of "Wait… what does this even do?" moments.
So I decided to break it down, step by step: no jargon, no fluff.
Just detailed, beginner-friendly tutorials to help you go from "What is LangChain?" to "Wow, I just built that with LangChain!"
Okay, okay, I don't want to exaggerate: I'm still learning a lot about LangChain myself, and the same goes for LangGraph, so I just want to share my learnings with you. I'd also love to hear your suggestions, doubts, and insights. I hope to learn more with you all :)
This first post is your gateway into the LangChain universe: what it is, why it matters, and how you can start using it to build real magic with LLMs.
🧠 What is LangChain?
LangChain is an open-source framework built to streamline the development of applications powered by large language models (LLMs). Instead of handling prompts, memory, tools, and outputs separately, LangChain offers a modular and composable toolkit to connect everything like Lego blocks.
Itās perfect for:
Chatbots
Code assistants
Data analysis agents
Search over custom data
Multi-step workflows
If you're working with models from OpenAI, Anthropic, Hugging Face, Cohere, etc., LangChain gives you the structure and tools to go beyond "just prompting."
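The "Lego blocks" idea can be sketched in a few lines of plain Python. This toy Step class is not the real LangChain API (LangChain's runnables offer far more), but it shows the kind of pipe-style composition LangChain gives you, where a prompt, a model, and a parser snap together into one chain:

```python
class Step:
    """A tiny stand-in for a LangChain 'runnable': wraps a function, supports | chaining."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Chaining two steps yields a new step that runs them in sequence.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Three "components": a prompt formatter, a (fake) model, and an output parser.
prompt = Step(lambda topic: f"Tell me a joke about {topic}.")
fake_llm = Step(lambda p: f"[model answer to: {p}]")
parser = Step(lambda s: s.strip("[]"))

chain = prompt | fake_llm | parser
print(chain.invoke("parrots"))
```

Swap the fake model for a real one and the surrounding chain does not change: that is the composability LangChain is built around.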
🚀 Why Use LangChain?
LangChain solves some of the most common challenges when building with LLMs:
✅ Composable: Want to string together multiple prompts and actions? LangChain lets you do that easily.
✅ Modular: Swap components like LLMs, retrievers, or memory with minimal changes.
✅ Scalable: Great for quick prototyping, but built with production-grade apps in mind.
✅ Tool Integration: Use APIs, databases, search tools, and even code interpreters seamlessly.
🧩 The LangChain Ecosystem
1. LangChain (Core Framework)
The heart of it all. It provides:
Prompt templates and formatters
Chains (sequential logic for LLMs)
Memory (for context retention)
Agents (LLMs that use tools based on reasoning)
Retrieval-augmented generation (RAG)
Toolkits for APIs, SQL, web scraping, and more
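Two of those pieces, prompt templates and memory, can be sketched in plain Python. This is a toy illustration of the idea, not the real LangChain classes:

```python
class PromptTemplate:
    """Toy version: fills named slots in a template string."""
    def __init__(self, template):
        self.template = template

    def format(self, **kwargs):
        return self.template.format(**kwargs)

class Memory:
    """Toy version: keeps the running transcript so each prompt carries context."""
    def __init__(self):
        self.turns = []

    def add(self, speaker, text):
        self.turns.append(f"{speaker}: {text}")

    def history(self):
        return "\n".join(self.turns)

template = PromptTemplate("Conversation so far:\n{history}\n\nUser question: {question}")
memory = Memory()
memory.add("User", "Hi, I'm Aqsa.")
memory.add("AI", "Hello Aqsa!")

# Each new question is wrapped with the stored history, so the model sees context.
prompt = template.format(history=memory.history(), question="What's my name?")
print(prompt)
```

LangChain's actual prompt and memory components add validation, chat-message formats, and many persistence backends, but the mechanics are this simple at heart.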
2. LangSmith
LangSmith is your observability, prompt engineering, testing, and evaluation lab for LLM apps.
With LangSmith, you can:
Trace executions and debug chains/agents
Test different prompts for your application
Log interactions and track performance over time
Think of it as Postman + TensorBoard, but for LLM workflows.
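Getting traces flowing into LangSmith is mostly configuration. As an illustration, the environment variables below follow the pattern described in the LangSmith docs at the time of writing (names and the key value are placeholders to adapt; check the current docs before relying on them):

```python
import os

# Turn on LangSmith tracing for a LangChain app via environment variables.
# Variable names as described in the LangSmith docs at the time of writing.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"  # placeholder, not a real key
os.environ["LANGCHAIN_PROJECT"] = "my-first-project"          # optional: group runs by project
```

Once set, LangChain components report their runs automatically, with no changes to your chain code.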
3. LangGraph
LangGraph lets you create stateful, multi-step LLM agents using a graph-based architecture.
Each node = a function, model, or step in your logic
Great for conditional branching, loops, or parallel flows
Perfect for complex agent behavior, like multi-agent systems
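The node-and-edge idea can be sketched without LangGraph at all. This toy loop (plain Python, not the real LangGraph API) shows a node updating shared state and a conditional edge deciding whether to loop back or stop:

```python
def generate(state):
    """A 'node': one step of logic that reads and updates the shared state."""
    state["count"] += 1
    state["answer"] = f"attempt {state['count']}"
    return state

def route(state):
    """A 'conditional edge': retry until three attempts have been made, then stop."""
    return "generate" if state["count"] < 3 else "END"

nodes = {"generate": generate}

def run(state, start="generate"):
    node = start
    while node != "END":
        state = nodes[node](state)
        node = route(state)
    return state

result = run({"count": 0})
print(result["answer"])  # the loop runs three times before the edge routes to END
```

LangGraph formalizes exactly these pieces (nodes, edges, shared state) and adds persistence, streaming, and multi-agent coordination on top.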
🌍 Real-World Use Cases
Some amazing things developers are building with LangChain:
Chatbots that remember previous conversations
Personal AI assistants that schedule meetings or send emails
Tools that summarize and search large documents
Data science copilots that generate Python code on the fly
End-to-end agents that reason, plan, and act autonomously
LangChain Architecture
We have already covered introductions to LangGraph and LangSmith above, so let's move on to understand what packages are available.
langchain-core: This package contains the interfaces and abstractions for the core components. For example, there's a module called prompts, where LangChain defines and handles all types of prompts, from simple strings to structured multi-part pipelines. It also includes other modules like memory, outputs, embeddings, and vector stores. This package can be considered the foundation of LangChain.
langchain: This is where you'll find the chains and retrieval strategies that form the cognitive architecture of any LangChain application. It includes core components like chains, agents, and RAG setups that are not tied to any specific tool or integration. Everything here is generic and works across different backends. Like langchain-core, this package does not include third-party integrations; it simply defines how the logic flows in your app.
Note: Both langchain-core and the main langchain package are model-agnostic and integration-agnostic. This means you don't need to modify these packages when switching between different LLM providers (like OpenAI, Anthropic, or Hugging Face) or vector databases. They define the interfaces, chains, and logic flows in a generic way, so your application logic stays the same even if the underlying tools change. Only the integration packages (like langchain-openai, langchain-anthropic, etc.) need to be updated when you switch services.
Integration packages: These are the plug-and-play packages that connect LangChain to external services like OpenAI, Cohere, or Pinecone. For example, langchain-openai provides access to OpenAI's models through classes like ChatOpenAI and OpenAIEmbeddings. These packages implement the standard interfaces defined in langchain-core, so you can easily switch providers without changing your app's core logic.
langchain-community: This package includes third-party integrations that are maintained by the LangChain community. It covers a wide range of components like chat models, vector stores, tools, and more. Unlike the key integration packages (like langchain-openai), which are maintained separately, this one bundles many smaller integrations together. All dependencies here are optional, so the package stays lightweight and doesn't bloat your environment unless you explicitly install what's needed.
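The package split can be pictured with a toy sketch: the "core" role defines a generic interface, each "integration" role implements it, and the app code never changes when you swap providers. This is illustrative plain Python, not the real package internals:

```python
from abc import ABC, abstractmethod

# "langchain-core" role: a generic interface with no provider details.
class ChatModel(ABC):
    @abstractmethod
    def invoke(self, prompt: str) -> str: ...

# "integration package" role: provider-specific implementations of the same interface.
# (These fakes just echo the prompt; real integrations call the provider's API.)
class FakeOpenAIChat(ChatModel):
    def invoke(self, prompt):
        return f"openai-style answer to: {prompt}"

class FakeAnthropicChat(ChatModel):
    def invoke(self, prompt):
        return f"anthropic-style answer to: {prompt}"

# "your app" role: written once against the interface, so swapping providers is one line.
def app(model: ChatModel):
    return model.invoke("What is LangChain?")

print(app(FakeOpenAIChat()))
print(app(FakeAnthropicChat()))
```

That one-line swap is exactly what the model-agnostic design of langchain-core buys you.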
🔮 What's Next?
This post is just the beginning. Coming up, I'll be diving into:
Tool/Function calling in LangChain
Structured Output in LangChain
Chains and LCEL
Vector Stores
Retrievers
And so much more!
🙌 Let's Build Together
If this excites you, hit that Follow button so you don't miss future tutorials.
Have a question, idea, or cool use case? Drop it in the comments below or ping me on [your preferred contact].
I'm building this series for learners and tinkerers like you, so let's explore the LangChain universe, one post at a time. 🚀
Subscribe to my newsletter to read articles from Aqsa Shoeb directly in your inbox.