šŸš€ Getting Started with LangChain: Your Gateway to Building LLM-Powered Applications

Aqsa Shoeb

A parrot is considered a great pet, of course, for various reasons, but also for its ability to copy humans and speak their language. But does it actually understand it? It can repeat sentences; it may even have its own favorite phrases that it keeps saying. People might find it astonishing, but at the end of the day, it's a parrot: it cannot truly converse or connect the way humans do.

Large Language Models are also called stochastic parrots — they’re great at mimicking human language patterns, generating responses that sound intelligent. But they don’t actually "understand" the words they generate. They predict the next best word based on patterns, not meaning.

But when we talk about building powerful applications, we want more than just word prediction. We want the application to understand, analyze, and even take actions. We need some kind of connection — chains?

We now understand the logo and the name: LangChain 🦜 ā›“ļø

Now, let’s get started.

✨ Introduction

Ever stumbled upon LangChain, opened the docs, and thought—"Whoa, this looks powerful... but also a lot"?
Same here.

Hi, I’m Aqsa—a curious developer and AI enthusiast who’s spent the last couple of months diving deep into the LangChain ecosystem. And let me tell you—it’s a game-changer if you’re building with large language models. But it also comes with its share of ā€œWait… what does this even do?ā€ moments.

So I decided to break it down—step by step, no jargon, no fluff.
Just detailed, beginner-friendly tutorials to help you go from ā€œWhat is LangChain?ā€ to ā€œWow, I just built that with LangChain!ā€

Okay, okay, I don’t want to exaggerate: I’m still learning a lot about it myself. The same goes for LangGraph, so I just want to share my learnings with you. I’d also love to hear your suggestions, doubts, and insights. I hope to keep learning with you all :).

This first post is your gateway into the LangChain universe—what it is, why it matters, and how you can start using it to build real magic with LLMs.


🧠 What is LangChain?

LangChain is an open-source framework built to streamline the development of applications powered by large language models (LLMs). Instead of handling prompts, memory, tools, and outputs separately, LangChain offers a modular and composable toolkit to connect everything like Lego blocks.

It’s perfect for:

  • Chatbots

  • Code assistants

  • Data analysis agents

  • Search over custom data

  • Multi-step workflows

If you're working with models from OpenAI, Anthropic, Hugging Face, Cohere, etc.—LangChain gives you the structure and tools to go beyond "just prompting."


šŸš€ Why Use LangChain?

LangChain solves some of the most common challenges when building with LLMs:

āœ… Composable: Want to string together multiple prompts and actions? LangChain lets you do that easily.
āœ… Modular: Swap components like LLMs, retrievers, or memory with minimal changes.
āœ… Scalable: Great for quick prototyping, but built with production-grade apps in mind.
āœ… Tool Integration: Use APIs, databases, search tools, and even code interpreters seamlessly.


🧩 The LangChain Ecosystem

1. LangChain (Core Framework)

The heart of it all. It provides:

  • Prompt templates and formatters

  • Chains (sequential logic for LLMs)

  • Memory (for context retention)

  • Agents (LLMs that use tools based on reasoning)

  • Retrieval-based generation (RAG)

  • Toolkits for APIs, SQL, web scraping, and more

2. LangSmith

LangSmith is your observability, prompt engineering, testing, and evaluation lab for LLM apps.
With LangSmith, you can:

  • Trace executions and debug chains/agents

  • Test different prompts for your application

  • Log interactions and track performance over time

Think of it as Postman + TensorBoard—but for LLM workflows.
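In practice, LangSmith tracing is switched on via environment variables rather than code changes, so any LangChain app starts emitting traces automatically. A minimal sketch (the project name below is a made-up example; you'd use your own LangSmith API key):

```shell
# Enable LangSmith tracing for any LangChain app.
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="<your-langsmith-api-key>"
export LANGCHAIN_PROJECT="my-first-langchain-app"   # optional: groups traces
```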

3. LangGraph

LangGraph lets you create stateful, multi-step LLM agents using a graph-based architecture.

  • Each node = a function, model, or step in your logic

  • Great for conditional branching, loops, or parallel flows

  • Perfect for complex agent behavior, like multi-agent systems


🌐 Real-World Use Cases

Some amazing things developers are building with LangChain:

  • Chatbots that remember previous conversations

  • Personal AI assistants that schedule meetings or send emails

  • Tools that summarize and search large documents

  • Data science copilots that generate Python code on the fly

  • End-to-end agents that reason, plan, and act autonomously


LangChain Architecture

We've already introduced LangSmith and LangGraph above, so let's move on to the packages that make up the LangChain ecosystem.

  • langchain-core: This package holds the base interfaces and abstractions for core components. For example, there's a module called prompts, where LangChain defines and handles all types of prompts: from simple strings to structured multi-part pipelines. It also includes other modules like memory, outputs, embeddings, vector stores, etc. This package can be considered the foundation of LangChain. Neither langchain-core nor langchain (the main package) defines any third-party integrations.

  • LangChain: This is where you’ll find chains and retrieval strategies that form the cognitive architecture of any LangChain application. It includes core components like chains, agents, and RAG setups that are not tied to any specific tool or integration. Everything here is generic and works across different backends. Like langchain-core, this package does not include third-party integrations—it simply defines how the logic flows in your app.

    Note: Both langchain-core and the main langchain package are model-agnostic and integration-agnostic. This means you don’t need to modify these packages when switching between different LLM providers (like OpenAI, Anthropic, or Hugging Face) or vector databases. They define the interfaces, chains, and logic flows in a generic way—so your application logic stays the same, even if the underlying tools change. Only the integration packages (like langchain-openai, langchain-anthropic, etc.) need to be updated when you switch services.

  • Integration Packages: These are the plug-and-play packages that connect LangChain to external services like OpenAI, Cohere, or Pinecone. For example, langchain-openai provides access to OpenAI’s models like ChatOpenAI and OpenAIEmbeddings. These packages implement standard interfaces defined in langchain-core, so you can easily switch providers without changing your app’s core logic.

  • langchain-community: This package includes third-party integrations that are maintained by the LangChain community. It covers a wide range of components like chat models, vector stores, tools, and more. Unlike the key integration packages (like langchain-openai), which are maintained separately, this one bundles many smaller integrations together. All dependencies here are optional, so the package stays lightweight and doesn't bloat your environment unless you explicitly install what's needed.

šŸ”® What's Next?

This post is just the beginning. Coming up, I’ll be diving into:

  • Tool/Function calling in LangChain

  • Structured Output in LangChain

  • Chains and LCEL

  • Vector Stores

  • Retrievers

  • And so much more!


šŸ™Œ Let’s Build Together

If this excites you, hit that Follow button so you don’t miss future tutorials.
Have a question, idea, or cool use case? Drop it in the comments below or ping me on [your preferred contact].

I’m building this series for learners and tinkerers like you—so let’s explore the LangChain universe, one post at a time. šŸš€
