Unlocking the Potential of LangChain Expression Language (LCEL): A Hands-On Guide

Pradip Nichite

Introduction

LangChain Expression Language (LCEL) is a declarative way to compose workflows around large language models (LLMs). It simplifies complex pipelines, making it easier for developers to leverage the power of AI in their applications. In this blog, we'll explore LCEL through practical examples.

Setting the Stage: Basic Setup

Before diving into LCEL, it's crucial to set up the necessary environment. This setup involves installing the LangChain library along with other essential packages:

!pip install langchain
!pip install openai
!pip install chromadb
!pip install tiktoken

Once installed, you can begin coding by importing the required modules and setting up your API keys:

import os
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

# Set your API key
os.environ['OPENAI_API_KEY'] = "sk-..."

Understanding the Output: Detailed Examples

model = ChatOpenAI()
output_parser = StrOutputParser()
prompt = ChatPromptTemplate.from_template(
    "Create a lively and engaging product description with emojis based on these notes: \n{product_notes}"
)

Let's dive deeper into the output generated at each step of our LCEL example to fully grasp its functionality.

First, we invoke the prompt with specific product notes:

prompt_value = prompt.invoke({"product_notes": "Multi color affordable mobile covers"})

The prompt_value here holds the structured request we sent to the model:

ChatPromptValue(messages=[HumanMessage(content='Create a lively and engaging product description with emojis based on these notes: \nMulti color affordable mobile covers')])

We can convert this value to a string to see how it's presented:

prompt_value.to_string()

This yields a human-readable format of the prompt:

Human: Create a lively and engaging product description with emojis based on these notes:
Multi color affordable mobile covers

To pass this prompt to the model, we convert it to messages:

prompt_value.to_messages()

This produces the list-of-messages format that chat models expect as input:

[HumanMessage(content='Create a lively and engaging product description with emojis based on these notes: \nMulti color affordable mobile covers')]

Next, the model is invoked with these messages:

model_output = model.invoke(prompt_value.to_messages())

The model_output contains the AI-generated product description:

AIMessage(content="🌈📱Get ready to dress up your phone in a kaleidoscope of colors with our multi-color affordable mobile covers! 🎉💃...")

Finally, we parse this output:

output_parser.invoke(model_output)

This yields a well-formatted, human-readable product description:

🌈📱Get ready to dress up your phone in a kaleidoscope of colors with our multi-color affordable mobile covers! 🎉💃...

This step-by-step breakdown showcases the power and flexibility of LCEL, illustrating how it handles and transforms data at each stage of the process. The ability to see and understand each component's output is invaluable for debugging and refining your AI-driven applications.
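To make that data flow concrete, here is a minimal sketch in plain Python (no LangChain) of how each stage exposes the same .invoke interface. The class names are illustrative stand-ins, not LangChain's actual classes:

```python
# Toy versions of the three pipeline stages (illustrative, not LangChain code).

class ToyPrompt:
    def __init__(self, template):
        self.template = template

    def invoke(self, inputs: dict) -> str:
        # Fills the template, like ChatPromptTemplate producing a prompt value
        return self.template.format(**inputs)

class ToyModel:
    def invoke(self, prompt_text: str) -> dict:
        # Stands in for ChatOpenAI; returns a message-like dict
        return {"content": f"MODEL OUTPUT for: {prompt_text}"}

class ToyParser:
    def invoke(self, message: dict) -> str:
        # Like StrOutputParser: extracts the plain string
        return message["content"]

toy_prompt = ToyPrompt("Describe: {product_notes}")
toy_model = ToyModel()
toy_parser = ToyParser()

step1 = toy_prompt.invoke({"product_notes": "mobile covers"})
step2 = toy_model.invoke(step1)
result = toy_parser.invoke(step2)
print(result)  # MODEL OUTPUT for: Describe: mobile covers
```

Because every stage speaks the same .invoke protocol, the output of one can be fed directly into the next, which is exactly what chaining automates.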

The Power of Chaining in LCEL

One of the most powerful features of LCEL is the ability to chain operations. This capability is showcased in the following example:

chain = prompt | model | output_parser
product_description = chain.invoke({"product_notes": "Multi color affordable mobile covers"})
print(product_description)

The | operator elegantly chains the prompt, model, and output parser, simplifying what would typically be a complex series of operations.
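Under the hood, the | syntax works because LCEL components implement Python's __or__ operator to build a sequence. A toy version of the idea (hypothetical names, not LangChain internals) looks like this:

```python
class ToyRunnable:
    """Minimal sketch of a pipe-composable component."""

    def __init__(self, func):
        self.func = func

    def invoke(self, x):
        return self.func(x)

    def __or__(self, other):
        # a | b returns a new component that runs a, then feeds its output to b
        return ToyRunnable(lambda x: other.invoke(self.invoke(x)))

to_prompt = ToyRunnable(lambda d: f"Describe: {d['product_notes']}")
fake_model = ToyRunnable(lambda p: {"content": p.upper()})
to_string = ToyRunnable(lambda m: m["content"])

toy_chain = to_prompt | fake_model | to_string
print(toy_chain.invoke({"product_notes": "mobile covers"}))  # DESCRIBE: MOBILE COVERS
```

The pipe is just function composition with a friendlier syntax, which is why LCEL chains stay readable even as they grow.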

Streaming and Batch Processing

LCEL also supports streaming and batch processing, allowing for efficient handling of multiple inputs and real-time data flows:

# Streaming Example
for chunk in chain.stream({"product_notes": "Multi color affordable mobile covers"}):
    print(chunk, end="", flush=True)

# Batch Processing Example
product_notes_list = [
    {"product_notes": "Eco-friendly reusable water bottles"},
    # Add more product notes here
]
batch_descriptions = chain.batch(product_notes_list)

These examples illustrate LCEL's versatility in handling various types of data inputs and processing needs.
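Conceptually, stream yields partial output as it arrives while batch maps invoke over a list of inputs. A self-contained sketch of those two behaviors (a mock chain, not LangChain's implementation) is:

```python
class MiniChain:
    """Mock chain illustrating the invoke / stream / batch trio."""

    def invoke(self, inputs: dict) -> str:
        return f"Description for {inputs['product_notes']}"

    def stream(self, inputs: dict):
        # Yield the result piece by piece, like chain.stream yielding chunks
        for token in self.invoke(inputs).split(" "):
            yield token + " "

    def batch(self, inputs_list):
        # Run invoke over each input, like chain.batch
        return [self.invoke(i) for i in inputs_list]

mini = MiniChain()
chunks = list(mini.stream({"product_notes": "water bottles"}))
results = mini.batch([{"product_notes": "A"}, {"product_notes": "B"}])
```

With a real LLM, streaming lets you render tokens as they are generated, and batching lets the library parallelize calls across inputs.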

Advanced Use Case: Retrieval Augmented Generation (RAG)

LCEL goes beyond simple chaining. The following RAG example demonstrates its capability in more complex scenarios:

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.schema.runnable import RunnablePassthrough

# Sample documents
docs = ["Document on Climate Change...", "AI in Healthcare..."]

# Retrieval setup
vectorstore = Chroma.from_texts(docs, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 1})

# A prompt that expects the retrieved context and the user's question
rag_prompt = ChatPromptTemplate.from_template(
    "Answer the question based on the following context:\n{context}\n\nQuestion: {question}"
)

chain = {"context": retriever, "question": RunnablePassthrough()} | rag_prompt | model | output_parser

# Invoke the chain with a query related to the documents
response = chain.invoke("What do the documents say about climate change?")
print(response)
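The key idea is the input mapping: the retriever fills the "context" slot while RunnablePassthrough forwards the raw question unchanged. A toy sketch of that shape, with a keyword-overlap retriever standing in for the embedding-based vector store, looks like:

```python
def toy_retrieve(query: str, docs: list, k: int = 1) -> list:
    # Toy retriever: rank docs by word overlap with the query.
    # Real retrieval (vectorstore.as_retriever) uses embeddings instead.
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = ["Document on Climate Change and rising sea levels.",
        "AI in Healthcare: diagnosis and drug discovery."]

question = "What is said about climate change?"
inputs = {
    "context": toy_retrieve(question, docs),  # what the retriever supplies
    "question": question,                     # RunnablePassthrough() forwards this as-is
}
prompt_text = ("Answer using this context:\n{context}\n\n"
               "Question: {question}").format(**inputs)
```

From here, prompt_text is what the model stage receives; the rest of the chain is identical to the simpler examples above.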

Conclusion

In conclusion, LangChain Expression Language (LCEL) presents a flexible and powerful way to work with large language models, allowing for easy composition of complex tasks. As we've seen through the code snippets and explanations, LCEL simplifies the process of generating dynamic content, handling data streams, and performing advanced operations like Retrieval Augmented Generation (RAG).

If you're keen on exploring more about LCEL and would like to see these concepts in action, I highly recommend watching our detailed tutorial video. This video provides a practical, visual guide to using LCEL, complementing the insights shared in this blog post. It's a great resource for both beginners and experienced users looking to deepen their understanding of LangChain and its capabilities.

Whether you're a developer, researcher, or just someone fascinated by the potential of AI and language models, the video will offer valuable insights and enhance your skills in AI-driven application development.

Thank you for reading, and happy coding with LCEL!

Jupyter Notebook: https://github.com/PradipNichite/Youtube-Tutorials/blob/main/LangChain_Expression_Language_(LCEL)_Tutorial.ipynb


If you're curious about the latest in AI technology, I invite you to visit my project, AI Demos, at aidemos.com. It's a rich resource offering a wide array of video demos showcasing the most advanced AI tools. My goal with AI Demos is to educate and illuminate the diverse possibilities of AI.

For even more in-depth exploration, be sure to visit my YouTube channel at youtube.com/@aidemos.futuresmart. Here, you'll find a wealth of content that delves into the exciting future of AI and its various applications.


Written by

Pradip Nichite

🚀 I'm a Top Rated Plus NLP freelancer on Upwork with over $100K in earnings and a 100% Job Success rate. This journey began in 2022 after years of enriching experience in the field of Data Science.

📚 Starting my career in 2013 as a Software Developer focusing on backend and API development, I soon pursued my interest in Data Science by earning my M.Tech in IT from IIIT Bangalore, specializing in Data Science (2016 - 2018).

💼 Upon graduation, I carved out a path in the industry as a Data Scientist at MiQ (2018 - 2020) and later ascended to the role of Lead Data Scientist at Oracle (2020 - 2022).

🌐 Inspired by my freelancing success, I founded FutureSmart AI in September 2022. We provide custom AI solutions for clients using the latest models and techniques in NLP.

🎥 In addition, I run AI Demos, a platform aimed at educating people about the latest AI tools through engaging video demonstrations.

🧰 My technical toolbox encompasses:
🔧 Languages: Python, JavaScript, SQL.
🧪 ML Libraries: PyTorch, Transformers, LangChain.
🔍 Specialties: Semantic Search, Sentence Transformers, Vector Databases.
🖥️ Web Frameworks: FastAPI, Streamlit, Anvil.
☁️ Other: AWS, AWS RDS, MySQL.

🚀 In the fast-evolving landscape of AI, FutureSmart AI and I stand at the forefront, delivering cutting-edge, custom NLP solutions to clients across various industries.