TIL about LangChain's RunnableLike objects
While reading through the langchain_core.runnables.base code, the ability to create Runnable objects out of RunnableLike objects caught my eye.
This article is also viewable at https://github.com/WarrenTheRabbit/TIL/.
What is LangChain?
LangChain is a framework for developing applications powered by language models.
What is langchain_core?
Before the release of LangChain 0.1, there was no langchain_core package. Instead, there was just a monolithic langchain package. Since the 0.1 release, langchain consists of three packages:
Package | Description
--- | ---
langchain | higher-level, cognitive-architecture-style components such as use-case-specific chains, agents and retrieval algorithms
langchain_core | base abstractions for the higher-level components and the runtime for the LangChain Expression Language
langchain_community | third-party integrations
What are Runnable objects?
Runnable objects are functions
From one perspective, a Runnable is a function: input goes in and output comes out. LangChain types Runnable objects in this way too:
class Runnable(Generic[Input, Output], ABC):
    ...
Runnable objects are protocols
But a Runnable is also a protocol in the LangChain ecosystem. This is an important perspective to keep in mind: Runnable objects are the lifeblood of the LangChain ecosystem.
From an ecosystem point of view, a Runnable is a unit of work that can be invoked, batched, streamed, transformed and composed. In other words, Runnable objects expose methods that a) the LangChain ecosystem works with and b) client code can use to leverage the LangChain ecosystem.
Consequently, a Runnable object's methods can be used to modify how the core execution logic within the Runnable receives and outputs data, as well as how it behaves and interacts with the LangChain ecosystem. For example, a Runnable can be:
executed in batches
configured with retry policies
observed with lifecycle listeners
executed declaratively (via the LangChain Expression Language)
executed imperatively (called directly with component.invoke(...))
combined with output parsing
But I know I have only seen the tip of the iceberg.
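To make the "one interface, many uses" idea concrete, here is a toy, pure-Python sketch of a minimal runnable-style protocol. This is my own illustration, not LangChain code: one wrapper class supplies an imperative entry point (invoke), batch execution (batch) and declarative composition (the | operator).

```python
# A toy sketch of the idea behind the Runnable protocol (not LangChain code).
class ToyRunnable:
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        # Imperative use: call the core logic directly with one input.
        return self.func(value)

    def batch(self, values):
        # Batch execution: apply the core logic to each input in a list.
        return [self.func(v) for v in values]

    def __or__(self, other):
        # Declarative composition: a | b feeds a's output into b.
        return ToyRunnable(lambda v: other.invoke(self.invoke(v)))


toy_add_one = ToyRunnable(lambda x: x + 1)
print(toy_add_one.invoke(2))                   # 3
print(toy_add_one.batch([2, 3]))               # [3, 4]
print((toy_add_one | toy_add_one).invoke(0))   # 2
```

The point of the sketch is only that a single wrapper can offer several mechanisms of use around one piece of core logic, which is the perspective the rest of this article takes on LangChain's Runnable.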
Exploring RunnableLike objects
RunnableLike objects can become Runnable objects
If an object is RunnableLike, it can be augmented with the Runnable protocol. As the typing definitions show, RunnableLike objects must share the same input-output structure as Runnable objects:
RunnableLike = Union[
    Runnable[Input, Output],
    Callable[[Input], Output],
    Callable[[Input], Awaitable[Output]],
    Callable[[Iterator[Input]], Iterator[Output]],
    Callable[[AsyncIterator[Input]], AsyncIterator[Output]],
    Mapping[str, Any],
]
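For instance, an ordinary generator function fits the Callable[[Iterator[Input]], Iterator[Output]] shape in the union above. This plain-Python example (no LangChain required, and double_each is my own illustrative name) shows the kind of streaming transform that coercion would wrap:

```python
from typing import Iterator


def double_each(inputs: Iterator[int]) -> Iterator[int]:
    # A generator-to-generator transform: one of the RunnableLike shapes,
    # Callable[[Iterator[Input]], Iterator[Output]].
    for item in inputs:
        yield item * 2


print(list(double_each(iter([1, 2, 3]))))  # [2, 4, 6]
```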
To create a Runnable object from a RunnableLike object, you can wrap it directly with the appropriate constructor, or you can delegate the choice of constructor to LangChain. LangChain refers to the delegated approach as 'coercion':
def coerce_to_runnable(thing: RunnableLike) -> Runnable[Input, Output]:
    """Coerce a runnable-like object into a Runnable.

    Args:
        thing: A runnable-like object.

    Returns:
        A Runnable.
    """
    if isinstance(thing, Runnable):
        return thing
    elif inspect.isasyncgenfunction(thing) or inspect.isgeneratorfunction(thing):
        return RunnableGenerator(thing)
    elif callable(thing):
        return RunnableLambda(cast(Callable[[Input], Output], thing))
    elif isinstance(thing, dict):
        return cast(Runnable[Input, Output], RunnableParallel(thing))
    else:
        raise TypeError(
            f"Expected a Runnable, callable or dict."
            f"Instead got an unsupported type: {type(thing)}"
        )
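The dict branch is the least obvious one: a mapping is coerced to a RunnableParallel, which invokes every value on the same input and returns a dict of results. Here is a rough pure-Python sketch of that behaviour (my own stand-in, not LangChain's implementation):

```python
def invoke_parallel_sketch(mapping, value):
    # Rough sketch of RunnableParallel semantics: every entry in the
    # mapping receives the same input; the output is a dict of results.
    return {key: fn(value) for key, fn in mapping.items()}


result = invoke_parallel_sketch(
    {"plus_one": lambda x: x + 1, "double": lambda x: x * 2},
    3,
)
print(result)  # {'plus_one': 4, 'double': 6}
```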
For example, a Callable 'thing' with the correct input-output behaviour is a RunnableLike. Therefore, it can be coerced to a RunnableLambda. When it is, it is augmented with the Runnable protocol's toolkit and gains access to both of its mechanisms of use (imperative and declarative).
Using the coerced RunnableLike
Take, for example, a simple add_one function. It can be made a Runnable:
from langchain_core.runnables import RunnableLambda
def add_one(x: int) -> int:
    return x + 1

add_runnable = RunnableLambda(add_one)
Now I can mediate my use of add_one through the Runnable interface. This means I can use the Runnable protocol to modify, invoke, compose and trigger hooks on top of add_one's core execution logic.
I can call it imperatively with an input:
>>> add_runnable.invoke(2)
3
I can batch call it with a list of inputs:
>>> add_runnable.batch([2, 3, 4])
[3, 4, 5]
I can add event listeners on start, end and error:
>>> (add_runnable
.with_listeners(on_start=lambda _: print("Starting..."))
.invoke(2))
Starting...
3
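Conceptually, a listener just wraps the core logic with optional before/after hooks. Here is a pure-Python sketch of that pattern (my own illustration, not LangChain's implementation, which passes richer run information to the callbacks):

```python
def with_listeners_sketch(fn, on_start=None, on_end=None):
    # Sketch of the listener pattern: run optional hooks around the core logic.
    def wrapped(value):
        if on_start:
            on_start(value)
        result = fn(value)
        if on_end:
            on_end(result)
        return result
    return wrapped


events = []
listened = with_listeners_sketch(
    lambda x: x + 1,
    on_start=lambda v: events.append(("start", v)),
    on_end=lambda r: events.append(("end", r)),
)
print(listened(2))  # 3
print(events)       # [('start', 2), ('end', 3)]
```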
If my Runnable is something that can fail and I want it to try again on failure, I can add a retry policy. Let's demonstrate a retry policy by decorating add_one so that it fails twice before succeeding:
def fail_twice_before_success(func):
    attempts = 0

    def inner(*args, **kwargs):
        nonlocal attempts
        print(f"Attempt {attempts + 1}")
        if attempts < 2:
            attempts += 1
            raise Exception("Failed")
        attempts = 0
        print("Success!")
        return func(*args, **kwargs)

    return inner

fail_twice_before_adding_one = fail_twice_before_success(add_one)
Now let's create a Runnable with a retry policy:
>>> fragile_runnable = (RunnableLambda(fail_twice_before_adding_one)
        .with_retry(stop_after_attempt=3))
>>> fragile_runnable.invoke(2)
Attempt 1
Attempt 2
Attempt 3
Success!
3
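Under the hood, a retry policy amounts to a loop that re-invokes the core logic until it succeeds or the attempt budget runs out. A minimal pure-Python sketch of that idea (not LangChain's implementation; flaky_add_one is an illustrative stand-in for the decorated function above):

```python
def invoke_with_retry_sketch(fn, value, max_attempts=3):
    # Sketch of a retry policy: re-invoke until success
    # or until the attempt budget is spent.
    for attempt in range(1, max_attempts + 1):
        try:
            return fn(value)
        except Exception:
            if attempt == max_attempts:
                raise


calls = {"n": 0}

def flaky_add_one(x):
    calls["n"] += 1
    if calls["n"] < 3:  # fail twice before succeeding
        raise Exception("Failed")
    return x + 1


print(invoke_with_retry_sketch(flaky_add_one, 2))  # 3 (on the third attempt)
```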
Because it is augmented by the Runnable interface, I can include add_one in the LangChain Expression Language, which is a declarative way to compose Runnable objects into a chain.
>>> (add_runnable | add_runnable).invoke(0)
2
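Conceptually, the | operator builds a sequence that feeds each step's output into the next step, i.e. left-to-right function composition. A pure-Python sketch of that idea (my own illustration, not LangChain's RunnableSequence):

```python
def pipe_sketch(*fns):
    # Sketch of what a | chain does: pass the value through each step in order.
    def composed(value):
        for fn in fns:
            value = fn(value)
        return value
    return composed


plus_one = lambda x: x + 1
print(pipe_sketch(plus_one, plus_one)(0))  # 2
```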
I can also add configuration, such as a basic logger that hooks into LangChain's callback system:
>>> from langchain_core.callbacks import StdOutCallbackHandler
>>> handler = StdOutCallbackHandler()
>>> config = {"callbacks": [handler]}
>>> ((add_runnable | add_runnable)
.with_config(config)
.invoke(0))
> Entering new RunnableSequence chain...
> Entering new RunnableLambda chain...
> Finished chain.
> Entering new RunnableLambda chain...
> Finished chain.
> Finished chain.
2
Written by Warren Markham