Building AI Agents With LangChain Part II: Understanding Tools

Osahon Oboite
3 min read

Previously, we looked at AI agents, including ReAct agents, and introduced LangChain. In this lesson, we will focus on one of LangChain’s most powerful features by looking at what tools are, how they work, and how to use them.

From the previous tutorial, we saw the following piece of code:

On line 15 we defined the list of tools for our agent.
When we ran npx tsx agent.ts, our AI agent used a provided tool, TavilySearchResults, to search the internet for the latest weather report.

That’s what tools are — utility functions for your AI agent.

In LangChain, tools are functions with a well-defined input and output interface that can be called by agents or chat models (that support tool calling) to perform specific, often external, tasks such as API calls, calculations, or database queries.

How to Create a Tool in LangChain

LangChain provides a tool wrapper function (or a factory, if you like) for creating tools.
This factory takes the tool's function as its first parameter and a config object as its second.

Let’s take a closer look at the config object:

  • schema: LangChain supports schema definitions with Zod. The schema describes the shape of the tool’s input to the chat model so that the tool can be called with properly structured arguments.

  • name: a contiguous string (no spaces or special characters) used to reference the tool.

  • description: optional but highly recommended. The description provides more details about the tool, enabling the chat model to know when to call the tool.

Other parameters, not covered in this chapter but explored in subsequent articles, include:

  • callbacks: a list of callback event handlers

  • metadata: pass additional data to your tool that can be used for logging, monitoring, tool selection logic, etc.

  • responseFormat: content | content_and_artifact. The default is content. If content_and_artifact is passed, the output is expected to be a two-tuple corresponding to the (content, artifact) of a ToolMessage.

  • returnDirect: when set to true, the agent returns the tool’s response as-is, skipping further reasoning. The default is false.

  • tags: an array of strings used for organizational purposes without altering the tool’s behaviour.

  • verbose: when set to true, logs the tool’s input, output, and step-by-step execution.

  • verboseParsingErrors: similar to verbose, but specifically logs errors that occur while parsing the tool’s input.

Tool Usage

Tools can be called inside or outside the context of a chat model.

The invoke method of a tool lets you call it like you would a regular function. Note that invoke is asynchronous, so it returns a Promise:

await multiplicationTool.invoke({ a: 2, b: 3 }) // 6

However, within the context of a chat model that supports the tool-calling API, the chat model decides which tool to call based on the tool’s name, description, and schema.

Let us use our multiplication tool.

Update your agent.ts file to look like so:


Running this code should yield the following:

[tool/start] [1:tool:tools] Entering Tool run with input: "{"a":4,"b":9}"
[tool/end] [1:tool:tools] [3ms] Exiting Tool run with output: "{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain_core",
    "messages",
    "ToolMessage"
  ],
  "kwargs": {
    "content": "36",
    "tool_call_id": "call_RxF7OgbDutBPpK9HZHvaCOQB",
    "name": "multiplication_tool",
    "additional_kwargs": {},
    "response_metadata": {}
  }
}"
The result of multiplying 4 and 9 is 36.

As you can see, the chat model used our tool.

Go ahead, play around and try to break things!

Now that you are familiar with tools, in the next section we will take a look at memory and how to make your AI agent aware of previous conversations, improving the quality of its responses.

Until next time!
