A guide to AI Agents: Exploring Tools, Connectors and Their Interactions


Introduction
The year 2025 is shaping up to be the year of agents, and it's no surprise why. AI has evolved from basic back-and-forth question answering to autonomously making decisions and solving real-world business use cases.
Tvara started with a simple goal:
Make it easy for developers to quickly build, deploy and scale agents and agentic workflows within their codebases. (And yes, we’re working on a super cool interactive playground for non-devs too. Stay tuned!)
So for our debut article, let’s warm up by exploring what agents really are and more importantly, where they get their data from.
Agents
We’ve all been using LLMs almost daily ever since ChatGPT’s launch in November 2022.
If you ask (or rather, asked, before tools became the norm for ChatGPT and others) an LLM for today's date, it would probably reply with a date from around its knowledge cutoff. Try it on any locally deployed model such as DeepSeek or Qwen and you'll still see this in action!
The core issue with LLMs is this:
They’re great at reasoning over what they already know (from training), but they can’t perform real-time tasks or access live data on their own.
This is exactly where the concept of agents and tools comes in.
Agents are LLMs that are augmented with external tools and APIs (what we call connectors here) to make smarter, real-world-driven decisions. This allows them to perform a web search, query your calendar, hit an external API or fetch fresh data from a database - all on the go.
So, visually, you can picture an agent as an LLM at the core, wired up to the tools and connectors it can call on demand.
This agent is now tailored for the specific task at hand to power your workflow! But wait, it won't solve a business use case yet. Real scenarios demand much more than single tasks: they need optimal collaboration among players to reach a shared goal.
That’s where multi-agent workflows come in and we’ll get there soon.
Tools
Tools are lightweight functions that help the agent perform focused tasks by augmenting the LLM with specific capabilities.
Think of them as single-purpose utilities. They don’t rely on prior context and are easy to plug in and out.
Some common examples of tools:
A calculator to perform math
A web search tool to pull up live results
A weather checker to fetch current forecasts
A code executor for small snippets or validation
In Tvara, tools are designed to be simple Python functions or classes that can be reused across multiple agents. You are free to define your custom tools as well!
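For instance, a custom tool might look something like the sketch below. It assumes the same BaseTool interface you'll see later in this post (a name/description constructor plus a run method); the import path is a guess for illustration.

```python
# A minimal sketch of a custom tool, assuming the BaseTool interface
# shown later in this post; the import path below is hypothetical.
from tvara.tools import BaseTool

class WordCountTool(BaseTool):
    def __init__(self):
        super().__init__(
            name="word_count_tool",
            description="Counts the words in a piece of text.",
        )

    def run(self, input_data: str) -> str:
        # Single-purpose and stateless: text in, count out.
        return f"The text contains {len(input_data.split())} words."
```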
TL;DR: If your LLM just needs one quick skill to do its job, it's a tool.
Connectors
Connectors are integrations with external systems or SaaS platforms that expose richer APIs and multiple functions. While tools are “skills,” connectors are more like portals into real-world data ecosystems.
Examples include:
Google Calendar: fetch events, create meetings, check availability
Slack: send messages, read channels, trigger workflows
GitHub: fetch issues, pull requests, codebase info
Jira, Notion, Airtable, Trello, etc.
Connectors often involve authentication, multiple endpoints, and may even maintain state or session context depending on how the agent uses them.
In Tvara, you can either use prebuilt connectors or easily define your own using the base connector class.
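As a rough illustration of the idea (not Tvara's actual API), a custom connector might look something like the sketch below; the BaseConnector constructor, the import path, and the method names are assumptions made purely for the example.

```python
# An illustrative connector sketch -- the real BaseConnector interface in
# Tvara may differ; the constructor and import path here are assumptions.
import requests
from tvara.connectors import BaseConnector

class GitHubConnector(BaseConnector):
    def __init__(self, token: str):
        super().__init__(
            name="github",
            description="Fetches issues and pull requests from GitHub.",
        )
        # Unlike tools, connectors usually hold auth/session state.
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {token}"

    def list_issues(self, repo: str) -> list[dict]:
        # One of the several endpoints this connector exposes.
        resp = self.session.get(f"https://api.github.com/repos/{repo}/issues")
        resp.raise_for_status()
        return [{"title": i["title"], "state": i["state"]} for i in resp.json()]
```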
TL;DR: If your agent needs to talk to the outside world in a structured, multi-functional way, it's a connector.
In summary,
| Feature | Tool | Connector |
| --- | --- | --- |
| Purpose | One-off, focused function | Rich, multi-endpoint integration |
| Examples | Calculator, Search | Slack, GitHub, Google Calendar |
| Complexity | Low | Moderate–High |
| Stateful? | Stateless | May require sessions |
| Dev Effort | Simple function | Usually requires API client |
How do they all work together?
A natural question at this point:
How does the LLM know when to call a tool or connector, how to invoke it, and what to do with the result?
Let’s break it down.
The Agent Loop
Think of a user input like this:
"Check if I have any meetings this afternoon and then tell me if I have time to go for a walk."
Here’s what happens within the loop:
1. Input Parsing: the user prompt is passed to the LLM with some context (system prompt, history, available tools/connectors).
2. Tool Decision: the LLM, based on its prompt and few-shot examples, decides: "I need to check Google Calendar to answer this."
3. Tool Invocation: the agent framework captures the tool/connector call, pulls out the arguments (like today's date), and executes the corresponding tool or connector.
4. Result Handling: once the tool/connector returns a result (e.g., a list of calendar events), that output is fed back to the LLM.
5. Final Response Generation: now, with live data in hand, the LLM composes a final reply like: "You have meetings till 4 PM. You'll be free for a walk around 4:30."
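To make the loop concrete, here's a framework-agnostic sketch in Python. This is not Tvara's implementation: the llm callable, the reply.tool_call attribute, and the message format are illustrative stand-ins for whatever your framework provides.

```python
# A framework-agnostic sketch of the agent loop above -- not Tvara's code.
# `llm` is any callable that returns a reply with .text and .tool_call.
def describe(tools: dict) -> str:
    # Tell the model, in plain text, what it is allowed to call.
    return "You can call: " + ", ".join(
        f"{name} ({tool.description})" for name, tool in tools.items()
    )

def agent_loop(user_prompt: str, llm, tools: dict) -> str:
    # 1. Input parsing: prompt plus context about available tools.
    messages = [{"role": "system", "content": describe(tools)},
                {"role": "user", "content": user_prompt}]
    while True:
        reply = llm(messages)
        # 2. Tool decision: the model either answers or requests a tool.
        if reply.tool_call is None:
            # 5. Final response generation, with any live data in context.
            return reply.text
        # 3. Tool invocation: run the requested tool with the model's arguments.
        result = tools[reply.tool_call.name].run(reply.tool_call.arguments)
        # 4. Result handling: feed the output back for another pass.
        messages.append({"role": "tool", "content": result})
```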
How Tvara does this
Tvara follows a lightweight design:
You register tools and connectors by subclassing BaseTool or BaseConnector.
The agent emits a JSON payload whenever it decides a tool or connector needs to be invoked.
A Python function parses this JSON, figures out which tool/connector was requested, runs it, and returns the result to the agent.
You don't need to write complex handlers or chains; it just works!
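To give a feel for that dispatch step, here's an illustrative sketch; the exact JSON schema the agent emits and the registry lookup are assumptions made for the example, not Tvara's actual internals.

```python
# An illustrative sketch of the JSON dispatch step; the exact schema the
# agent emits may differ from this example.
import json

def dispatch(llm_output: str, tools: dict, connectors: dict) -> str:
    # Example of what the model might emit:
    # {"type": "tool", "name": "date_tool", "input": "now"}
    call = json.loads(llm_output)
    registry = tools if call["type"] == "tool" else connectors
    # Run the matched tool/connector and hand the result back to the agent.
    return registry[call["name"]].run(call["input"])
```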
Here's a simplified code peek (we'll go in depth on the Tvara SDK in the next post, we promise!):
Tool Definition
```python
import datetime
from .base import BaseTool

class DateTool(BaseTool):
    def __init__(self):
        super().__init__(name="date_tool", description="Returns the current date.")

    def run(self, input_data: str) -> str:
        return f"Today's date is {datetime.date.today()}. Today's day is {datetime.date.today().strftime('%A')}. The current time is {datetime.datetime.now().strftime('%H:%M:%S')}."
```
Tool Usage
```python
import os

from tvara.core import Agent
from tvara.tools import DateTool

my_agent = Agent(
    name="Good Agent",
    model="gemini-2.5-flash",
    api_key=os.getenv("MODEL_API_KEY"),
    tools=[DateTool()],
)

my_agent.run("What would be the day of the week 13 days from now?")
```
Under the hood, Tvara parses the LLM's output, calls the run() method of DateTool, and feeds the result back to the LLM.
What’s next?
Now that we clearly understand what a single agent is and how it works alongside tools and connectors, the next step is unlocking collaboration.
Because let’s be honest, real-world workflows rarely involve just one task or one decision-maker.
In our next post, we’ll explore how multiple agents can work together, share context, and complete complex tasks as a team, just like humans do.
Stay tuned! 👀