Agentic AI - A practical guide for Node.js developers

Table of contents
- What is agentic AI?
- Agents vs LLMs - what's the real difference?
- Agentic workflow - Step-by-step in human words
- What are “agents” and “tools”?
- How agents choose and use tools (simple flow)
- Build your own tools using Node.js
- Minimal agent orchestrator (Node.js) - a reliable pattern
- Starter example - weather + calculator
- Market landscape - what teams are building today (short survey)
- Safety, governance, and practical cautions (don’t skip these)
- Final takeaways - What to do next (for you as a Node.js developer)

Short version: Agentic AI refers to systems that plan, decide, and act (not just talk). They combine an LLM “brain” with memory, tools, and a workflow engine so the system can complete multi-step tasks with little human help.
This article explains the idea in plain language, describes agents and tools, shows how to build simple tools in Node.js, gives a minimal agent example you can run today, surveys the current market landscape, and ends with clear takeaways you can use right away.
What is agentic AI?
Agentic AI means AI systems that have agency: they take actions to achieve a goal instead of only giving text responses. An agentic system will:
- Plan what to do,
- Call tools or APIs to get data or perform actions,
- Check results, and
- Iterate until the task is done or a human steps in.
In practice, an agentic system mixes a powerful language model (the reasoning component) with programmatic tools and orchestration so it can act in the world: fetching data, updating a database, sending messages, or calling other services.
Agents vs LLMs - what's the real difference?
- LLM (Large Language Model): a powerful text generator / reasoner. It understands and writes text (answers, summaries, code).
- Agent: an LLM plus infrastructure: memory, planners, tool interfaces, guards, and workflows.
Think of the LLM as the brain and the agent as the brain plus hands and tools. The LLM supplies reasoning; the agent supplies action.
Agentic workflow - Step-by-step in human words
Agentic workflow is the loop that an agent uses to break a goal into steps and execute them. A typical loop:
1. Receive the goal (user request).
2. Plan - create sub-tasks (research, fetch data, compute).
3. Decide which tool(s) are needed.
4. Call the tool(s) (APIs, DB, search, send email).
5. Evaluate results and update the plan.
6. Repeat, finalize, or ask a human.
Example:
“Find the cheapest train from Mumbai to Pune tomorrow and book it.”
The agent searches schedules → compares prices → asks for confirmation → calls booking API → sends the ticket. This loop of planning, acting, and checking is the essence of agentic behaviour.
What are “agents” and “tools”?
Agent
A software process that holds a goal and coordinates actions.
It contains: a reasoning component (LLM), a memory store (short- or long-term), a planner (simple or complex), tools (functions/APIs), and safety guardrails.
Tools
Tools are the agent’s abilities - code functions or APIs that do things. Examples:
- `getWeather(city)`
- `searchDocs(query)`
- `runShell(cmd)`
- `createCalendarEvent(details)`
- `sendEmail(to, subject, body)`

Tools are not the model. They are honest code you write and trust to do specific jobs.
Why separate them? The LLM is great at planning and natural language; it shouldn’t be the thing that actually executes a bank transfer or runs database migrations. Tools are controlled, auditable code that the agent calls. (Hugging Face’s agents material is a good resource for understanding tools and agents.)
How agents choose and use tools (simple flow)
1. The agent reasons with the LLM: “To reach this goal, I need tool X.”
2. The agent formats a tool call (structured JSON or a function call).
3. The orchestrator executes the tool and returns the result to the LLM.
4. The LLM updates the plan given the tool output.
Many frameworks formalize that sequence so you don’t have to build it from scratch.
Below are two simple, copy-pasteable Node.js snippets that demonstrate the exact flow:
(A) a quick local mock for learning, and
(B) a real example using the OpenAI chat completions endpoint.
(A) Quick mock/demo (no API key needed)
This version simulates the LLM so you can run everything locally and see the step-by-step flow.
tools.js
```js
// tools.js
export const tools = {
  getWeather: async ({ city }) => ({ city, tempC: 30, condition: "Sunny" }),
  add: ({ a, b }) => ({ result: Number(a) + Number(b) })
};
```
mockLLM.js
```js
// mockLLM.js - simulates LLM decisions based on how many tool results exist
export async function mockLLM(messages) {
  const toolResults = messages.filter(m => m.content?.startsWith("TOOL_RESULT")).length;
  // No tool results yet: request the weather tool first
  if (toolResults === 0) {
    return JSON.stringify({ tool: "getWeather", args: { city: "Bangalore" } });
  }
  // Weather fetched: request the add tool next
  if (toolResults === 1) {
    return JSON.stringify({ tool: "add", args: { a: 12, b: 30 } });
  }
  // Both tools have run: return the final answer
  return JSON.stringify({ final: "Weather fetched and sum computed. Task complete." });
}
```
orchestrator.js
```js
// orchestrator.js
import { tools } from "./tools.js";
import { mockLLM } from "./mockLLM.js";

export async function runAgent(userPrompt) {
  const system = {
    role: "system",
    content: `When calling tools, return JSON: {"tool":"name","args":{}}. For the final output, return: {"final":"text"}`
  };
  let messages = [system, { role: "user", content: userPrompt }];
  for (let i = 0; i < 6; i++) {
    const llmReply = await mockLLM(messages); // 1) Agent reasons with LLM
    let parsed;
    try { parsed = JSON.parse(llmReply); } catch { parsed = null; }
    if (parsed?.tool) { // 2) Tool call formatted as JSON
      const fn = tools[parsed.tool];
      const result = fn ? await fn(parsed.args || {}) : { error: "unknown tool" };
      messages.push({
        role: "assistant",
        content: `TOOL_RESULT ${JSON.stringify(result)}`
      }); // 3) Orchestrator returns tool result
      continue; // allow LLM to update its plan
    }
    if (parsed?.final) return parsed.final; // 4) LLM returns final answer
    return llmReply; // fallback: treat plain text as final
  }
  return "iteration limit reached";
}

// Usage
(async () => {
  console.log(await runAgent("Get weather and compute 12+30"));
})();
```
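With the mock in place, `node orchestrator.js` walks the loop three times - one `TOOL_RESULT` push per tool, then the final reply - and prints: Weather fetched and sum computed. Task complete.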
(B) Real example using OpenAI Chat Completions (API key required)
This uses the same JSON-as-response contract (no need to use the function-calling feature). Set the OPENAI_API_KEY environment variable to your key and set the model to one you have access to.
orchestrator-openai.js
```js
// orchestrator-openai.js
import fetch from "node-fetch"; // Node 18+ also ships a global fetch
import { tools } from "./tools.js";

const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
const OPENAI_URL = "https://api.openai.com/v1/chat/completions";

async function callLLM(messages) {
  const res = await fetch(OPENAI_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`
    },
    body: JSON.stringify({ model: "gpt-4o-mini", messages, max_tokens: 800 })
  });
  const data = await res.json();
  return data.choices?.[0]?.message?.content || "";
}

export async function runAgent(userPrompt) {
  const system = {
    role: "system",
    content: `You're an agent. To call a tool, reply ONLY with JSON: {"tool":"TOOL_NAME","args":{...}}. When done reply: {"final":"..."}`
  };
  let messages = [system, { role: "user", content: userPrompt }];
  for (let i = 0; i < 8; i++) {
    const reply = await callLLM(messages); // 1) Agent reasons with LLM
    let parsed;
    try { parsed = JSON.parse(reply); } catch { parsed = null; }
    if (parsed?.tool) { // 2) Tool call formatted as JSON
      const toolFn = tools[parsed.tool];
      if (!toolFn) {
        messages.push({ role: "assistant", content: `ERROR: unknown tool ${parsed.tool}` });
        break;
      }
      const toolResult = await toolFn(parsed.args || {}); // 3) Orchestrator executes the tool
      messages.push({ role: "assistant", content: `TOOL_RESULT ${JSON.stringify(toolResult)}` });
      continue; // LLM sees the tool output and updates its plan
    }
    if (parsed?.final) return parsed.final; // 4) LLM returns the final result
    return reply; // fallback: plain text
  }
  return "Stopped: reached loop limit";
}

// Usage: node -r dotenv/config orchestrator-openai.js
(async () => {
  const out = await runAgent("Find weather for Bangalore and then compute 12+30 for me.");
  console.log("Agent output:", out);
})();
```
Quick notes & tips
- Use structured JSON as the contract between the LLM and the orchestrator.
- Validate `parsed.tool` and `parsed.args` before executing any tool.
- Return tool results with a deterministic prefix (e.g., `TOOL_RESULT {...}`) so the model can parse them reliably.
- Add loop limits, logging, and error handling to avoid runaway agents.
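For example, that validation step might look like this (a sketch: the `ALLOWED_TOOLS` allowlist and `validateToolCall` helper are illustrative additions, not part of the snippets above):

```js
// validate.js - guard a parsed tool call before executing it
const ALLOWED_TOOLS = new Set(["getWeather", "add"]);

export function validateToolCall(parsed) {
  if (!parsed || typeof parsed.tool !== "string") {
    return { ok: false, error: "missing or non-string tool name" };
  }
  if (!ALLOWED_TOOLS.has(parsed.tool)) {
    return { ok: false, error: `tool not allowed: ${parsed.tool}` };
  }
  if (parsed.args !== undefined &&
      (typeof parsed.args !== "object" || parsed.args === null || Array.isArray(parsed.args))) {
    return { ok: false, error: "args must be a plain object" };
  }
  return { ok: true };
}
```

In the orchestrator loop you would call `validateToolCall(parsed)` before looking up `tools[parsed.tool]`, and push the error back to the model as a `TOOL_RESULT` if it fails.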
Build your own tools using Node.js
1. Design a tool API contract
Name, input schema, and return schema. Keep outputs deterministic and structured (JSON).
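For example, a contract for the `getWeather` tool could be written down like this (a sketch: the object shape is just one convention, nothing else in this article depends on it):

```js
// getWeatherContract.js - a written-down contract for one tool
export const getWeatherContract = {
  name: "getWeather",
  description: "Fetch current weather for a city",
  input: { city: "string (required)" },                            // input schema
  output: { city: "string", tempC: "number", condition: "string" } // return schema
};
```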
2. Write the tool
tools.js
```js
// tools.js
export const tools = {
  getWeather: async ({ city }) => {
    // Simple mock - replace with a real API call, e.g.
    // fetch(`https://api.weather.example?city=${encodeURIComponent(city)}`)
    return { city, tempC: 32, condition: "Sunny" };
  },
  add: ({ a, b }) => {
    return { result: Number(a) + Number(b) };
  }
};
```
3. Keep tools isolated & auditable
- Log tool calls, validate inputs, and add rate-limits or auth for sensitive tools.
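A minimal sketch of that idea is to wrap each tool function (the `withLogging` helper below is an illustrative addition, not part of `tools.js` above):

```js
// withLogging.js - wrap a tool so every call and result is recorded
export function withLogging(name, fn) {
  return async (args) => {
    console.log(`[tool:${name}] called with`, JSON.stringify(args));
    try {
      const result = await fn(args);
      console.log(`[tool:${name}] returned`, JSON.stringify(result));
      return result;
    } catch (err) {
      console.error(`[tool:${name}] failed:`, err.message);
      throw err;
    }
  };
}

// Usage: tools.getWeather = withLogging("getWeather", tools.getWeather);
```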
4. Test tools independently
Before connecting the LLM, make sure each tool works and returns predictable JSON.
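For instance, a quick smoke test with Node's built-in `node:assert` (a sketch; it assumes `"type": "module"` in your package.json so top-level `await` works):

```js
// test-tools.js - run with: node test-tools.js
import assert from "node:assert";
import { tools } from "./tools.js";

const weather = await tools.getWeather({ city: "Bangalore" });
assert.strictEqual(weather.city, "Bangalore");
assert.strictEqual(typeof weather.tempC, "number");

const sum = tools.add({ a: 12, b: 30 });
assert.strictEqual(sum.result, 42);

console.log("All tool tests passed");
```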
Minimal agent orchestrator (Node.js) - a reliable pattern
This pattern works without a special function-calling API. You instruct the LLM to return structured JSON when it needs a tool. Your orchestrator executes the tool and feeds the result back to the LLM.
orchestrator.js
```js
// orchestrator.js (requires node-fetch or global fetch)
import fetch from "node-fetch";
import { tools } from "./tools.js";

const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
const OPENAI_URL = "https://api.openai.com/v1/chat/completions";

async function callLLM(messages) {
  const res = await fetch(OPENAI_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // replace per your access
      messages
    })
  });
  const data = await res.json();
  return data.choices?.[0]?.message?.content || "";
}

export async function runAgent(userPrompt) {
  const systemMessage = {
    role: "system",
    content: `You are an agent. When you want to call a tool, reply ONLY with JSON like:\n{"tool":"TOOL_NAME","args":{...}}\nWhen you are done, reply with {"final": "text answer"}`
  };
  let messages = [systemMessage, { role: "user", content: userPrompt }];
  for (let i = 0; i < 6; i++) { // loop with a safety bound
    const reply = await callLLM(messages);
    // try to parse JSON
    let parsed;
    try { parsed = JSON.parse(reply); } catch (e) { parsed = null; }
    if (parsed?.tool) {
      const toolFn = tools[parsed.tool];
      if (!toolFn) {
        messages.push({ role: "assistant", content: `ERROR: unknown tool ${parsed.tool}` });
        break;
      }
      const toolResult = await toolFn(parsed.args || {});
      messages.push({ role: "assistant", content: `TOOL_RESULT ${JSON.stringify(toolResult)}` });
      // continue the loop so the LLM can see the tool result and plan the next step
      continue;
    }
    if (parsed?.final) {
      return parsed.final;
    }
    // fallback: if not JSON, treat as final text
    return reply;
  }
  return "Agent stopped: iteration limit reached.";
}
```
This structure works with any LLM that can follow instructions; you don’t need a proprietary function-calling feature to get started.
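One practical hardening step (a sketch, not wired into the orchestrator above): models sometimes wrap their JSON in prose or code fences, so a tolerant extractor can stand in for the bare `JSON.parse` call:

```js
// extractJson.js - tolerant JSON extraction from an LLM reply
export function extractJson(text) {
  // Try the whole reply first
  try { return JSON.parse(text); } catch {}
  // Fall back to the span between the first "{" and the last "}"
  const match = text.match(/\{[\s\S]*\}/);
  if (match) {
    try { return JSON.parse(match[0]); } catch {}
  }
  return null; // caller treats null as plain text
}
```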
Starter example - weather + calculator
Add the `tools.js` (weather + add) and `orchestrator.js` files above, then run this script:
```js
import { runAgent } from "./orchestrator.js";

(async () => {
  const res = await runAgent("Find current weather in Bangalore and add 12 + 30.");
  console.log("Agent result:", res);
})();
```
The LLM should request `getWeather` and then `add`; the orchestrator will run those tools and feed the results back until the LLM provides an answer.
Market landscape - what teams are building today (short survey)
The industry is moving from simple model calls to agentic systems that integrate with workflows and tools. Popular frameworks and SDKs that many teams use today include:
- LangChain - building blocks for agents, memory, and tool orchestration.
- Microsoft AutoGen / Semantic Kernel - multi-agent coordination and production-grade orchestration for complex workflows.
- Vercel AI SDK - easy agent features for web apps and serverless environments.
When to use a framework: once your prototype needs memory, multi-step orchestration, or production safety, these frameworks save time and provide patterns that are battle-tested.
Safety, governance, and practical cautions (don’t skip these)
- Human-in-the-loop: always add a final confirmation step for sensitive actions (payments, deleting data, transfers).
- Input validation: validate tool inputs to prevent code injection or bad requests.
- Logging & auditing: keep a log of tool calls for debugging and compliance.
- Rate limiting & auth: protect expensive or privileged tools with auth and quotas.
These practices are the difference between fun experiments and safe production systems.
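As a minimal sketch of the human-in-the-loop point, a CLI confirmation gate could sit in front of sensitive tools (the `confirmAction` helper is an illustrative addition):

```js
// confirm.js - ask a human before a sensitive tool runs
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

export async function confirmAction(description) {
  const rl = readline.createInterface({ input, output });
  const answer = await rl.question(`Agent wants to: ${description}. Allow? (y/N) `);
  rl.close();
  return answer.trim().toLowerCase() === "y";
}

// Usage in the orchestrator, before executing a sensitive tool:
// if (!(await confirmAction(`${parsed.tool} ${JSON.stringify(parsed.args)}`))) { /* refuse */ }
```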
Final takeaways - What to do next (for you as a Node.js developer)
- Start small: build 2–3 tools (search, compute, send-email). Test them independently.
- Use the orchestrator pattern above to prototype agent behavior without special SDKs.
- Add memory: store conversation context or task progress in a simple DB (Redis / SQLite) so the agent can remember state (see the sketch after this list).
- When you need scale or features (multi-agent, robust memory), evaluate frameworks: LangChain, Microsoft AutoGen, Vercel AI SDK.
- Safety first: log, validate, and keep humans in the loop for any sensitive operations.
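As a sketch of the memory point above, here is a naive file-backed store you could start with (swap it for Redis or SQLite in production; the file layout is illustrative):

```js
// memory.js - naive file-backed memory store
import { readFileSync, writeFileSync, existsSync } from "node:fs";

const FILE = "./agent-memory.json";

function readAll() {
  return existsSync(FILE) ? JSON.parse(readFileSync(FILE, "utf8")) : {};
}

// Load the saved message history for a session (empty array if none)
export function loadMemory(sessionId) {
  return readAll()[sessionId] ?? [];
}

// Persist the message history so a later run can resume the task
export function saveMemory(sessionId, messages) {
  const all = readAll();
  all[sessionId] = messages;
  writeFileSync(FILE, JSON.stringify(all, null, 2));
}
```

In `runAgent` you would seed `messages` from `loadMemory(sessionId)` and call `saveMemory(sessionId, messages)` before returning.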