How I Built a Multi-Agent AI Analyst Bot Using GPT, LangGraph & Market News APIs


Yes, I know — this title sounds like one of those overly long anime names. “That Time I Built a GPT Bot That Read the Financial News So I Didn’t Have To” 🧙♂️📉
1. Intro: The Pain
I used to wake up and check four different sites, scroll Twitter, and still feel behind on what actually happened in the markets.
Meanwhile, stuff like this was happening:
https://tenor.com/view/pawchain-pawswap-paw-crypto-cryptocurrency-gif-10316912554025329858
Markets pumping. Dumping. Headlines flying.
Now, I get one clean email every morning with exactly what I need: key headlines, a few bullet points of summary, and a tone check on the overall market mood — powered by GPT and a financial news API.
Here’s how I built it in under an hour — and how you can too.
2. The System
🛠 **Stack:**
- Finlight.me — Financial news API for clean, market-relevant headlines
- LangGraph — Multi-agent GPT orchestration framework (built on LangChain)
- Python — With a cron job or AWS Lambda for scheduling
- SMTP — To deliver the final email briefing
🧠 **Agents:**
- **Analyst Agent**: pulls news for a given subject → summarizes → classifies tone (bullish, bearish, neutral)
- **Composer Agent**: takes all outputs from the Analyst and assembles the final morning email
🔄 **Flow:**
⏰ Script runs daily at 6:45am
For each subject (e.g., Inflation, AI Stocks) the Analyst Agent:
- Queries relevant news via API
- Summarizes key points
- Assesses sentiment (bullish/bearish/neutral)
After all subjects are processed:
- The Composer Agent creates the full morning briefing
📬 Email lands in your inbox before 7:00am
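Before any LangGraph specifics, the same control flow can be sketched in a few lines of plain Python. The stub functions below stand in for the GPT-backed agents; they only illustrate the loop-then-compose shape of the pipeline.

```python
# Stub version of the pipeline: loop the analyst over each subject,
# then hand everything to the composer. The real agents call GPT.
def analyst(subject: str) -> str:
    return f"{subject}: summary + sentiment"  # placeholder output

def composer(outputs: dict) -> str:
    return "\n".join(f"• {text}" for text in outputs.values())

def run(subjects: list) -> str:
    outputs = {s: analyst(s) for s in subjects}
    return composer(outputs)

print(run(["Inflation", "AI Stocks"]))
```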
3. What the resulting email looks like
🗓 April 13, 2025 – Market Briefing
• Trump Tariffs Escalate 🇺🇸🇨🇳
- The U.S. imposed a 145% tariff on Chinese imports, while China retaliated with a 125% tariff, urging a complete removal of these measures.
- U.S. Commerce Secretary hinted at upcoming tariffs on semiconductors to promote domestic production.
- Despite concerns of inflation, layoffs, and supply chain disruptions, exemptions on smartphones and laptops provided a temporary boost for tech giants like Apple and Nvidia.
- Market sentiment remains highly volatile with fears of a U.S. recession and worsening global trade dynamics.
• China Tightens Its Stance 🇨🇳
- China halted critical rare earth exports, exacerbating the trade war and affecting key industries like semiconductors and aerospace.
- Beijing dismissed U.S. tariff exemptions as insufficient and continued to push for cancellation of all reciprocal tariffs.
- On the global stage, India’s and Brazil’s industries are reaping benefits as alternatives in electronics and agriculture.
- Allegations of Chinese cyberattacks targeting U.S. infrastructure have further strained relations.
- Domestically, Hong Kong’s last major opposition party moved towards disbandment, while Britain rejected Chinese involvement in its steel sector, fuelling political tensions.
📊 Sentiment: Uncertain and risk-averse, with increasing global market volatility. Investors are eyeing alternative regions like India for stability.
This is what lands in my inbox every day. 30 seconds to understand the market mood.
4. Inside the System: The Multi-Agent Morning Briefing
Now that you’ve seen the output, let’s unpack how it works under the hood.
At a high level, this is a multi-agent workflow that runs on a schedule. Each agent handles a specific task — fetching news, summarizing, analyzing tone, composing a message — and they all pass info via shared state.
Everything is built in Python using LangGraph, a framework for agent orchestration.
🔌 4.1: Pulling News with the Finlight API
We use a small wrapper around a financial news API — Finlight.me — to get clean, focused headlines for a given subject.
Why Finlight? Because it gives you way less noise than other general news APIs. And also… because I built it. 😎
Here’s the tool I use inside the system — it wraps the Finlight SDK as a LangChain-compatible `StructuredTool`:
```python
# tools/finlight.py
from langchain.tools import StructuredTool
from finlight_client import FinlightApi
from finlight_client.models import BasicArticleResponse, ApiConfig, GetArticlesParams
from pydantic import BaseModel, Field, field_validator
from market_briefing.config import FINLIGHT_API_KEY


class GetBasicArticle(BaseModel):
    query: str = Field(..., description="Search term, e.g., 'Nvidia'")
    from_: str = Field(..., alias="from", description="ISO 8601 start time")
    to: str = Field(..., description="ISO 8601 end time")
    pageSize: int = Field(100, description="Number of articles per page")
    page: int = Field(1, description="Page number")

    model_config = {"populate_by_name": True}


def search_finlight_articles(params: GetBasicArticle) -> str:
    client = FinlightApi(config=ApiConfig(api_key=FINLIGHT_API_KEY))
    api_response: BasicArticleResponse = client.articles.get_basic_articles(
        params=params
    )
    if not api_response:
        return "No articles found."
    return "\n".join(
        f"Title: {a['title']}\nDate: {a['publishDate']}\nSummary: {a['summary'] or 'No summary.'}\n{'-' * 30}"
        for a in api_response["articles"]
    )


search_tool = StructuredTool.from_function(
    func=search_finlight_articles,
    name="search_finlight_articles",
    description="Search Finlight for news by keyword and date range",
)
```
This allows agents in your system to call Finlight dynamically using structured input and output — with proper validation via Pydantic, and easy chaining through LangChain or LangGraph.
🧠 Note: This input will later be fed into GPT for summarization and sentiment.
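To make that output concrete, here is the same join logic run standalone against a couple of hypothetical article dicts; this string is exactly the shape of text the agent hands to GPT.

```python
# Standalone demo of the tool's output format (the articles are made up).
def format_articles(articles: list) -> str:
    return "\n".join(
        f"Title: {a['title']}\nDate: {a['publishDate']}\nSummary: {a['summary'] or 'No summary.'}\n{'-' * 30}"
        for a in articles
    )

articles = [
    {"title": "Nvidia beats earnings", "publishDate": "2025-04-13T08:00:00Z", "summary": "Strong data center demand."},
    {"title": "Chip stocks rally", "publishDate": "2025-04-13T09:30:00Z", "summary": None},
]

print(format_articles(articles))
```

Note how a missing summary falls back to "No summary." so GPT never sees an empty field.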
4.2: The Shared State
All agents share one data structure to read/write context. It’s a simple Python `TypedDict`, but it powers the whole pipeline.
```python
from typing import List, Dict, TypedDict


class BriefingState(TypedDict):
    subjects: List[str]
    current_index: int
    analyst_outputs: Dict[str, str]
    briefing: str
```
- `subjects`: list of user-defined topics (like `["inflation", "oil", "semiconductors"]`)
- `current_index`: used for looping through one subject at a time
- `analyst_outputs`: filled in by the Analyst Agent and consumed by the Composer
- `briefing`: filled in by the Composer Agent with the final result
🧠 Why this matters: it allows us to loop over multiple topics with a *single reusable agent*, and conditionally switch to the next phase when done.
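Here is a quick sketch of how a node's partial return folds into this state. A plain dict update is roughly what LangGraph does for keys without a custom reducer; the analyst output below is invented for illustration.

```python
from typing import Dict, List, TypedDict

class BriefingState(TypedDict):
    subjects: List[str]
    current_index: int
    analyst_outputs: Dict[str, str]
    briefing: str

state: BriefingState = {
    "subjects": ["inflation", "oil"],
    "current_index": 0,
    "analyst_outputs": {},
    "briefing": "",
}

# A node returns only the keys it changed...
node_output = {"analyst_outputs": {"inflation": "CPI cooled; markets relieved."}}

# ...and the framework merges them into the shared state.
state = {**state, **node_output}
```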
4.3: The LangGraph Logic
The diagram shows the core agent flow:
Analyst Agent is the workhorse. It loops through each subject (like “Inflation” or “AI Stocks”), pulling and analyzing news one at a time.
After each subject, it either:
- 🔁 Loops back to process the next subject
- ➡️ Passes control to the Composer Agent once all subjects are processed
The Composer Agent then compiles the final email briefing.
This loop-until-done + handoff pattern is a core use case for LangGraph: small, focused agents collaborating via shared state.
```python
from langgraph.graph import StateGraph, END

from market_briefing.workflow.state import BriefingState
from market_briefing.agents.analyst import analyst_agent_node
from market_briefing.agents.composer import compose_briefing_node


def increment_index_node(state: BriefingState) -> dict:
    return {"current_index": state["current_index"] + 1}


def should_continue(state: BriefingState) -> str:
    return (
        "continue" if state["current_index"] + 1 < len(state["subjects"]) else "format"
    )


def build_graph():
    graph = StateGraph(BriefingState)

    graph.add_node("analyst", analyst_agent_node)
    graph.add_node("increment_index", increment_index_node)
    graph.add_node("composer", compose_briefing_node)

    graph.add_conditional_edges(
        "analyst",
        should_continue,
        {"continue": "increment_index", "format": "composer"},
    )
    graph.add_edge("increment_index", "analyst")

    graph.set_entry_point("analyst")
    graph.add_edge("composer", END)

    return graph.compile()
```
This defines a clean LangGraph flow:
- Start with the Analyst Agent
- If there are more topics, increment the index and repeat
- When all topics are done, switch to the Composer Agent
- Finish at `END`
🧠 Modular and readable. Each agent focuses on one job — no giant monolith functions.
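The routing decision is easy to sanity-check in isolation. With three subjects, it should return "continue" twice and then hand off with "format":

```python
# Same routing logic as in the graph, exercised on its own.
def should_continue(state: dict) -> str:
    return "continue" if state["current_index"] + 1 < len(state["subjects"]) else "format"

subjects = ["Trump tariffs", "Nvidia", "China"]
decisions = [should_continue({"subjects": subjects, "current_index": i}) for i in range(3)]
print(decisions)  # ['continue', 'continue', 'format']
```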
4.4: The Analyst Agent
This is the workhorse. For each subject, it:
Pulls relevant news from Finlight
Passes it to GPT for summarization
Extracts sentiment
Stores the result in the state
Here’s the full code:
```python
from datetime import datetime, timedelta, timezone

from langchain_core.messages import HumanMessage

from market_briefing.llm.executor import agent_executor
from market_briefing.workflow.state import BriefingState

import logging

logger = logging.getLogger(__name__)


def analyst_agent_node(state: BriefingState) -> dict:
    subject = state["subjects"][state["current_index"]]

    now = datetime.now(timezone.utc)
    from_time = now.replace(hour=0, minute=0, second=0, microsecond=0)
    to_time = from_time + timedelta(days=1)  # safe across month boundaries
    iso_from = from_time.isoformat()  # tz-aware, so the offset is included
    iso_to = to_time.isoformat()

    prompt = f"""
Summarize financial/political developments on '{subject}' in the last 24h (from {iso_from} to {iso_to}).
Include what happened, market sentiment, and confidence if available. Keep it short.
"""

    logger.info(f"📤 Prompt to analyst agent:\n{prompt}")
    result = agent_executor.invoke({"messages": [HumanMessage(content=prompt)]})

    outputs = state.get("analyst_outputs", {})
    outputs[subject] = result["messages"][-1].content

    logger.info(f"📝 Analyst summary for '{subject}':\n{outputs[subject]}")
    return {"analyst_outputs": outputs}
```
🧠 Prompt Design Tip: Keep structure consistent so parsing is easy. You want predictable outputs if you ever want to use this downstream.
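For example, if the prompt pins the reply to end with a "Sentiment: &lt;label&gt;" line, a small downstream parser can pull the label out reliably. This helper is hypothetical, not part of the system above:

```python
import re

def extract_sentiment(reply: str) -> str:
    """Find a bullish/bearish/neutral label in a structured agent reply."""
    match = re.search(r"sentiment:\s*(bullish|bearish|neutral)", reply, re.IGNORECASE)
    return match.group(1).lower() if match else "unknown"

reply = "Chip demand stayed strong through Q1.\nSentiment: Bullish"
print(extract_sentiment(reply))  # bullish
```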
4.5: The Composer Agent
Once the loop is done, the Composer takes over. It assembles the full email — Markdown-style — by summarizing all the analyst outputs into a single readable message.
Here’s the code:
```python
from datetime import datetime, timezone

from langchain_core.messages import HumanMessage

from market_briefing.llm.executor import agent_executor

import logging

logger = logging.getLogger(__name__)


def compose_briefing_node(state: dict) -> dict:
    now = datetime.now(timezone.utc).strftime("%B %d, %Y")

    logger.info("🧩 Composing final morning market briefing...")
    logger.info(f"📦 Current subjects and summaries: {state['analyst_outputs']}")

    joined_summaries = "\n\n".join(
        f"🔹 {subj}\n{summary.strip()}"
        for subj, summary in state["analyst_outputs"].items()
        if summary and isinstance(summary, str)
    )

    formatting_prompt = f"""
You are a professional financial news editor. Format the following topic summaries into a clean, well-presented morning market briefing.
------------
✅ Do:
- Use headlines
- Add emojis and good spacing
- Improve clarity where needed
- Skip sections that have no information available
- Don't repeat yourself

🚫 Do not:
- Guess or fill in missing content
- Say anything unrelated to the actual summaries

🗓️ Date: {now}
------------
Use this template:

🗓 <Date in Month, Day - Year> – Market Briefing

• <Bullet Point 1>
• <Bullet Point 2>
...
• <Bullet Point n>

📊 Sentiment: <General sentiment>
------------
Here are the summaries:

{joined_summaries}
"""

    logger.info(f"📤 Prompt to composer agent:\n{formatting_prompt}")
    result = agent_executor.invoke(
        {"messages": [HumanMessage(content=formatting_prompt)]}
    )

    final_message = result["messages"][-1].content
    logger.info(f"📝 Final formatted briefing:\n{final_message}")
    return {"briefing": final_message}
```
💡 This is where tone and readability matter. The analyst agent gives raw data — the composer agent gives it polish.
4.6: Running the Whole Thing
You trigger the whole workflow from a handler script. This is where subjects are defined and the LangGraph pipeline is run:
```python
from market_briefing.config import SUBJECTS
from market_briefing.utils.html_formatter import markdown_to_html
from market_briefing.sender.email_sender import send_daily_email
from market_briefing.workflow.graph import build_graph

import logging

logging.basicConfig(level=logging.INFO)


def main(event=None, context=None):
    state = {
        "subjects": SUBJECTS,
        "current_index": 0,
        "analyst_outputs": {},
    }

    graph = build_graph()
    result = graph.invoke(state)

    print("✅ Morning Briefing Generated:\n", result["briefing"])

    briefing = result["briefing"]
    html = markdown_to_html(briefing)
    send_daily_email(html)

    return {"statusCode": 200, "briefing": result["briefing"]}
```
Simple, clean, and callable from cron, Lambda, or a notebook.
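One piece the handler imports but the article doesn't show is `markdown_to_html`. A minimal stdlib sketch covering only what the briefing needs (bold text and bullet lines) might look like this; a real deployment could just use the `markdown` package instead.

```python
import html
import re

def markdown_to_html(md: str) -> str:
    """Tiny Markdown-to-HTML converter: bold text and '- ' bullets only.
    This is a sketch, not the project's actual html_formatter."""
    out = []
    for line in md.splitlines():
        line = html.escape(line)
        line = re.sub(r"\*\*(.+?)\*\*", r"<b>\1</b>", line)
        if line.startswith("- "):
            out.append(f"<li>{line[2:]}</li>")
        elif line.strip():
            out.append(f"<p>{line}</p>")
    return "\n".join(out)

print(markdown_to_html("**Markets**\n- tech up\n- oil down"))
```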
4.7: Sending the Email
Once the Composer Agent builds the final message, we need to send it as an email. Here’s the full implementation:
```python
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

from market_briefing.config import EMAIL_PASS, EMAIL_SUBJECT, EMAIL_TO, EMAIL_USER


def send_daily_email(html_content: str):
    # Create MIME message
    msg = MIMEMultipart("alternative")
    msg["From"] = EMAIL_USER
    msg["To"] = EMAIL_TO
    msg["Subject"] = EMAIL_SUBJECT

    # Attach HTML content
    msg.attach(MIMEText(html_content, "html"))

    # Send email
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(EMAIL_USER, EMAIL_PASS)
        server.send_message(msg)
```
This uses standard `smtplib` + `MIMEText` to format and send HTML emails via Gmail.
📫 Pro Tip: You can easily swap in any other provider — Mailgun, SendGrid, Outlook — or even push to Slack or Telegram.
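As a sketch of the Slack route: Slack's incoming webhooks accept a JSON POST with a `text` field, so the swap is small. The function names and error handling here are illustrative, not tested against a live webhook.

```python
import json
from urllib import request

def build_slack_payload(briefing: str) -> bytes:
    """JSON body for a Slack incoming-webhook POST."""
    return json.dumps({"text": briefing}).encode("utf-8")

def send_to_slack(webhook_url: str, briefing: str) -> None:
    req = request.Request(
        webhook_url,
        data=build_slack_payload(briefing),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # no retry/error handling in this sketch
```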
4.8: Going Serverless with AWS Lambda
Want your system to run every morning, even while you sleep? Here’s how I deployed it with the Serverless Framework:
```yaml
service: market-briefing

plugins:
  - serverless-python-requirements

provider:
  name: aws
  runtime: python3.11
  region: us-east-1
  timeout: 60
  memorySize: 512
  environment:
    PYTHONPATH: /var/task/
    OPEN_AI_API_KEY: ${env:OPEN_AI_API_KEY}
    FINLIGHT_API_KEY: ${env:FINLIGHT_API_KEY}
    EMAIL_USER: ${env:EMAIL_USER}
    EMAIL_PASS: ${env:EMAIL_PASS}
    EMAIL_TO: ${env:EMAIL_TO}
    EMAIL_SUBJECT: ${env:EMAIL_SUBJECT}
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
          Resource: "*"

functions:
  briefer:
    handler: market_briefing/handler.main
    events:
      - schedule:
          rate: cron(0 7 * * ? *) # every day at 07:00 UTC
          enabled: true

package:
  patterns:
    - '!**'
    - market_briefing/**

custom:
  pythonRequirements:
    dockerizePip: true
    slim: false
    strip: false
    fileName: requirements.txt
```
This will:
- Deploy your Python app as a Lambda function
- Schedule it to run daily (cron-style)
- Keep your environment variables safe with `.env` or AWS Secrets Manager
☁️ Tip: If you’re using Ollama or local models, you can skip this and run it via a cron job on your own machine.
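For that local route, the matching crontab entry could look like the line below. The path is hypothetical, and it assumes the handler module calls `main()` when run directly:

```shell
# crontab -e: run the briefing every day at 06:45 local time
45 6 * * * cd /home/you/market-briefing && /usr/bin/python3 -m market_briefing.handler >> briefing.log 2>&1
```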
Bonus: The Config File
```python
from dotenv import load_dotenv
import os

load_dotenv()

OPEN_AI_API_KEY = os.environ["OPEN_AI_API_KEY"]
FINLIGHT_API_KEY = os.environ["FINLIGHT_API_KEY"]

EMAIL_USER = os.getenv("EMAIL_USER")
EMAIL_PASS = os.getenv("EMAIL_PASS")
EMAIL_TO = os.getenv("EMAIL_TO")
EMAIL_SUBJECT = os.getenv("EMAIL_SUBJECT", "Daily Email")

SUBJECTS = ["Trump tariffs", "Nvidia", "China"]
```
You can easily:
Switch out topics
Plug in new LLM providers
Redirect output to different email addresses
Recap: How It All Comes Together
Let’s tie it all together:
You define what you care about (in `SUBJECTS`)
Each subject is processed by the Analyst Agent, which pulls news, summarizes it, and scores sentiment
Results are stored in a shared state
When all subjects are processed, the Composer Agent assembles a clean, skimmable morning email
The Email Sender delivers it to your inbox
All of this runs daily via cron or Lambda
It’s modular, clean, and surprisingly easy to extend.
🚀 What’s Next?
Right now, this works great with a static list of subjects. But there’s so much potential to go further.
Here’s what I’m exploring next:
Smart Topic Detection
Instead of passing predefined subjects, let an LLM scan the entire news feed, detect key themes, and generate briefings dynamically.
Portfolio-Aware Briefings
Pull in your portfolio holdings or watchlist, and prioritize news that impacts your assets.
Local Execution
Swap OpenAI for Mistral, Mixtral, or Gemma running locally via Ollama, for:
Full data privacy
No API costs
Offline support
More Data Sources
Add financial calendars, earnings transcripts, Twitter trends, Reddit sentiment, or macro indicators.
💻 Want the full code?
I’m open-sourcing the entire repo. Just drop a comment if you want early access to the GitHub link. You can run it locally or in the cloud — whatever fits your setup.
Written by Ali
Backend developer & founder of Finlight. I like building useful things — from news APIs to AI tools. Writing here to share more of what I build.