Quick AI Model Cost Estimator

Like many of you building with LLMs, I often found myself jumping between multiple documentation pages just to figure out how much a certain query would cost across different models.

And let’s be honest — no one has time to memorize OpenAI’s per-million token costs, compare them with Anthropic, DeepSeek, or Gemini, and then mentally compute costs based on input/output tokens and query types.

So… I built myself a simple cost calculator that does exactly what I need:
📍 Give me an approximate cost of running a specific type of query on a selected model.


💡 Why I Built It

I was repeatedly:

  • Searching for the latest OpenAI pricing

  • Comparing it with Claude or Gemini

  • Trying to remember if 75 words = 100 tokens or the other way around

  • Doing math in my head or a notepad every time I needed to estimate costs

It got old, fast.

So I made a small spreadsheet-based calculator that lets me:

✅ Pick a query type (normal, research, function calling, or custom)
✅ Choose a model from OpenAI, Anthropic, Google, or DeepSeek
✅ Instantly get a final estimated cost for a fixed number of queries (default: 10)


🧮 What It Does

Behind the scenes, the calculator:

  • Uses a standard 75 words = 100 tokens conversion

  • Maps query types to average input/output token counts

  • Pulls in model-specific cost per million tokens

  • Computes the total cost for the number of queries selected

And that’s it. Simple, fast, and surprisingly handy.
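To make that math concrete, here's a minimal Python sketch of the same pipeline. The conversion rule is the 75-words-to-100-tokens rule above; the prices are passed in rather than baked in, since you'd look up the current per-million-token rates yourself:

```python
# Minimal sketch of the spreadsheet's math. Prices are arguments,
# not hard-coded values -- look up the current per-million rates.

def words_to_tokens(words: int) -> int:
    """Apply the rough 75 words ~= 100 tokens rule of thumb."""
    return round(words * 100 / 75)

def query_cost(input_tokens: int, output_tokens: int,
               input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimated USD cost of a single query."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Example: a ~1,500-word prompt with a ~750-word answer,
# at hypothetical $3 / $15 per million input / output tokens
tokens_in = words_to_tokens(1500)    # 2000 tokens
tokens_out = words_to_tokens(750)    # 1000 tokens
total = 10 * query_cost(tokens_in, tokens_out, 3.00, 15.00)  # 10 queries
```

The spreadsheet does exactly this, just with cells instead of functions.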


🔧 What's Inside?

Here’s a peek at what powers the tool:

  • Query Type → sets the average input/output token counts

  • Model Selection → pulls the cost per million tokens

  • Token Math → computes the cost per query type/model

  • Final Cost → combines everything × the number of queries
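In code, those components boil down to two lookup tables and one formula. Here's a hedged sketch where both the token averages and the prices are illustrative placeholders (the spreadsheet holds the real, current numbers):

```python
# Illustrative placeholders only -- actual averages and prices live
# in the spreadsheet and change over time.

QUERY_TYPES = {        # query type -> (avg input tokens, avg output tokens)
    "normal": (500, 300),
    "research": (2000, 1500),
    "function_calling": (800, 200),
}

MODEL_PRICES = {       # model -> (USD per 1M input, USD per 1M output)
    "gpt-4o": (2.50, 10.00),
    "claude-sonnet": (3.00, 15.00),
}

def estimate(query_type: str, model: str, n_queries: int = 10) -> float:
    """Final Cost = per-query cost x number of queries (default 10)."""
    tokens_in, tokens_out = QUERY_TYPES[query_type]
    price_in, price_out = MODEL_PRICES[model]
    per_query = (tokens_in * price_in + tokens_out * price_out) / 1_000_000
    return per_query * n_queries
```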

📌 Why Approximate?

This tool isn’t meant to be 100% accurate to the last decimal — it’s designed for:

  • Quick ballparks during architecture discussions

  • Budget estimates before production usage

  • Cost comparisons across providers/models

In real-world use, actual token counts vary due to system messages, model verbosity, and temperature settings. But this gives you a solid directional cost estimate.


🛠️ What's Next?

I'm thinking of extending it with:

  • Support for embedding models

  • Factoring in prompt-caching discounts on cached input tokens

  • Batch API estimations

  • Adjustable verbosity (to estimate more or fewer output tokens)

  • Overhead token buffers for function calling

Let me know if that would be useful — or if you want a copy to use or contribute to!


🔗 Want to Try It?

https://docs.google.com/spreadsheets/d/18-TPKGPVEiYMH5I0dt9YMRQuL_--mqE-HbdxtW4vkzc/edit?gid=0#gid=0

Any feedback? Just drop a comment or DM me on Twitter/X.


💬 Ever built your own utility out of frustration? Share your tool or workflow below!


Written by

Sirsho Chakraborty

Graduated from KIIT, Bhubaneswar in 2023 with a B.Tech in CS, majoring in AI and Computational Mathematics. For me, Covid was a blessing in disguise: I had plenty of time at home, tinkering and building stuff. Tried IoT, app development, backend, cloud. Did a few Flutter internships in my second year of college, then moved to full stack, focusing mainly on backend. Single-handedly built a WhatsApp-like video calling solution for a CA-based social media company. Teaching was also a passion, so I started an ed-tech platform with a friend, Sridipto. That was our first venture together: Snipe. We raised some capital from a Bangalore-based VC during my third year of college, moved to Bangalore, and scaled Snipe to around a million users. But monetisation was a challenge, and the downturn in ed-tech made it worse. We had to pivot. Gamification was our core, so we switched to a B2B model and found some early success, onboarding a few big names: Burger King, Pedigree, and Saffola, among others. Cut to September 2024: we're a team of 20+, and business is doing well, but we realised scaling is a problem. We can't just remain a gamification service company. So we thought: let's build something big. Let's build the future of computing. The biggest learning: if you have a big problem, break it up into smaller problems. Divide and conquer. It becomes a lot easier.