CheapSeek: How to build your own AI Server

Sujay N
| Part | Cost | Spec |
| --- | --- | --- |
| PC | ~$80 | HP EliteDesk 800 G4 SFF: i5-7500 3GHz, 8 GB RAM |
| Extra RAM | ~$90 | 4×16GB DDR4-2666 |
| Hard Disks | ~$45 | 2×1TB |
| Graphics Card | ~$235 | Nvidia T1000 8GB |
| **Total** | $450 | Incl. shipping and tax |

The release of DeepSeek R1 earlier this year marked an inflection point: LLMs are now good enough, and the race for cheaper, smaller models has begun. This was exciting because there is finally a decent open-source model that I can run completely locally, without handing my money and/or data to OpenAI, Anthropic, or the like.

The above table is all that you need to run the deepseek-r1:8b model. It's not going to write your thesis, but it is perfectly capable of a neat Fibonacci program. I bought all of the components used on eBay, and there are plenty to go around. As you can see, I splurged on the graphics card, but that is the one component with the biggest ROI, as the before-and-after timings below show.
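On the software side, I'm assuming the model is served with Ollama (the `deepseek-r1:8b` tag is how Ollama names it); if you serve it another way, adapt accordingly. Once the model has been pulled, a minimal sketch for sending the local server a prompt from Python looks roughly like this:

```python
# Sketch: prompt a locally served model over Ollama's HTTP API.
# Assumes Ollama is running on its default port (11434) and the model
# has already been pulled, e.g. with `ollama pull deepseek-r1:8b`.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "deepseek-r1:8b") -> str:
    """Send a single prompt and return the full response text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,  # a small box can take a while on CPU
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Write a short Python function that returns the first n Fibonacci numbers."))
```

The same helper works for any model tag you have pulled, which makes the GPU comparison below easy to script.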

Here's me running a much smaller model, deepseek-r1:1.5b, before I installed the graphics card. Spoiler alert: it takes about 1 minute and 15 seconds to generate a joke.

Now here's the same prompt after I installed the GPU. It takes under 10 seconds!
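If you want to reproduce that comparison on your own box, a rough way to time it is to wrap the hypothetical `ask()` helper from the earlier sketch in a wall-clock timer:

```python
# Sketch: time the same prompt on two locally pulled models.
# Assumes the ask() helper from the earlier sketch and that both model
# tags have already been pulled; numbers depend entirely on your hardware.
import time

prompt = "Tell me a joke about computers."

for model in ("deepseek-r1:1.5b", "deepseek-r1:8b"):
    start = time.perf_counter()
    ask(prompt, model=model)
    print(f"{model}: {time.perf_counter() - start:.1f}s")
```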

Of course, the joke it generates is nonsensical. The model is only as smart as before, just faster. However, the 8b model, which was completely unusable without the GPU, now performs perfectly. That one is capable of (only slightly) better jokes, but more importantly, it can understand and generate basic code to interact with other services. More on that in the future…
