Horizon Sizing: The Spreadsheet I Made So I’d Stop Yelling at My Monitor


Disclaimer
I tried to keep everything on one sheet: simple, fast, no tab-hopping madness. But keep in mind: this is architecture by approximation. It makes assumptions, and it might be wrong.
Use it to get a feel for things, not to sign contracts. If you want vendor-certified precision, look elsewhere. This tool is for sketching out ideas and keeping your sanity intact, not for exact science. If something turns out to be off, well, that’s life. It’s not precision engineering; it’s back-of-the-napkin architecture held together with a bit of Excel duct tape. Use it for ballparks, not boardrooms.
TL;DR
This is my one-sheet sanity saver for Horizon sizing. Not perfect. Not official. But useful.
👇 Excel download below the post.
Quick Horizon Calculation: Estimating VDI Needs Without Losing Your Mind
The recent license changes made us reconsider how to set up the cluster and figure out how much storage would be needed for the workloads we wanted to support. While trying to right-size the environment based on storage policies, GPU framebuffer requirements, and user workload assumptions, we kept running into the same problem:
Trying out multiple configuration options quickly was painful.
Switching between RAID levels, changing FTT values, estimating host usage and overcommit ratios—it all involved a lot of mental math, guesswork, or digging through vendor sizing tools that are either too rigid or try to be overly precise without reflecting real-world trade-offs.
So I did what any desperate architect with a developer background would do: I built an Excel sheet.
It’s rough. It’s imperfect. But it works.
This spreadsheet is not a 100% accurate sizing tool, nor is it vendor-approved. It’s just a quick way to explore different Horizon configurations without having to pull out a calculator or reverse-engineer someone else’s reference design. It gives you a ballpark, and for early-stage planning or technical discussions, that’s usually more than enough.
Ever tried explaining overcommit ratios, AppVolume delta growth, and GPU framebuffer allocations in one meeting without getting blank stares? Me too. That’s why this thing exists.
Sheet1: Inputs and Results
This is the interactive part. You provide:
Number of users
Storage configuration (RAID level / FTT)
Extra % to reserve in vSAN
Host specs (CPUs, cores, RAM, GPUs, power supplies)
VM profiles
It calculates:
Total framebuffer required
vCPU needs based on overbooking ratio
Estimated number of VMs per host
Minimum number of hosts needed
It also offers a general overview of how many hosts you'd need based on different storage configurations, taking into account the usable space percentages per policy.
Sheet2: Lookup Table
This contains supporting data:
Storage policies and their net usable capacity (e.g., RAID-1 FTT1 = 50%)
Common GPU types and their framebuffer sizes
This sheet feeds into the dropdowns and calculations in Sheet1.
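To make that concrete, the lookup data boils down to two maps: storage policy → net usable fraction, and GPU → framebuffer per card. Here is a minimal Python sketch; the values are illustrative figures taken from this post (the RAID-5 fraction is an assumption), not vendor-certified numbers.

```python
# Illustrative Sheet2-style lookup data (example values, not vendor-certified).
STORAGE_POLICIES = {           # policy -> net usable fraction
    "RAID-1 FTT1": 0.50,       # mirror: 50% usable
    "RAID-5 FTT1": 0.75,       # assumed ~75% usable for ESA RAID-5
}
GPU_FRAMEBUFFER_GB = {"L4": 24, "A40": 48, "A16": 64}  # GB per card

def raw_capacity_needed(net_required_gb: float, policy: str) -> float:
    """Raw physical capacity needed to end up with `net_required_gb` usable."""
    return net_required_gb / STORAGE_POLICIES[policy]

# 10 TB net on RAID-1 FTT1 means buying 20 TB raw
print(raw_capacity_needed(10_000, "RAID-1 FTT1"))  # 20000.0
```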
Why Bother?
There are plenty of tools out there from VMware and vendors, but most of them are either too bloated or don’t allow quick experimentation. This Excel lets me (and now you) answer:
"What if we went with FTT=2 instead of FTT=1?"
"How many hosts would we need if we used A40s instead of L4s?"
"What’s the vCPU impact if we reduce our overcommit ratio?"
It’s also helpful when you're in a workshop or design session and need to react fast to someone’s "what-if" scenario without stalling the conversation.
How Accurate Is It?
Not very. And that’s by design.
This isn’t meant to replace official sizing tools or deep-dive calculations. It makes assumptions. It cuts corners. It doesn’t know everything about your actual workloads, but it does allow you to define VM profiles to shape assumptions around framebuffer, RAM, and CPU needs.
But it’s good enough for ballpark sizing and early design discussions. Treat it like a technical napkin sketch, not a blueprint.

**So how to use it:**
Fill in the following information based on your preferences (look at vSAN ReadyNodes on the HCL for inspiration):
Number of Users
The total number of end-users you’re sizing the environment for. This value drives the VM count and influences storage, RAM, CPU, and GPU calculations.
Storage Configuration
“Pick your RAID level… no pressure, it just defines your fault domain size, usable storage, failure tolerance, and how quickly your cluster implodes.”
Choose from pre-defined storage configurations like ESA RAID-1, RAID-5, etc. Each option has its own net usable capacity percentage (e.g., 50% for RAID-1 FTT1). This affects how much actual capacity you’ll get from your physical storage.
*Note: vSAN overhead is not taken into account; please use the official vSAN calculator for this insight.
Extra Space %
A buffer. For example, entering 20% adds 20% to the total vSAN capacity needed, increasing the usable storage for VM deployment. Useful when accounting for snapshots, slack space, or future growth.
*Note: this can also be used to even out the general vSAN overhead a bit.
N+x Redundancy
Define the redundancy model. “1” means N+1, meaning you always have one spare host’s worth of capacity. It’s a simple way to bake in host-level failure tolerance.
CPUs per Host
The number of physical CPU sockets in each ESXi host. This, combined with cores per CPU, determines total available physical cores.
Cores per CPU
Number of cores in each physical CPU. Total cores per host = CPU sockets × cores per CPU.
vSAN + Hypervisor CPU Overhead %
Defines the percentage of total CPU capacity reserved for system overhead (vSAN processes, hypervisor operations, etc.). This ensures you don’t overestimate what's available for running VMs.
RAM per Host
Total physical memory (in GB) installed per host.
Overbooking Factor 1:x
Defines how aggressively you overcommit vCPUs. For example, a factor of 1:4 means you assign 4 vCPUs for every physical core.
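The overbooking math itself is just multiplication. A quick sketch (the overhead parameter is optional; plug in whatever reserve your environment actually needs):

```python
def vcpu_capacity(sockets: int, cores_per_cpu: int, overbook: float,
                  overhead_pct: float = 0.0) -> float:
    """Logical vCPU capacity of one host after reserving system overhead."""
    pcores = sockets * cores_per_cpu
    usable = pcores * (1 - overhead_pct / 100)
    return usable * overbook

# 2 sockets x 64 cores with a 1:4 overbooking factor
print(vcpu_capacity(2, 64, 4))      # 512.0 vCPUs
print(vcpu_capacity(2, 64, 4, 10))  # 460.8 with 10% vSAN/hypervisor overhead
```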
Total Volume AppVolumes (GB)
Total AppVolumes storage needed for the environment, used to factor in how much of the storage will be consumed by writable or package volumes.
GB per Core License Amount
Relevant if your licensing (like Horizon Universal) ties storage to CPU cores. This lets you estimate the amount of storage that will be available in different scenarios.
GPU Type
Select a supported GPU model. This auto-fills the framebuffer value per VM based on known specs (e.g., 24 GB for an L4, 48 GB for an A40, etc.).
# of Video Cards per Host
Number of GPUs installed in each host. This helps calculate total available framebuffer per host and GPU saturation.
# of Power Supplies per Host
How many power supply units each host contains. Important for power budgeting and availability planning.
Watts per Power Supply
Power capacity of each PSU, used for estimating total power draw and redundancy per host.
Concurrency Factor %
This value defines how much of the available power the connected equipment is assumed to draw at peak usage. For example, an 80% concurrency factor means you’re planning on the assumption that all connected equipment will draw at most 80% of its rated power at the same time. This is used in the power-per-rack calculation to size rack power requirements realistically, rather than assuming 100% simultaneous draw.
Power per Rack A/B Feed (KW)
Maximum available power per A-feed and B-feed in the rack (in kilowatts). The tool uses this to estimate how many hosts you can fit per rack from a power perspective.
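These power inputs combine into a simple per-host figure. A hedged sketch, assuming all PSUs actively share the load (the 3200 W PSU rating is just an example):

```python
def host_power(psus: int, watts_per_psu: int, concurrency_pct: float):
    """Full-load and concurrency-adjusted power draw for one host."""
    full = psus * watts_per_psu
    return full, full * concurrency_pct / 100

# 2 x 3200 W PSUs, 80% concurrency factor
full, concurrent = host_power(2, 3200, 80)
print(full, concurrent)  # 6400 5120.0
```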
VM Profile
At the heart of the calculator is a set of configurable VM profiles. These define the characteristics of different virtual desktop types and drive all per-user resource estimates and storage calculations. A total of 5 profiles can be defined:
VM Profile Name: A label to identify the profile (VM type 1 to 5)
Base Image Size (GB): Size of the golden image used for instant clones.
Base Image Storage Profile: The vSAN policy used for storing the base image (e.g., RAID-1, RAID-5).
Base Image Versions: Number of golden image versions you want to keep for rollback/testing.
vCPU per VM: Number of virtual CPUs assigned to each VM.
RAM per VM (GB): The memory allocation for each VM.
Delta Disk avg. GB: The expected average size of the delta disk that grows over time in non-persistent environments.
Delta Disk Storage Profile: vSAN storage policy used for delta disks (can differ from base image).
Parent VM Amount: Number of parent VMs needed across clusters or compute pools.
Parent VM Storage Profile: Storage policy applied to the parent VMs themselves.
Framebuffer (GB): The required GPU framebuffer per VM, depending on the selected use case and GPU.
% Use of Profile: Distribution weight to simulate mixed environments, for instance 80% task workers and 20% power users. Each percentage takes a share of the total user count defined in the settings: with 400 users, setting VM type 1 to 25% assumes 100 users on that profile. The percentages across all profiles should not exceed 100.
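The profile distribution is a straightforward percentage split. A small sketch of the idea (profile names and percentages are made up):

```python
def vms_per_profile(total_users: int, profile_pcts: dict) -> dict:
    """Split the user count across VM profiles by their % Use of Profile."""
    assert sum(profile_pcts.values()) <= 100, "profile percentages exceed 100%"
    return {name: round(total_users * pct / 100)
            for name, pct in profile_pcts.items()}

# 400 users: 25% on VM type 1, 75% on VM type 2
print(vms_per_profile(400, {"VM type 1": 25, "VM type 2": 75}))
# {'VM type 1': 100, 'VM type 2': 300}
```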
Data Explained: Because Spreadsheets Don’t Come with Manuals
Let’s be honest—most spreadsheets age faster than documentation. One version later and suddenly no one knows what “FB Hosts +n” means anymore.
This section is here to do what Excel never will: explain itself.
Whether you're wondering why your hosts suddenly need more RAM than your users have brain cells, or why Storage became the new sizing bottleneck (spoiler: thanks, licensing), this part walks you through what each number means, what it’s based on, and why it matters.
You’ll finally know if your power draw justifies a second rack, or if your delta disks are quietly plotting budget sabotage.
Let’s dive in—with less guessing and more clarity.
This section calculates aggregate resource requirements based on the defined VM profiles, user count, and host configuration:
VM's Needed: Total number of virtual desktops calculated from the number of users and distribution across VM profiles.
Total Frame buffer All VM: Combined GPU framebuffer needed across all VMs, based on the profile's framebuffer values and usage percentage.
Total RAM (GB): Total memory required to run all VMs, calculated from the per-VM RAM values.
Total Cores Needed workload: Raw number of physical CPU cores needed to support the VM workloads before considering hypervisor or vSAN overhead.
Total Cores Needed inc vSAN: CPU core count adjusted to include estimated overhead for the hypervisor and vSAN services.
Total AppVolumes Storage: Combined writable and package volume footprint required by AppVolumes.
Total Storage all VM's: Includes base image size, delta disks, parent VMs, and all storage elements across VM profiles.
Total Storage: Net storage required after applying usable capacity percentages from the selected vSAN storage profiles.
Total Storage with Extra Space %: Adds a safety buffer on top of the raw required storage to accommodate slack space, snapshot growth, and operational flexibility.
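Simplified to a single storage policy, the roll-up logic looks roughly like this; the sheet applies a policy per component, so treat it as a sketch of the idea, not the actual formula:

```python
def total_storage_gb(vm_storage: float, appvolumes: float,
                     usable_pct: float, extra_pct: float) -> float:
    """Gross up the net need by the policy's usable % and the extra-space buffer."""
    net = vm_storage + appvolumes
    raw = net / (usable_pct / 100)   # apply the vSAN policy usable percentage
    return raw * (1 + extra_pct / 100)

# Matches the worked example later in the post: 10000 + 2667 GB, 10% buffer
print(round(total_storage_gb(10_000, 2_667, 100, 10)))  # 13934
```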
Stats Per Host
These are calculated from a single host’s perspective and help validate sizing constraints and infrastructure impact:
Frame buffer: Total available GPU framebuffer on the host.
RAM (GB): Total installed RAM per host.
P cores: Number of physical cores available.
vCores based on overbooking: Logical CPU capacity after applying overcommit ratio.
Storage max based on license (GB): Limits imposed by licensing models that tie core count to allowed storage per core.
VM’s average based on cores: Estimated VM capacity based on CPU availability.
VM’s average based on RAM: VM capacity based on RAM constraints.
VM’s average based on Frame buffer: VM capacity based on GPU framebuffer capacity.
Total Power consumption needed (watts): Peak power demand of the host under full load.
Total Power active (watts): Power available from the host’s configured PSUs.
Total Power consumption Concurrency % (watts): Realistic expected power draw, factoring in concurrency.
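The per-host VM averages are then host capacity divided by per-VM demand, and the real limit is the smallest one. A sketch with numbers borrowed from the worked example further down (the 1024 GB RAM and 8 GB per-VM figures are assumptions):

```python
def vms_per_host(vcores, vcpu_per_vm, ram_gb, ram_per_vm, fb_gb, fb_per_vm):
    """VM capacity per constraint; the binding limit is the minimum."""
    caps = {"cpu": vcores // vcpu_per_vm,
            "ram": ram_gb // ram_per_vm,
            "fb":  fb_gb // fb_per_vm}
    return min(caps.values()), caps

# 512 vCores / 4 vCPU, 1024 GB RAM / 8 GB, 128 GB framebuffer / 4 GB
limit, caps = vms_per_host(512, 4, 1024, 8, 128, 4)
print(limit, caps)  # 32 {'cpu': 128, 'ram': 128, 'fb': 32}
```

Framebuffer is the binding constraint here, which is exactly the pattern the example cluster later runs into.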
Total Resource Needed Based on Profiles
This section summarizes the total raw infrastructure requirements based directly on the VM profiles you’ve defined — before factoring in usable storage percentages or host distribution. These values reflect the full environment total:
Total vCPU: Sum of all virtual CPUs across all VMs, considering profile usage ratios.
Total RAM: Combined RAM requirements based on per-VM allocations from the profiles.
Total BaseIMG Storage: Total space required to store all golden image versions.
Total Parent Storage: Combined footprint of parent VMs used for instant clone provisioning.
Total Delta Storage: Estimated cumulative size of all delta disks, based on profile expectations.
Total Storage: Sum of base image, parent, and delta disk storage combined.
Total Framebuffer: Aggregate GPU framebuffer required across all VMs based on selected GPU profiles and usage percentages. Note: when mixing profiles with different framebuffer sizes, the calculation treats all GPU memory as one big pool; in other words, it assumes profiles can be mixed on the same GPUs.
Final Host Calculations
This section pulls together the final host-level sizing outcomes based on profile totals, infrastructure constraints, and overcommit logic:
Hosts Needed: Number of hosts required to support all VMs based on limiting resources (CPU, RAM, framebuffer, or storage).
% VMs on Hosts (avg. all VM types): Indicates average VM saturation level across all hosts.
Storage License Cores GB: Total licensed storage capacity calculated from core-based licensing limits.
RAM Based on Hosts: Total RAM capacity aggregated across the calculated number of hosts.
FB Based on Hosts: Total framebuffer capacity based on GPU configuration per host.
Hosts with N+X: Number of hosts required including redundancy (e.g., N+1, N+2).
% VMs on Hosts (avg. all VM types): Updated saturation level including redundant hosts.
Storage Hosts +n: Total storage capacity based on the n+x host count in the cluster.
RAM Hosts +n: Total RAM capacity based on the n+x host count in the cluster.
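Under the hood, "Hosts Needed" is a ceiling division per resource dimension. A sketch using the GPU numbers from the example later in the post:

```python
import math

def hosts_needed(total_required: float, capacity_per_host: float) -> int:
    """Hosts required to cover one resource dimension."""
    return math.ceil(total_required / capacity_per_host)

# 1600 GB framebuffer total, 128 GB per host (2 x A16)
hosts = hosts_needed(1600, 128)
print(hosts, hosts + 1)  # 13 hosts, 14 with N+1
```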
Sizing Validation: What the “OK” Matrix Shows
This table is designed to validate whether the resource needs derived from your VM profiles fit within the available host capacity — both in a minimal setup and with redundancy (n+x). Each row checks one resource constraint. Each column tells you whether that specific constraint is satisfied, per setup.
What Each Row Represents
Each row simulates a different sizing assumption or limiting factor:
| Row Label | Meaning |
| --- | --- |
| Minimum Based on RAID (baseline) | Smallest number of hosts that satisfies RAID constraints (usually a minimum of 3 for RAID-5/6). |
| User Core Based | Number of hosts needed based on physical CPU cores (pCores), after vCPU overcommit and overhead. |
| Storage Based | Hosts needed to provide enough usable storage (base image + delta + parent). |
| RAM Based | Hosts required to provide enough RAM. |
| GPU Based | Hosts needed to provide enough GPU framebuffer (vGPU memory) for all VMs. |
🖥️ What Each Column Represents
Each column is one resource check (storage, RAM, framebuffer) for a given setup: the minimal host count on one side and the n+x variant on the other. Green means that constraint is satisfied for that row’s host count.
🟥🟩 Color Coding (Conditional Formatting)
Because nothing says “you screwed up” like a red cell.
You’ll notice that some fields light up like a Christmas tree — and that’s intentional.
🟩 Green = You're good. Go ahead, spin up those VMs and flex in your status meeting.
🟥 Red = Nope. Something’s off. Could be RAM, could be framebuffer, could be that you tried to fit 100 VMs on a host that barely runs Solitaire.
The colors act as your built-in sanity check. They instantly show whether the current config fits within resource limits or if you’ve just architected a thermal event. And since most of us don’t read numbers as fast as we see red flags, this gives you a visual gut-check at a glance.
You’ll thank this later when you're presenting and someone asks, “But are we sure this host config really fits the RAM limits?”
How to Read It
Find the first row where all columns are green. This is your minimal viable host count based on that constraint. For example:
If GPU based is the only row with all green cells → GPU is your bottleneck.
If Storage Based and RAM Based are green but User Core Based is red → You are CPU-constrained.
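Conceptually, the matrix is just a per-resource comparison of supply versus demand for each candidate host count. A sketch with illustrative totals:

```python
def ok_matrix(hosts: int, need: dict, per_host: dict) -> dict:
    """True (green) when hosts x per-host capacity covers the total need."""
    return {res: hosts * per_host[res] >= need[res] for res in need}

need     = {"vcpu": 1600, "ram_gb": 3200, "fb_gb": 1600}
per_host = {"vcpu": 512,  "ram_gb": 1024, "fb_gb": 128}
print(ok_matrix(4, need, per_host))   # framebuffer fails at 4 hosts
print(ok_matrix(13, need, per_host))  # everything green at 13 hosts
```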
Base Cluster Power Overview
Example Use Case:
Let’s say your RAM-based cluster needs 18,000 watts total:
At 80% concurrency = 14,400 W, or 14.4 kW
Split evenly: A-feed and B-feed each carry 9,000 W = 9 kW at full load
32A group (7.36 kW max) = ~2 groups needed per feed at full load
Concurrent load = 14.4 kW (7.2 kW per feed) → 1 group per feed could be enough
This is also calculated for a cluster with n+x nodes
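That walkthrough translates directly into a couple of lines of Python. A sketch that reproduces the numbers above (32A at 230V ≈ 7.36 kW per group, A/B feeds split evenly):

```python
import math

def groups_per_feed(total_w: float, concurrency_pct: float,
                    group_kw: float = 7.36) -> tuple:
    """32A-group count per feed at full load vs. at concurrent load."""
    per_feed_full_kw = total_w / 2 / 1000            # even A/B split
    per_feed_conc_kw = per_feed_full_kw * concurrency_pct / 100
    return (math.ceil(per_feed_full_kw / group_kw),
            math.ceil(per_feed_conc_kw / group_kw))

print(groups_per_feed(18_000, 80))  # (2, 1): 2 groups full load, 1 concurrent
```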
Cabinet Power Requirements (Full Load vs. Concurrency)
This table breaks down how many physical data center racks (cabinets) are needed to host the cluster, depending on different bottlenecks (CPU, storage, RAM, GPU). It’s divided into:
• Cabinets needed full load
This shows how many racks you’d need if every host runs at 100% power usage simultaneously. This is the worst-case power requirement.
• Cabinets needed concurrent factor
This accounts for a realistic concurrency factor (usually ~80%), meaning not all equipment runs at full load all the time. This is a more real-world number for how many racks you need based on simultaneous power draw.
This table helps determine how many physical racks you’ll need for each sizing scenario.
Let’s say we want to estimate the number of hosts needed for a Horizon VDI setup supporting 400 users, using ESA RAID-5 (FTT1), and the following assumptions:
CPUs per host: 2
Cores per CPU: 64
From the VM profile:
Base image: 500 GB
This gives the following per-host stats:
Frame buffer: 128 GB
P cores: 128
vCores based on overbooking: 512
VMs (average based on cores): 128
VMs (average based on RAM): 128
Total Power Consumption Needed (watts): 6400 W
Total Power Consumption Concurrency % (watts): 2560 W
Stats Needed Resources
• VM's Needed: 400
• Total Frame buffer All VM: 1600 GB
• Total RAM (GB): 3200
• Total Cores Needed workload: 400
• Total Cores Needed inc vSAN: 440
• Total AppVolumes storage: 2667 GB
• Total storage all VM's: 10000 GB
• Total Storage: 12667 GB
• Total Storage with extra space %: 13934 GB
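These totals hang together arithmetically. A sketch that reproduces them, assuming 4 GB framebuffer, 8 GB RAM, and 1 physical core per VM, 10% vSAN/hypervisor CPU overhead, and a 10% extra-space buffer (the per-VM figures are back-calculated from the totals, not stated explicitly above):

```python
users = 400
fb_total    = users * 4              # 1600 GB framebuffer
ram_total   = users * 8              # 3200 GB RAM
cores       = users * 1              # 400 workload cores
cores_vsan  = round(cores * 1.10)    # 440 incl. 10% overhead
storage     = 10_000 + 2_667         # VM storage + AppVolumes = 12667 GB
storage_ext = round(storage * 1.10)  # ~13934 GB with 10% extra space
print(fb_total, ram_total, cores_vsan, storage_ext)  # 1600 3200 440 13934
```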
Calculation Results: What the Sheet Spits Out (and Why)
After filling out the inputs, the sheet calculates how many hosts you'd need under different sizing assumptions — and whether each dimension (storage, cores, RAM, framebuffer) is sufficient in that scenario. You’ll see both the "just enough" (min config) and the "with N+x" (resilient) setups.
| Sizing Basis | Hosts Needed | % VMs per Host | Storage Capacity | RAM | Framebuffer |
| --- | --- | --- | --- | --- | --- |
| RAID baseline | 4 | 116 | OK | OK | ❌ (Fails) |
| User Core based | 4 | 116 | OK | OK | ❌ |
| Storage based | 2 | 58 | ❌ (Fails) | ❌ | ❌ |
| RAM based | 4 | 116 | OK | OK | ❌ |
| GPU based | 13 | 378 | OK | OK | OK |
Here’s how to read this:
Minimum Based on RAID: 4 hosts get you enough vCPU, RAM, and storage capacity using RAID-5 (ESA FTT1) — but framebuffer is the bottleneck. You can’t support all 400 users unless you add more GPUs or hosts.
User Core Based: CPU-bound sizing also needs 4 hosts, and just like above, framebuffer is the first thing to blow up.
Storage Based: You could theoretically do it with just 2 hosts from a storage perspective, but that assumes no constraints on RAM or vGPU. In practice, this won’t fly.
RAM Based: Similar to CPU-based sizing — good across the board except for framebuffer.
GPU Based: This is your real constraint. It takes 13 hosts (with 2 A16 cards per host) to provide enough framebuffer (1,600 GB total) to handle all 400 users. Only here do you get green across the board.
The "n+1" section just adds a redundant host to each config. It shows:
If you see red in the n+x section, it means that even with redundancy, the setup doesn’t meet the requirement.
Power and Cabinet Sizing: Because Electrons Cost Money Too
Let’s face it — you can do all the VM and GPU sizing in the world, but if your racks can't power it, you're just building a spreadsheet-based fantasy.
This section breaks down the total power requirements per cluster setup, based on both full load and realistic concurrency factors (80%), and maps that to rack-level power groups and cabinet needs.
Power Calculations
| Sizing Basis | Total Power (W) | Power A Feed (kW) | Power B Feed (kW) | Concurrency Power A+B (kW) |
| --- | --- | --- | --- | --- |
| RAID Baseline | 25,600 | 12.8 | 12.8 | 20.48 (80% of total) |
| User Core Based | 25,600 | 12.8 | 12.8 | 20.48 |
| Storage Based | 12,800 | 6.4 | 6.4 | 10.24 |
| RAM Based | 25,600 | 12.8 | 12.8 | 20.48 |
| GPU Based | 44,800 | 22.4 | 22.4 | 35.84 |
🟨 In short: GPU-based configurations eat power for breakfast.
N+1 Power Requirements
When you factor in a redundant host (N+1):
| Sizing Basis | Total Power (W) | Power A Feed (kW) | Power B Feed (kW) | Concurrency Power A+B (kW) |
| --- | --- | --- | --- | --- |
| RAID Baseline (N+1) | 32,000 | 16 | 16 | 25.6 |
| Storage Based (N+1) | 19,200 | 9.6 | 9.6 | 15.36 |
| GPU Based (N+1) | 51,200 | 25.6 | 25.6 | 40.96 |
Power Group Calculations (Per Feed Phase)
32A groups @ 230V deliver ~7.36 kW
16A groups @ 230V deliver ~3.68 kW
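Those kW figures are just amps times volts. A one-liner sketch (single phase, unity power factor assumed):

```python
def group_kw(amps: float, volts: float = 230) -> float:
    """Usable kW for one power group (single phase, power factor 1 assumed)."""
    return amps * volts / 1000

print(group_kw(32), group_kw(16))  # 7.36 3.68
```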
| Cluster Type | 32A Groups (Full Load) | 16A Groups (Full Load) | 32A Groups (Concurrent) | 16A Groups (Concurrent) |
| --- | --- | --- | --- | --- |
| RAID Baseline | 4 | 7 | 3 | 6 |
| Storage Based | 2 | 4 | 2 | 3 |
| GPU Based | 7 | 13 | 5 | 10 |
| RAID Baseline (N+1) | 5 | 9 | 4 | 7 |
| GPU Based (N+1) | 7 | 14 | 6 | 12 |
Cabinet Sizing
Finally, based on power distribution and feed concurrency:
| Sizing Basis | Cabinets Needed (Full Load) | Cabinets Needed (Concurrent) |
| --- | --- | --- |
| RAID / CPU / RAM | 1 | 1 |
| Storage Based | 1 | 1 |
| GPU Based | 2 | 2 |
| GPU Based (N+1) | 3 | 2 |
So yes, going GPU-heavy will bump you from 1 to 2+ cabinets, even at 80% concurrency — especially when using A16 cards in quantity.
Final Thoughts
The Excel sheet is partially locked — just enough to prevent accidental edits to key formulas and structure. It’s intentionally limited to ensure it’s used as a calculator, not as a sandbox for changes.
I originally built this for myself after too many late nights staring at sizing docs that contradicted each other. Then I figured, why not share it? If it saves you a bit of time or helps you explain sizing to someone else without opening a 60-slide vendor deck, that’s a win.
It’s a living file — and like all living things, occasionally it mutates. I might update it when licenses change, GPUs evolve, or when I decide to fix something I broke. If you think something’s off — or believe it needs dramatic improvement — drop me a heads-up.
And if you’re feeling brave: leave a comment. Whether you loved it, loathed it, or just want to tell me my math is way off — feedback is welcome.
Just please… don’t let procurement use it as a contractual capacity plan.
I really hope this helps others get a clearer view of what’s often a black-box discussion. If it makes sizing feel just a little less like guessing, I’ll consider that a win.
Enjoy,
Mark
Mark Platte. Born in January 1984. I currently work as an IT architect in a hospital environment, with a career shaped by hands-on experience in application development, storage engineering, and infrastructure design. My roots are in software, but over time I moved deeper into system architecture, working closely with storage platforms, virtualization, and security, especially in regulated and research-intensive environments. I have a strong focus on building stable, secure, and manageable IT solutions, particularly in complex environments where clinical systems, research data, and compliance requirements intersect. I’m especially experienced in enterprise storage design, backup strategies, and performance tuning, often acting as the bridge between engineering teams and long-term architectural planning. I enjoy solving difficult problems and still believe most issues in IT can be fixed with enough determination, focus, and sometimes budget. It’s that drive to find solutions that keeps me motivated.