Installing Proxmox VE on an Intel NUC 10.

Brian King

TL;DR.

This post is a comprehensive walk-through of how I install Proxmox VE on an Intel NUC 10. I cover the step-by-step installation process and share tips for optimizing the virtual environment. This article is ideal for tech enthusiasts who want to maximize the capabilities of their Homelab by setting up a robust virtualization platform that supports both containers and virtual machines.

Attributions:

A post from Proxmox ↗ on setting up a Proxmox VE virtual machine ↗, and

A video from Tailscale about installing Proxmox VE on a PC ↗.


An Introduction.

Containers and virtual machines are technologies that allow operating systems and applications to be isolated within a runtime environment. Depending on the hardware specifications, Proxmox VE allows multiple containers and virtual machines to run on a single PC.

The purpose of this post is to demonstrate how I install Proxmox VE and create a container.


The Big Picture.

This comprehensive guide shows how I install Proxmox VE on an Intel NUC 10. It covers the prerequisites, the step-by-step installation process, and the tips I use to optimize my virtual environment. Proxmox VE is perfect for tech enthusiasts looking to get the most out of their Homelab PCs.


Prerequisites.

  • A Debian-based Linux distro (I use Ubuntu),

  • A USB thumb drive.


Updating my Base System.

  • From the (base) terminal, I update my (base) system:
sudo apt clean && \
sudo apt update && \
sudo apt dist-upgrade -y && \
sudo apt --fix-broken install && \
sudo apt autoclean && \
sudo apt autoremove -y

NOTE: The Ollama LLM manager is already installed on my (base) system.


What is an Intel NUC?

An Intel NUC (Next Unit of Computing) is a small-form-factor computer designed by Intel, offering a compact and powerful computing solution. These PCs typically come without RAM, storage, or an operating system, allowing me to customize my system according to my needs.

NUC Specifications.

Model: BXNUC10i3FNHN
Processor: Intel i3-10110U, 2.10GHz dual core, 4 threads, up to 4.10GHz, 4MB SmartCache
Memory: Dual channel, 2x DDR4-2666 SODIMM slots, 1.2V, 64GB maximum
Graphics: Intel UHD Graphics, 1x HDMI 2.0a port, 1x USB 3.1 Gen 2 (10 Gbps), DisplayPort 1.2 via USB-C
Audio: Up to 7.1 surround audio via HDMI or DisplayPort signals, headphone/microphone jack on the front panel, dual-array front mics on the chassis front
Peripheral Connectivity: 1x HDMI 2.0 port with 4K at 60Hz, 1x USB 3.1 Gen 2 (10 Gbps), DisplayPort 1.2 via USB-C, 1x front USB 3.1 Type-A (Gen 2) port, 1x front USB 3.1 Type-C (Gen 2) port, 2x rear USB 3.1 Type-A (Gen 2), 2x Ethernet ports, 2x internal USB 2.0 via header
Storage: 1x M.2 22x42/80 (key M) slot for SATA3 or PCIe X4 Gen3 NVMe, SATA interface, SDXC slot with UHS-II support
Networking: Intel Wi-Fi 6 AX201, Bluetooth, i219-V Gigabit Ethernet
Power Adapter: 19VDC power adapter

Hardware Specifications.

Memory: 64GB
Storage: 256GB M.2 internal, 256GB SSD internal, 2TB HDD external
OS: Proxmox VE (Debian-based, with a customized Linux kernel)

What is Proxmox VE?

Proxmox VE (Virtual Environment) is an open-source virtualization platform designed for setting up hyper-converged infrastructure. It is released under the GNU AGPLv3 license and can be used for commercial purposes. It lets me deploy and manage containers and virtual machines. Proxmox VE is built on Debian, ships a customized Linux kernel, and supports two types of virtualization: containers with LXC and virtual machines with KVM. Proxmox VE features a web-based management interface, and a mobile app is also available for managing PVEs (Proxmox Virtual Environments).


Creating a Proxmox VE Installation Thumb Drive.

  • I download the Proxmox VE installer ISO from the official Proxmox downloads page.

  • I plug the thumb drive into my PC.

  • I start the balenaEtcher-1.14.3-x64.AppImage imaging utility that runs on Ubuntu.

  • I select the 1.57GB Proxmox VE ISO file as the source, the 32GB thumb drive as the target, and then I click the blue Flash button (a terminal-based alternative is sketched after this list).

  • After the ISO has been successfully flashed onto the thumb drive, I eject the thumb drive and remove it from my PC.
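
NOTE: As an alternative to balenaEtcher, the ISO can also be written from a terminal with dd. This is only a sketch: the ISO filename and the /dev/sdX device name are placeholders, and writing to the wrong device will destroy its data, so I confirm the target with lsblk before running it:
sudo dd if=proxmox-ve.iso of=/dev/sdX bs=4M status=progress conv=fsync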

Installing Proxmox VE.

NOTE: Proxmox VE itself can install onto a single drive, but my setup uses 3 drives that are directly connected to the NUC. I have an internal 256GB M.2 drive that uses the NVMe interface labelled prox-int-nvme, an internal 256GB SSD that uses the SATA interface which has been split into 2 × 128GB partitions labelled prox-int-sata1 & prox-int-sata2, and an external 2TB HDD that uses the USB 3.0 interface labelled prox-ext-usb3. These configurations will be altered during the Proxmox VE setup process.
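
To confirm which device is which before anything gets wiped, the drive layout can be listed from a terminal (on the existing system, a live USB, or later from the Proxmox VE shell); the column list here is just my preference:
lsblk -o NAME,SIZE,TYPE,TRAN,MODEL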

  • I plug the Proxmox VE installation thumb drive into my NUC.

  • I power up my NUC.

  • I follow the installation instructions.

  • I use the following network settings that work on my LAN:

NOTE: During the Management Network Configuration setup, I can use IPv4 or IPv6 but I CANNOT mix the 2 protocols. The Management Interface is the name of the NIC (Network Interface Card) installed in the NUC which, in my case, is eno1. The Hostname (FQDN) only matters if I intend to expose, and host, Proxmox VE over the Internet. The IP Address (CIDR), as assigned by my router, is 192.168.0.50 with a subnet mask of /24 (255.255.255.0) that defines the subnet in my LAN to which this device belongs. The Gateway is the IP address of my router, 192.168.0.1, and is needed to connect the NUC to the Internet. The DHCP server is also found at 192.168.0.1 because my router includes the service responsible for assigning IP addresses.
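
For reference, the values I enter on that screen look roughly like this; the FQDN is only an example, and the DNS server is an assumption based on my router also acting as the LAN's resolver:

Management Interface: eno1
Hostname (FQDN): pve.home.arpa (example only)
IP Address (CIDR): 192.168.0.50/24
Gateway: 192.168.0.1
DNS Server: 192.168.0.1 (assumption: my router answers DNS queries)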

  • After installation, the NUC will reboot.

  • At this time, I remove the USB installation thumb drive.

  • At the login screen, I make a note of the Proxmox VE IP address and :port number that is displayed (the web interface uses port 8006 by default).

  • On a PC that is connected to the same network as Proxmox VE, I open a browser, visit that IP address and :port (in my case, https://192.168.0.50:8006), and bookmark the address.

  • At the browser login screen, my user name is ‘root’ and my password is the same one I set during the installation.

  • From the terminal, I SSH into Proxmox VE with root@ip_address and password.
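
In my case that command looks like this, using the address assigned by my router:
ssh root@192.168.0.50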


A Note about Routers and DHCP Servers.

My router has 2 jobs:

  • Connect to an ISP (Internet Service Provider) that, in turn, provides access to the Internet, and

  • Route that connection to all the wired, and wireless, devices that share the LAN (Local Area Network).

As the name suggests, Internet connectivity is routed to all of the linked devices in the LAN. There are many devices, like smart phones, tablets, PCs, notebooks, and others, that use an Internet connection to improve their functionality. Many devices, like smart TVs, require that connectivity.

The problem is that all the devices that connect to the LAN require unique identifiers. These identifiers are called IP addresses. But where does a device get an IP address? To solve the IP address problem, my router has a built-in DHCP server where DHCP stands for Dynamic Host Configuration Protocol. Almost all routers have a DHCP server and the purpose of this server is to assign a dynamic IP address to every wired, and wireless, device in the LAN.

In most cases, each device in the LAN is dynamically, i.e. automatically, assigned an IP address from a pool of available, unassigned addresses. Most often, devices will use the same dynamic IP addresses when they connect to the LAN, but sometimes the DHCP server will issue new dynamic IP addresses. This is a fine solution and is NOT a problem. In most cases.

Servers, however, are special use cases. Proxmox VE, as well as the containers and virtual machines it manages, requires static IP addresses. My Pi5 SBCs (Single Board Computers) will also need static IP addresses if they are to become nodes in my server cluster. The reason each node in a cluster needs an IP address that doesn’t change is that the nodes need to be able to find each other.

Replacing dynamic IP addresses with static IP addresses requires:

  • Accessing my router and making changes to the DHCP settings for each server node (which is beyond the scope of this post), and

  • Reflecting those changes in each container, virtual machine, and Pi5 that makes up my server cluster.

In a Proxmox VE container, these changes are made in the Edit: Network Device (veth) dialog.

NOTE: There are prerequisites that I need to meet before I can build this container.
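
As an aside, once a container exists, the same static address can be set from the Proxmox VE host shell instead of the Edit: Network Device dialog. A minimal sketch, assuming the container ID of 101 used later in this post and an example address of 192.168.0.60/24 on my subnet:
pct set 101 --net0 name=eth0,bridge=vmbr0,ip=192.168.0.60/24,gw=192.168.0.1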


Downloading an OS to Proxmox VE.

  • I use a browser to login to Proxmox VE.

  • On the left of the screen, under Server View, I go to Datacenter > pve > local (pve).

  • In the 2nd pane, I click ISO Images.

  • In the 3rd pane, I click the grey Download from URL button:

  • In the pop-up modal, I add https://releases.ubuntu.com/24.04.2/ubuntu-24.04.2-live-server-amd64.iso to the URL: field so that Proxmox VE can download the ISO for Ubuntu Server 24.04.2 LTS.

  • I click the blue Query URL button to check the link:

  • I click the blue Download button to start the download:
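
The same ISO can also be fetched from the Proxmox VE shell instead of the Download from URL dialog; a sketch that assumes the default directory the local storage uses for ISO images:
cd /var/lib/vz/template/iso
wget https://releases.ubuntu.com/24.04.2/ubuntu-24.04.2-live-server-amd64.iso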


Initializing Drives for Containers and Virtual Machines.

NOTE: The following is adapted from the instructions provided by the Proxmox VE team.

  • I use a browser to login to Proxmox VE.

  • On the left of the screen, under Server View, I go to Datacenter > pve.

  • In the 2nd pane, I click Disks.

  • In the 3rd pane, I select the /dev/sda drive (that currently has 2 partitions).

  • I click the grey Wipe Disk button.

  • In the Confirm modal, I click the blue Yes button.

  • I repeat the process for the /dev/sdb disk.

  • Back in the 2nd pane, I click Disks > LVM.

  • In the 3rd pane, I click the grey Create: Volume Group button:

  • In the Create: LVM Volume Group modal, I select the /dev/sda disk:

  • I name the Group ‘Containers’, ensure the Add Storage: option is ticked, and click the blue Create button.

  • I repeat the process for the /dev/sdb drive, except I name that group ‘VMs’.

NOTE: The /dev/sda disk is an internal SSD that uses the SATA interface while the /dev/sdb disk is an external HDD that uses the USB 3.0 interface.
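
For reference, the Wipe Disk and Create: Volume Group steps have a rough shell equivalent. This is only a sketch, assuming the disks really are /dev/sda and /dev/sdb (confirm with lsblk first, because these commands are destructive):
wipefs --all /dev/sda                                                  # clears existing filesystem and partition signatures
pvcreate /dev/sda
vgcreate Containers /dev/sda
pvesm add lvm Containers --vgname Containers --content rootdir,images  # register the group as Proxmox storage
wipefs --all /dev/sdb
pvcreate /dev/sdb
vgcreate VMs /dev/sdb
pvesm add lvm VMs --vgname VMs --content rootdir,images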

Installing a CT Template.

  • I use a browser to login to Proxmox VE.

  • On the left of the screen, under Server View, I go to Datacenter > pve > local (pve).

  • In the 2nd pane, I click CT Templates.

  • In the 3rd pane, I click the grey Templates button.

  • In the Templates modal, I select the ubuntu-24.04-standard template and click the blue Download button:
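
The same template can also be pulled from the Proxmox VE shell with pveam; a sketch where the exact filename has to be taken from the listing at the time:
pveam update
pveam available --section system | grep ubuntu-24.04
pveam download local <exact-template-name-from-the-listing>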


Running the Helper Script.

  • I copy the script command.

  • I open a local terminal.

NOTE: I can run the following commands from a browser that is logged into Proxmox VE by going to Datacenter > pve > Shell. However, the text is very small, so I am using a local terminal instead.

  • I use the ssh (secure shell) command to login to the remote Proxmox VE:
ssh root@192.168.0.50
  • The password is the same one I use when logging in to the Proxmox VE browser GUI:

  • I paste the helper script command into the terminal:
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/tools/pve/post-pve-install.sh)"

NOTE: I answer ‘yes’ to MOST of the questions when asked, but there are 3 exceptions, as listed below.

  • I answer ‘no’ to ‘Disable high availability?’:

NOTE: High availability will be used when other nodes are created.

  • I answer ‘no’ to ‘Update Proxmox VE now?’:

NOTE: I will update Proxmox manually.

  • I answer ‘no’ to ‘Reboot Proxmox VE now?’:

NOTE: I will reboot once I finish updating the remote system and upgrading Proxmox VE.

  • Once the script returns me to the terminal screen, I update the system:
apt update

NOTE: Sudo is not required as I am in the root account which has full privileges.

  • Once the package lists have been refreshed (the updates themselves are not yet installed), I run the following command to upgrade the system and the Proxmox VE installation:
pveupgrade -y
  • Due to the installation of a kernel update, I will reboot the remote Proxmox VE system:
reboot

NOTE: The reboot command will automatically end the session between my local terminal and the remote Proxmox VE system.

Now that all of the requirements are in place, I can take the next step by using a browser to login to Proxmox VE, clicking the blue Create CT button (top-right of the screen), following the resulting prompts, and creating a container.
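
For completeness, a container can also be created from the host shell with pct create instead of the Create CT wizard. A minimal sketch, reusing the CT ID, hostname, and storage names from this post; the template filename, core count, memory size, network address, and password are placeholders or example values:
pct create 101 local:vztmpl/<ubuntu-24.04-standard-template-file> \
  --hostname ServerLab1 \
  --storage Containers \
  --cores 2 \
  --memory 2048 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.0.60/24,gw=192.168.0.1 \
  --password <root_password>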


Creating a New Account for the Container.

After creating a container by clicking the blue Create CT button (top-right of the screen), and following the resulting prompts, I can now prepare the container for use.

  • I use a browser to login to Proxmox VE.

  • On the left of the screen, under Server View, I go to Datacenter > pve > 101 ServerLab1 (where the CT ID and Hostname are the settings I gave when creating the container).

  • In the 2nd pane, I click Console.

  • In the 3rd pane, I login to the ServerLab1 container as root, using the same password I use to login to Proxmox VE.

  • Once logged in, I create a new user account:

adduser <user_name>
  • I add the new user to the 'sudo' group:
usermod -aG sudo <user_name>
  • I exit the root account of the container:
exit
  • Towards the top-left of the console window, I open the drop-down menu of the Shutdown option by clicking the down arrow (⌄) and selecting Reboot:
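
The same reboot can also be triggered from the Proxmox VE host shell, assuming the container ID of 101 used above:
pct reboot 101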


Logging In from a Remote Terminal.

  • I open a local terminal.

  • I use the ssh (secure shell) command to login to the remote container:
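ssh <user_name>@<container_ip>

NOTE: The user name is the account created above and <container_ip> is the static IP address assigned to the container; both values here are placeholders.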

NOTE: My yt account is temporary, is used for generating screenshots, and will be deleted after it has served its purpose.


The Results.

Installing Proxmox VE on an Intel NUC 10 provides a compact and efficient solution for managing virtual environments. The process involves preparing the hardware, creating installation media, and configuring the system to suit my network and storage needs. By following the steps outlined, I can effectively set up a robust virtualization platform that supports both containers and virtual machines. This setup not only maximizes the capabilities of the Intel NUC but also offers flexibility and scalability for various computing tasks.


In Conclusion.

In this guide, I walked through the prerequisites, step-by-step installation process, and tips for optimizing my virtual environment setup. From creating a Proxmox VE installation thumb drive to configuring network settings and initializing drives, the result is an Intel NUC 10 that hosts a container that is accessible across my LAN.

I learned how to download an OS to Proxmox VE, install CT templates, and create new accounts for my first container. By following these steps, I maximized the capabilities of the Intel NUC and now enjoy a robust virtualization platform that supports both containers and virtual machines. This setup offers flexibility and scalability for various computing tasks, making it perfect for tech enthusiasts and professionals alike.

Have you tried setting up Proxmox VE on a spare PC? What challenges did you face, and how did you overcome them? Let's discuss in the comments!

Until next time: Be safe, be kind, be awesome.


Hash Tags.

#ProxmoxVE #IntelNUC #Virtualization #Homelab #Containers #VirtualMachines #Networking #ServerSetup #ServerCluster #Linux #Debian #Ubuntu #TechGuide #TechEnthusiast

