4 of 10: Installing Docker Desktop (deprecated).
Table of contents
- TL;DR.
- An Introduction.
- The Big Picture.
- Creating, and Using, RSA Keys.
- Installing Docker Desktop from the Repo.
- Deleting Docker Desktop.
- ALTERNATE INSTALL: Downloading the Docker Desktop File.
- ALTERNATE INSTALL: Installing Docker Desktop.
- Post-Docker Desktop Installation.
- Preparing the Remote Docker Container for Docker Desktop.
- The Results.
- In Conclusion.
TL;DR.
Setting up Docker Desktop on a local Ubuntu system, and preparing a remote Docker container, enables benefits like isolation, portability, and scalability. Utilizing technologies like Docker, Docker Desktop, and Portainer within an LXC environment provides a versatile platform for running "12 Startups" experiments.
An Introduction.
My previous post in this 10-part mini-series covered how I installed Docker within a remote container. This time, I'm going to show how I install a local Docker Desktop that connects to the remote Docker container.
The purpose of this post is to present a process for connecting Docker Desktop to a remote Docker container.
The "12 Startups in 12 Months" challenge has helped me focus on my ambitions. As someone who enjoys SysOps, I'm happy to tinker away on my PC and NUC in my "office" (which is, technically, a hallway). As someone who only has 10 Months left on his "12 Months" challenge, I'd better make my dabbling count.
The Big Picture.
When I look at my goals from a high-altitude perspective, I notice discrepancies between what I want to achieve and my current efforts. The reality is: I don't see Docker Desktop playing a significant part in my stack. However, this post may help others to achieve their goals.
Anyway, the first step to connecting to the Docker container is to set up an SSH (Secure SHell) connection.
Creating, and Using, RSA Keys.
These steps will enable the local workstation to create an SSH connection to the remote Docker container. The purpose of this setup is to do away with username and password combinations and replace them with public key encryption.
1/4 - Creating an RSA Key Pair on the Workstation.
- From the workstation terminal (CTRL+ALT+T), I start the ssh-agent:
eval "$(ssh-agent -s)"
- I generate an RSA key pair called "/home/brian/.ssh/docker":
ssh-keygen -t rsa -b 4096 -f /home/brian/.ssh/docker
- I add my SSH private key to the ssh-agent:
ssh-add /home/brian/.ssh/docker
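NOTE: To confirm the key actually loaded, I can list the fingerprints of every key the agent is holding (an optional sanity check, not one of the original steps):
ssh-add -l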
2/4 - Uploading a Public Key to the Remote Container.
- From the workstation terminal, I use "ssh-copy-id" to upload the locally-generated public key to the Docker container:
ssh-copy-id -i /home/brian/.ssh/docker.pub yt@192.168.?.?
NOTE: I replace the "?" with the actual IP address for the container.
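NOTE: If "ssh-copy-id" is unavailable, the same result can be achieved manually. This is a minimal sketch that appends the public key to the remote "authorized_keys" file, assuming the remote ".ssh" directory may not exist yet:
cat /home/brian/.ssh/docker.pub | ssh yt@192.168.?.? "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"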
3/4 - Logging In to the Remote Container.
- From the workstation terminal (CTRL+ALT+T), I log in to the "yt" account of the Docker container:
ssh 'yt@192.168.?.?'
NOTE: I replace the "?" with the actual IP address for the container.
4/4 - Disabling Password Authentication.
- From the workstation terminal (CTRL+ALT+T) connected to the Docker container, I open the "sshd_config" file:
sudo nano /etc/ssh/sshd_config
- I edit, and save, the following "sshd_config" settings:
PasswordAuthentication no
PermitRootLogin no
Protocol 2
NOTE: Another change I typically make is switching out the default port number of 22 for something less obvious, e.g. 4444 (an example that is itself well-known, so don't actually use port 4444):
port 4444
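NOTE: Before restarting the service, I can ask sshd to validate the edited configuration; a syntax error caught here is much cheaper than being locked out of the container:
sudo sshd -t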
- I restart the "ssh" service:
sudo systemctl restart ssh.service
- I reboot the remote container:
sudo reboot
NOTE: Running the exit, sudo reboot, or sudo poweroff commands will close the connection to the remote host.
- Finally, I test the connection to the remote container:
ssh -p '4444' 'yt@192.168.?.?'
NOTE: I replace the "?" with the actual IP address for the container.
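NOTE: As a convenience (my own addition, not required by these steps), an entry in the local "~/.ssh/config" file wraps the IP address, port, and key into a single alias, after which "ssh docker" is enough to connect. The host alias "docker" is a hypothetical label:
Host docker
    HostName 192.168.?.?
    Port 4444
    User yt
    IdentityFile ~/.ssh/docker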
Now that I have an SSH connection to the remote container, the next step is to set up the Docker Desktop repository on my workstation system.
Installing Docker Desktop from the Repo.
- From the workstation terminal (CTRL+ALT+T), I run the updates and upgrades:
sudo apt clean && sudo apt update && sudo apt dist-upgrade -y && sudo apt autoremove -y
- I install the prerequisites:
sudo apt install -y ca-certificates curl gnupg lsb-release
- I make a keyrings directory:
sudo mkdir -p /etc/apt/keyrings
- I download the Docker key to the keyrings directory:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
- I download the Docker repository entry:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
- I update my local repo entries:
sudo apt update
- I install Docker:
sudo apt install docker-desktop
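NOTE: A quick way to confirm the package comes from Docker's repository rather than Ubuntu's is to check the apt policy for it (the package name here follows the section's intent):
apt-cache policy docker-desktop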
Deleting Docker Desktop.
- From the workstation terminal (CTRL+ALT+T), I use the apt command to remove docker-desktop:
sudo apt remove docker-desktop
- I recursively remove the hidden Docker Desktop directory (while leaving the Docker directory intact):
rm -r $HOME/.docker/desktop
- I remove the Docker CLI:
sudo rm /usr/local/bin/com.docker.cli
- I use the apt command to purge Docker Desktop from the system:
sudo apt purge docker-desktop
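NOTE: To confirm nothing Docker-related was left behind, I can list any remaining matching packages (an optional sanity check):
dpkg -l | grep -i docker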
ALTERNATE INSTALL: Downloading the Docker Desktop File.
- From a browser, I download the Docker Desktop deb file to the Downloads directory:
https://www.docker.com/products/docker-desktop/
- For non-GNOME desktops, from the workstation terminal (CTRL+ALT+T), I install the following:
sudo apt install gnome-terminal
ALTERNATE INSTALL: Installing Docker Desktop.
- From the workstation terminal (CTRL+ALT+T), I change to the Downloads directory:
cd ~/Downloads
- I update the repo list:
sudo apt update
- I install Docker Desktop:
sudo apt install ./docker-desktop-<version>-<arch>.deb
NOTE: I ignore the following error message after apt completes the installation:
N: Download is performed unsandboxed as root, as file '/home/user/Downloads/docker-desktop.deb' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
Post-Docker Desktop Installation.
- I launch Docker Desktop from the Applications menu, or by running the following command in a terminal (CTRL+ALT+T):
systemctl --user start docker-desktop
- I check the local Docker installation with these two commands:
docker compose version
docker --version
- I enable Docker Desktop to start on boot (Settings > General > Start Docker Desktop when you log in):
systemctl --user enable docker-desktop
- I stop Docker Desktop from the Docker menu (by selecting Quit Docker Desktop), or by running the following:
systemctl --user stop docker-desktop
NOTE: When a new version of Docker Desktop is released, all of these steps will need to be re-run.
Preparing the Remote Docker Container for Docker Desktop.
Thanks to the key pair I created on my workstation, whose public key I pushed across the LAN, I now have a password-less connection to the Docker container running on my remote homelab. This opens the way for exploring multiple deployment options, remote access solutions, and distant distribution practices.
Preparing the Remote Docker Service.
- From the workstation terminal (CTRL+ALT+T), I log in to the Docker container:
ssh -p '4444' 'yt@192.168.?.?'
NOTE: I replace the "?" with the actual IP address for the container.
- I add a new group (which probably already exists):
sudo groupadd docker
- I add my (current, logged-in) account to the group:
sudo usermod -aG docker ${USER}
- I reboot the remote container:
sudo reboot
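NOTE: If a full reboot feels heavy-handed, the "newgrp" command applies the new group membership to the current shell session instead (my assumption being that only this session needs it):
newgrp docker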
Running the "hello-world" App.
- From the workstation terminal (CTRL+ALT+T), I log back into the Docker container (which now recognises my account as a member of the docker group):
ssh -p '4444' 'yt@192.168.?.?'
NOTE: I replace the "?" with the actual IP address for the container.
- I run the "hello-world" app without the sudo command, ensuring everything works:
docker run hello-world
- One way of fixing a "permission denied" error is to delete the hidden .docker directory (which will rebuild itself with the correct permissions):
sudo rm -R /home/${USER}/.docker
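NOTE: An alternative fix that preserves the existing configuration, assuming the problem is file ownership, is to take ownership of the directory instead of deleting it:
sudo chown -R "$USER":"$USER" /home/"$USER"/.docker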
Running the "getting-started" App.
- From the workstation terminal (CTRL+ALT+T) connected to the Docker container, I run the "getting-started" app:
docker run -d -p 80:80 docker/getting-started
- I open a browser and visit the "getting-started" app that is now running within the Docker container on port 80:
http://192.168.?.?:80
NOTE: I replace the "?" with the actual IP address for the container.
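NOTE: The same check can be made from the workstation terminal without a browser; a HEAD request that returns "200 OK" means the app is serving (the "?" placeholders follow the same convention as above):
curl -I http://192.168.?.?:80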
If the tutorial successfully loads in a browser, then it's time to install Portainer within the remote container.
Installing Portainer in the Remote Container.
Portainer is a web-based user interface for managing Docker environments. It can be used to manage Docker containers, images, networks and volumes.
- From the workstation terminal (CTRL+ALT+T), I log in to the Docker container:
ssh -p '4444' 'yt@192.168.?.?'
NOTE: I replace the "?" with the actual IP address for the container.
- I create a Docker volume to store the Portainer configuration data:
docker volume create portainer_data
- I pull the Portainer image from Docker Hub:
docker pull portainer/portainer-ce
- I run the Portainer container with the following command:
docker run -d -p 9000:9000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
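NOTE: To confirm the container is up and the "--restart always" policy took effect, I can filter the running containers (an optional check):
docker ps --filter "name=portainer"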
Accessing Portainer Locally.
- I open a browser on my local workstation system, point it to the IP address for the remote container, and access Portainer on port 9000:
http://192.168.?.?:9000
NOTE: I replace the "?" with the actual IP address for the container.
After setting up the remote Portainer account, I now need to enable the API. This API is what Docker Desktop will use to connect to the remote container.
Enabling the Docker API.
- From the workstation terminal (CTRL+ALT+T), I log in to the remote container:
ssh -p '4444' 'yt@192.168.?.?'
NOTE: I replace the "?" with the actual IP address for the container.
- I use Nano to open the docker.service file:
sudo nano /lib/systemd/system/docker.service
- I comment out (#) the existing ExecStart line, replace it with the following, save the changes, and exit Nano:
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://192.168.?.?:2375
NOTE: I replace the "?" with the actual IP address for the container.
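NOTE: An alternative to editing the unit file in place (my preference, though not the method described above) is a systemd drop-in override, which survives package upgrades. Running "sudo systemctl edit docker.service" opens an override file; the empty "ExecStart=" line is required to clear the original value before redefining it:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://192.168.?.?:2375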
- I reload the Docker daemon:
sudo systemctl daemon-reload
- I stop Docker:
sudo systemctl stop docker.service
- I start Docker:
sudo systemctl start docker.service
- And I check the status:
service docker status
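NOTE: From the workstation, the newly-exposed API can be exercised directly; "/version" is a standard Docker Engine API endpoint, so a JSON response confirms the TCP socket is listening:
curl http://192.168.?.?:2375/version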
Once Docker successfully boots, I will connect the local Docker Desktop app to the remote Docker service running in the remote container.
Installing the Portainer Extension in Docker Desktop.
- On the workstation system, I run Docker Desktop.
- In the left panel, I click the ⊕ Add Extensions button.
- In the Browse tab, I search for, and install, the Portainer extension.
- I open the Portainer extension.
NOTE: A local instance of Docker, the one associated with Docker Desktop, will be listed here.
- I click the Environments logo in the middle console, immediately under the Home logo near the top. (I can expand the middle panel by clicking the right-pointing chevron (>>) at the top, the one that looks like a fast-forward icon.)
- I click the blue + Add environment button.
- I select the Docker Standalone option and click the blue Start Wizard button at the bottom of the window.
- I select the API option, Name the environment "docker", specify the Docker API URL as "192.168.188.[?]:2375", and click the blue Connect button at the bottom of the page.
- The remote docker environment loads into the main panel, and can be closed with the x symbol of the new panel that has loaded into the middle console.
- To re-connect to my remote container, I click the Portainer extension on the left, and click the blue Live connect button for the docker environment.
The Results.
This post demonstrates the process of setting up Docker Desktop on an Ubuntu system and preparing a remote Docker container for its use. By utilizing technologies like Docker, Docker Desktop, and Portainer within an LXC environment, I can achieve benefits such as isolation, portability, and scalability. As I continue building my 12 Startups challenge, these components will come together to form a cohesive tapestry, enabling me to create versatile and efficient applications.
In Conclusion.
The advantages of building, and running, a Docker service within a container are numerous: Isolation and separation, portability and scalability, versioning and security, but most importantly, the services that run in containers are still accessible from the outside world.
I'm gradually assembling the components that I'll need to build "12 Startups". Each element in my constellation resembles an individual jigsaw puzzle piece. Docker, Docker Desktop, Portainer, and the technologies I've covered (and will cover) in other posts should eventually coalesce around the gravitational force of my code. As the pieces come together, I'll see a tapestry of coherence slowly unfold before my eyes. I'm looking forward to sharing more of my process with you, and I'm excited to see these pieces come together.
Be sure to check out the next post in this series.
Until next time: Be safe, be kind, be awesome.
Written by Brian King
Thank you for reading this post. My name is Brian and I'm a developer from New Zealand. I've been interested in computers since the early 1990s. My first language was QBASIC. (Things have changed since the days of MS-DOS.) I am the managing director of a one-man startup called Digital Core (NZ) Limited. I have accepted the "12 Startups in 12 Months" challenge so that DigitalCore will have income-generating products by April 2024. This blog will follow the "12 Startups" project during its design, development, and deployment, cover the Agile principles and the DevOps philosophy that is used by the "12 Startups" project, and delve into the world of AI, machine learning, deep learning, prompt engineering, and large language models. I hope you enjoyed this post and, if you did, I encourage you to explore some others I've written. And remember: The best technologies bring people together.