Hosting a Go application on a $4 VPS - Part 1
A few months ago there was a wave of posts on X/Twitter about running your applications on a cheap VPS instead of managed platforms/services like Fly.io, Render, Heroku, GCP etc. Although there were a lot of comments about how “easy” it is to run something on your own VPS and how everyone should know it, there were hardly any examples of people actually showing how to do it. Hence, I decided to give it a try myself and see how it turns out.
Platform chosen - Digital Ocean. Why? I just wanted to get started with something easy and convenient, and after searching Reddit and Google, Digital Ocean seemed like a decent option. The UI/UX also makes it pretty simple to get up and running.
Droplet - 512 MB Memory / 10 GB Disk / Ubuntu 24.04 (LTS) x64, 500 GB outbound transfer, SGP1 (Singapore) datacenter, costs about $4/month.
Additional options - enable IPv6 and the droplet metrics agent; both are free.
Based on the above, here was my initial plan -
1. Set up the system first - SSH, required packages, a non-root user to run things, and check default network accessibility.
2. Run a simple Go app with a GET endpoint at localhost:8080/api/trivia. Test it out locally and over the internet.
3. Set up a domain name, serve the API over it, and configure a firewall to restrict access to the droplet.
4. Set up resource limits and alerts.
Initial setup - the goal is to test our API over the internet with minimal steps.
User setup and installations
Set up SSH access for the root user during the droplet creation process. SSH in as root and create a sudo user with SSH access; this should be the primary user for accessing the server from here on and for running the application. Ref - user setup
SSH in with the newly created user and try out some commands.
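For reference, a rough sketch of that flow (the username deploy is just an example of mine; the referenced user-setup guide covers the details):

```bash
# On the droplet, as root: create a sudo user and hand it your SSH key
adduser deploy
usermod -aG sudo deploy
mkdir -p /home/deploy/.ssh
cp ~/.ssh/authorized_keys /home/deploy/.ssh/
chown -R deploy:deploy /home/deploy/.ssh
chmod 700 /home/deploy/.ssh && chmod 600 /home/deploy/.ssh/authorized_keys

# From your own machine: log in as the new user and check that sudo works
ssh deploy@<your_droplet_ip>
sudo apt update && sudo apt upgrade -y
```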
Install Go and create a simple application with a GET endpoint.
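Go can be installed via apt (sudo apt install golang-go) or from the official downloads on go.dev. A minimal version of the app might look like the sketch below - the trivia content and the localhost-only bind are my assumptions; the important part is that it serves a GET endpoint on port 8080, which nginx will proxy to in the next step.

```go
// main.go - a minimal GET /api/trivia endpoint (sketch)
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/trivia", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodGet {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{
			"trivia": "Honey never spoils.",
		})
	})

	// Bind to localhost only - nginx will proxy public traffic to this port.
	log.Println("listening on 127.0.0.1:8080")
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", mux))
}
```

Run it with go run main.go and check it locally with curl http://localhost:8080/api/trivia.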
Serving the API over the internet
Install nginx to expose the endpoint over the internet. For now we will access it directly via the droplet’s public IP (don’t share the public IP with anyone). Later, once we have a domain registered, we will access the API through that instead.
sudo apt install nginx -y
Create an nginx configuration for serving your API at /etc/nginx/sites-available/goapp.conf (there should already be a default file in this folder):
```nginx
server {
    listen 80;                      # Nginx listens on port 80
    server_name <your_droplet_ip>;  # Use your droplet's public IP address

    location / {
        # Proxy requests to the Go app running on port 8080
        proxy_pass http://localhost:8080/api/trivia;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Set up a symlink so nginx picks up this config - sudo ln -s /etc/nginx/sites-available/goapp.conf /etc/nginx/sites-enabled/
Verify the config with sudo nginx -t and reload nginx with sudo systemctl reload nginx.
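Assuming the Go app from earlier is running on port 8080, you can sanity-check the proxying from the droplet itself before exposing anything further:

```bash
# Use the droplet's IP as the host so the goapp.conf server block matches;
# nginx should forward the request to the Go app's /api/trivia endpoint
curl -i http://<your_droplet_ip>/
```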
Logging utility options - go to /etc/nginx/nginx.conf and update the default logging settings in the http block:
```nginx
##
# Logging Settings
##
log_format custom_headers_log '[$time_local] "$request" '
                              'Host: "$host", '
                              'Origin: "$http_origin", '
                              'Referer: "$http_referer", '
                              'User-Agent: "$http_user_agent", ';
access_log /var/log/nginx/access.log custom_headers_log;
```
You can check the access and error logs via sudo tail -f /var/log/nginx/access.log
and sudo tail -f /var/log/nginx/error.log
Firewall setup
We'll use ufw to restrict network access to our droplet. By default, ufw denies all incoming traffic and allows all outgoing traffic. For now, we need to allow HTTP traffic on port 80 so nginx can serve our API over the internet.
sudo ufw allow OpenSSH - important, this keeps SSH open through the firewall on the default port 22.
sudo ufw allow in 80 - allow incoming HTTP traffic.
sudo ufw enable
sudo ufw status verbose - check the detailed rules.
With this setup we should be able to access our API at http://<your_droplet_public_ip>/ (the location / block rewrites requests at the root to the /api/trivia endpoint of the Go app).
Making things practical
Ideally we would want to host and access our application/API over a domain name. We would also want to enforce stricter access policies on the server and use a secure protocol like HTTPS for communication over the internet.
Domain -
I went ahead and bought the domain noyap.foo via Cloudflare, since I intended to use Cloudflare Pages anyway to host the FE piece and wanted to keep things simple and in a single place. This makes Cloudflare both my domain registrar and my authoritative DNS provider, which Cloudflare terms a “full setup”. It turned out to be a good decision, as a full setup unlocks a lot of additional features in Cloudflare - proxying requests, caching, secure tunnel setup, etc.
The setup might look slightly different if you have purchased your domain from some other provider.
Once you have the domain we need to link it to our application/server so that Cloudflare can route requests to it over the internet.
In the DNS Records section of Cloudflare, create 2 records -
‘A’ record for your root domain -

| Type | Name | Content | TTL | Proxy status |
| --- | --- | --- | --- | --- |
| A | @ | <your_droplet_public_ip> | Auto | Proxied |

CNAME record for the "www" subdomain -

| Type | Name | Content | TTL | Proxy status |
| --- | --- | --- | --- | --- |
| CNAME | www | <your_domain>, e.g. noyap.foo | Auto | Proxied |
Once you save these records and they have propagated (it can take a little while), you can check DNS resolution with commands like dig or nslookup.
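For example (since both records are proxied, these return Cloudflare edge IPs rather than your droplet's public IP):

```bash
dig +short <your_domain>
dig +short www.<your_domain>
nslookup <your_domain>
```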
Networking setup
Once we have the domain, we also want to use a secure protocol to relay traffic between the user and our app. The interesting piece is that Cloudflare sits in the middle as a proxy, routing the user's traffic to our origin server, so it lets us choose between different SSL modes. In some cases it might be okay to connect over plain HTTP between Cloudflare and your origin server; I have chosen Full (Strict) mode for learning purposes. You can read more here - https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/#custom-ssltls
Since Cloudflare provides free Origin CA certificates, I went with those to keep things simple. You can create one in the SSL/TLS section of your Cloudflare dashboard (ref - https://developers.cloudflare.com/ssl/origin-configuration/origin-ca/). You then have to copy and store the certificate and its private key on your server under the /etc/ssl folder.
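The exact file names are up to you; the paths below are simply the ones referenced in the nginx config that follows. Paste the certificate and private key shown in the Cloudflare dashboard into these files and lock down the key:

```bash
# Paste the Origin CA certificate and its private key from the Cloudflare dashboard
sudo nano /etc/ssl/certs/cloudflare.pem
sudo nano /etc/ssl/private/cloudflare.key

# The private key should only be readable by root
sudo chmod 600 /etc/ssl/private/cloudflare.key
```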
Nginx configuration to only accept HTTPS traffic -
```nginx
server {
    listen 443 ssl;
    server_name <domain> www.<domain>;

    ssl_certificate     /etc/ssl/certs/cloudflare.pem;
    ssl_certificate_key /etc/ssl/private/cloudflare.key;

    location / {
        proxy_pass http://localhost:8080/api/trivia;  # Your Go app's address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Also, we want to block any access to our application via the bare IP and redirect plain HTTP calls to HTTPS, for which we can add the server blocks below.
```nginx
# Drop any request made directly against the droplet's IP (HTTP or HTTPS)
server {
    listen 80;
    listen 443 ssl;
    server_name <your_droplet_ip>;  # Public IP
    ssl_certificate     /etc/ssl/certs/cloudflare.pem;
    ssl_certificate_key /etc/ssl/private/cloudflare.key;
    return 444;                     # No response for direct IP access
}

# Redirect plain HTTP for the domain to HTTPS (anything else that lands here gets dropped)
server {
    listen 80;
    server_name <domain> www.<domain>;

    if ($host = www.<domain>) {
        return 301 https://$host$request_uri;
    }
    if ($host = <domain>) {
        return 301 https://$host$request_uri;
    }

    return 444;
}
```
Run sudo nginx -t
and sudo systemctl restart nginx
for these to take effect.
We also need to update our ufw config - deny incoming HTTP traffic on port 80 and allow HTTPS on port 443 instead. First run sudo ufw status numbered, then sudo ufw delete <rule_number_for_allow_in_port_80> to remove the incoming HTTP rule. Then allow port 443, but only from Cloudflare's published IP ranges (https://www.cloudflare.com/ips/), with a script along these lines:
```bash
#!/bin/bash

CLOUDFLARE_IPV4=(
    "173.245.48.0/20"
    "103.21.244.0/22"
    "103.22.200.0/22"
    "103.31.4.0/22"
    "141.101.64.0/18"
    "108.162.192.0/18"
    "190.93.240.0/20"
    "188.114.96.0/20"
    "197.234.240.0/22"
    "198.41.128.0/17"
    "162.158.0.0/15"
    "104.16.0.0/13"
    "104.24.0.0/14"
    "172.64.0.0/13"
    "131.0.72.0/22"
)

CLOUDFLARE_IPV6=(
    "2400:cb00::/32"
    "2606:4700::/32"
    "2803:f800::/32"
    "2405:b500::/32"
    "2405:8100::/32"
    "2a06:98c0::/29"
    "2c0f:f248::/32"
)

# Allow HTTPS (port 443) only from Cloudflare IPs
for ip in "${CLOUDFLARE_IPV4[@]}"; do
    sudo ufw allow from "$ip" to any port 443
done

for ip6 in "${CLOUDFLARE_IPV6[@]}"; do
    sudo ufw allow from "$ip6" to any port 443
done

sudo ufw reload
```
This should set up access to our application over HTTPS, routed securely via Cloudflare to our Digital Ocean droplet and served by the nginx server running on it. You can test it by hitting https://<your_domain> or https://www.<your_domain> in the browser or by making a curl request:
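A quick sanity check (the timeout on the last call is just so curl doesn't hang forever while the firewall silently drops the traffic):

```bash
# Should return the trivia response over HTTPS, via Cloudflare
curl -i https://<your_domain>/
curl -i https://www.<your_domain>/

# Should fail - ufw only allows 443 from Cloudflare's ranges, and nginx
# answers any direct-IP request that does get through with 444
curl -i --max-time 10 http://<your_droplet_public_ip>/
```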
Additional - setting up resource alerts
It's probably also a good idea to set up resource usage alerts for your droplet. It's pretty easy to do in the Monitoring section of the Digital Ocean dashboard. Another option is to set up Uptime checks for your application, which Digital Ocean also offers. Usage and charges can be tracked in the Billing section, which also shows the outbound data transfer amount. For a $4 droplet the allocation is 500 GB/month, beyond which transfer is charged at a defined rate, so it's something to keep an eye on.
That's it for this post. If you have read till here, consider reading the 2nd part (to be published later), which will cover deploying a React application with Cloudflare Pages and tying it to the API hosted on the droplet. You can check out the final output here - https://noyap.foo. Why the name? Because it's tiring to come across meaningless yapping on public forums. Good choice? Idk.