Getting Started with Falco Security Tool on GKE


As you can probably tell from a lot of my previous posts, I’ve been having a lot of fun with Kubernetes. I’m currently trying my hand at the Certified Kubernetes Security Specialist, known as the most challenging of the Kubernetes exams, which is why I’m going to do a quick-start write-up on installing and using Falco on GKE.
What is Falco?
Falco is a cloud native security tool that provides runtime security across hosts, containers, Kubernetes, and cloud environments. It is designed to detect and alert on abnormal behaviour and potential security threats in real-time.
It’s essentially a real-time monitoring tool that alerts on preconfigured rules as well as custom rules configured by us administrators.
Falco deploys with some preconfigured rules that check the Linux kernel for unusual behaviour, including, to name a few:
Privilege escalation using privileged containers
Executing shell binaries such as sh, bash, csh, zsh, etc.
Executing SSH binaries such as ssh, scp, sftp, etc.
Reads/writes to well-known directories such as /etc, /usr/bin, /usr/sbin, etc.
This is a brief overview. You can find more info on what Falco is and why you’d use it at the Falco docs site.
Installing Falco on GKE
Falco is fairly easy to get started with: you can install it on a VM, a compute instance or a Kubernetes cluster. You’ll find a quick-start style tutorial on the Falco Getting Started page.
I’ve gone for a GKE cluster. I had one already up and running, so I’ve opted to install Falco on the cluster using Helm. You can use a Linux VM or any other type of cloud managed Kubernetes provider.
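If you don’t already have a cluster, a small zonal one is enough to follow along. Here’s a minimal sketch with gcloud, assuming the same cluster name and zone that show up in my log queries later on:
# Create a small zonal GKE cluster (names/sizes here are just for illustration)
gcloud container clusters create cks-cluster \
  --zone europe-west2-b \
  --num-nodes 2
# Point kubectl at the new cluster
gcloud container clusters get-credentials cks-cluster --zone europe-west2-b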
kubectl create ns falco
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco \
-n falco \
--set tty=true \
--set driver.kind=ebpf \
falcosecurity/falco
I’ve set --set driver.kind=ebpf because I’m running on a GKE cluster and I’m not able to load a kernel module on the GKE nodes.
The Helm chart will then deploy Falco as a DaemonSet, meaning that a pod running Falco will run on each node of the cluster.
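You can confirm this once the chart has finished installing; there should be one Falco pod running per node:
# One Falco pod per node, scheduled by the DaemonSet
kubectl -n falco get pods -o wide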
Testing Alerting
Let’s try out Falco and see the pre-configured rules in action!
Let’s spin up a pod and have it do something to trigger a rule.
kubectl run test --image nginx
Let’s have the test pod do something “unusual”.
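In my case the “unusual” activity was simply exec’ing into the pod with an attached terminal and listing a couple of directories, along these lines:
kubectl exec -it test -- sh -c "ls -la"
kubectl exec -it test -- sh -c "ls -la /root"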
We can check the logs on the Falco pods.
kubectl -n falco logs -c falco -l app.kubernetes.io/name=falco
I’ve cut off the top of the logs so you can see the logs from the commands I just ran:
There’s a lot of info there, but what we’re looking for is the timestamp, the pod that was alerted on, and what that pod did to trigger the alert.
10:14:56.930581844: Notice A shell was spawned in a container with an attached terminal (evt_type=execve user=root user_uid=0 user_loginuid=-1 process=sh proc_exepath=/usr/bin/dash parent=containerd-shim command=sh -c ls -la terminal=34816 exe_flags=EXE_WRITABLE|EXE_LOWER_LAYER container_id=9f03a0d2a1ed container_image=docker.io/library/nginx container_image_tag=latest container_name=test k8s_ns=default k8s_pod_name=test)
10:15:10.218050374: Notice A shell was spawned in a container with an attached terminal (evt_type=execve user=root user_uid=0 user_loginuid=-1 process=sh proc_exepath=/usr/bin/dash parent=containerd-shim command=sh -c ls -la /root terminal=34816 exe_flags=EXE_WRITABLE|EXE_LOWER_LAYER container_id=9f03a0d2a1ed container_image=docker.io/library/nginx container_image_tag=latest container_name=test k8s_ns=default k8s_pod_name=test)
For example, in the logs above, the very last part tells us the container and pod:
The start is the message “Notice A shell was spawned in a container with an attached terminal“ (We’ll use this later!).
But look into the log message more, and we can see what actually triggered the alert:
Essentially, the container “test” in the pod named “test” ran “ls -la” and, in another instance, “ls -la /root”.
That’s not ideal! It could signify a bad actor trying to work their way around a pod and potentially gaining access to the underlying node and our wider infrastructure. So it’s a good thing we have a rule triggering on this event!
Logging and alerting on Falco in Google Cloud Monitoring
Now, with this running in GKE, wouldn’t it be nice not to have to remember to go trawling through the logs every now and then just to know what’s happening in our cluster? The answer is yes!
Google Cloud comes with a very comprehensive suite of logging and monitoring tools that we can use to alert on the content of a log message. GKE integrates with Google Cloud Logging out of the box, so we should definitely make use of it. Let’s take a look at how.
Falco logs are output to stdout, so with the logging capabilities of GKE they appear in Google Cloud Logging. From the “Workloads” page of the GKE cluster, I can choose “Falco” and look at the logs produced by the pods:
What’s neat is that you can fine-tune your Logging query to find exactly the logs you’re interested in by clicking “View in Logs Explorer”.
Now, in Logs Explorer, we can see the Logging query language (LQL) query and make some changes to find the log content we’re interested in:
resource.type="k8s_container"
resource.labels.project_id="gcp-project-id"
resource.labels.location="europe-west2-b"
resource.labels.cluster_name="cks-cluster"
resource.labels.namespace_name="falco"
labels.k8s-pod/app_kubernetes_io/instance="falco"
labels.k8s-pod/app_kubernetes_io/name="falco" severity>=DEFAULT
textPayload:"Notice A shell was spawned in a container with an attached terminal"
In the last hour, we can see the logs (plus another one I fired off!) that Falco alerted on from shell sessions spawned in pods on the cluster. Filtering on the textPayload “Notice A shell was spawned in a container with an attached terminal” cuts the noise right down.
Much easier to find what we’re looking for!
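If you’d rather stay in the terminal, roughly the same search can be run with the gcloud CLI. This is just a sketch of an equivalent query, not something from the console walkthrough above:
# Pull the last hour of Falco shell-spawn alerts from Cloud Logging
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.namespace_name="falco" AND textPayload:"Notice A shell was spawned in a container with an attached terminal"' \
  --freshness=1h \
  --limit=10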
With this log query, we can also create a log-based alert, super simple! Click the “Actions” button and then “Create log alert”.
Give the alert a name; it will grab our log query, and we set the frequency and the notification channel.
Let’s fire off some more kubectl exec commands and create some alerts……
That was quick!
Let’s look at the actual incident that’s been created.
In the incident, you can view the log/s that caused the alert and you can pop out to the Google Logs Explorer page again.
And the email alert got sent, so if I wasn’t staring at the monitoring dashboards, I certainly know about it now!
The alert also doesn’t have to be an email. Google Cloud Monitoring has a choice of notification channels: it could be Google Chat, PagerDuty for the really important alerts, or Pub/Sub for something completely different.
As a former oncall SRE, all I ask is that you alert and wake up engineers for something serious that they need to be present for!
Creating custom rules
Now, let’s say we have some scenarios which are not covered by the rules that ship with Falco. That’s where custom rules come in: we can create our own rules for Falco to alert on.
The default Falco configuration will load rules from /etc/falco/falco_rules.yaml, /etc/falco/falco_rules.local.yaml and /etc/falco/rules.d.
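If you want to see those files for yourself, you can exec into one of the Falco pods (using the same falco container name we used when checking the logs earlier):
# List the rules files shipped inside a running Falco pod
kubectl -n falco exec daemonset/falco -c falco -- ls -l /etc/falco/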
My current deployment of Falco (and yours, if you’re following along…) was done via Helm, so it’s a case of creating a custom rules YAML file and updating the Helm deployment.
I borrowed this from the Falco quick start to get going with some minor changes:
customRules:
  custom-rules.yaml: |-
    - rule: Write into etc
      desc: An attempt to write to the /etc directory
      condition: >
        (evt.type in (open,openat,openat2) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0)
        and fd.name startswith /etc
      output: "Stop what your doing and look at this!! File below /etc opened for writing (file=%fd.name pcmdline=%proc.pcmdline gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4] evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline terminal=%proc.tty %container.info)"
      priority: WARNING
      tags: [filesystem, mitre_persistence]
As a brief overview of the rule we’re creating here: the condition matches any open, openat or openat2 syscall that opens a file for writing (evt.is_open_write equalling true) where the file name starts with /etc.
The output section then determines what is logged, both in the Falco pod logs and, in this case, in Google Cloud Logging. You can use event fields in the output so the values are interpolated at alert time rather than hardcoded. The documentation is quite comprehensive.
Finally, tags. These are optional but handy for organising rules into categories. For example, you could tell Falco to skip all rules with a particular tag in a dev environment. Tags are a handy way of controlling the rules in use.
Now to update the Helm deployment to include the custom rules YAML file, keeping the same driver setting from the install so the upgrade doesn’t reset it.
helm upgrade --namespace falco falco falcosecurity/falco --set tty=true --set driver.kind=ebpf -f custom_rules.yaml
Wait for the pods to restart.
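You can watch the rollout and make sure the DaemonSet pods come back up cleanly:
# Wait for the Falco DaemonSet to finish rolling out the new config
kubectl -n falco rollout status daemonset/falco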
Let’s trigger an alert by exec’ing into the test pod and trying to write to /etc.
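The command I ran was along these lines (an exec with an attached terminal, touching a file under /etc):
kubectl exec -it test -- sh -c "touch /etc/test.txt"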
Checking the logs in the Falco pod or in Google Cloud Logs Explorer shows the attempt:
Screenshots are hard to read….. Here’s a copy-paste of the log that was output to Google Cloud Logging.
2025-05-12 11:14:47.152 BST
10:14:47.147029349: Warning Stop what your doing and look at this!! File below /etc opened for writing (file=/etc/test.txt pcmdline=sh -c touch /etc/test.txt gparent=containerd-shim ggparent=systemd gggparent=<NA> evt_type=openat user=root user_uid=0 user_loginuid=-1 process=touch proc_exepath=/usr/bin/touch parent=sh command=touch /etc/test.txt terminal=34817 container_id=71ee5d3c8123 container_image=docker.io/library/nginx container_image_tag=latest container_name=test k8s_ns=default k8s_pod_name=test)
You can see the custom message I added to the output: “Stop what your doing and look at this!!”
More info on creating custom rules here.
We’re just scratching the surface of making Falco rules work for us. There are more advanced things you can do that the documentation covers in more depth here, including how to override rules, add exceptions, write your own rules and so on.
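As one example of where you might go next, exceptions let you carve legitimate activity out of a rule. Here’s a minimal sketch of an exceptions block you could add to the Write into etc rule we defined in custom-rules.yaml; config-agent is a made-up process name used purely for illustration, so check the Falco rules docs for the exact syntax and behaviour:
      exceptions:
        # Hypothetical: ignore writes made by a trusted agent to its own path under /etc
        - name: trusted_config_agent
          fields: [proc.name, fd.name]
          comps: [=, startswith]
          values:
            - [config-agent, /etc/config-agent]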
Summary
That’s it for this quick intro to getting Falco up and running in a GKE cluster.
To summarise: we had a GKE cluster up and running and deployed Falco using Helm. Any Kubernetes cluster, or even a Linux VM, should be fine to use; your install/deployment method will vary.
Once it was running, we tested the default built-in rules by running a simple nginx pod and exec’ing shell commands that read from and wrote to sensitive directories.
We then checked the Falco logs for alerts and created a log-based alert in Google Cloud Monitoring to notify us when a rule had been triggered.
Finally, we looked at creating custom rules.
I hope you found this useful, feel free to comment with any of your findings or thoughts!