Kubernetes 101: Part 4

We know that the kube-apiserver has been managing everything from the master node so far, and we need to make sure it's secure.

Basically, anyone using the kube-apiserver needs to think about two questions:

Who can access the cluster? That is Authentication. What can they do once inside? That is Authorization.

Authentication

Our Kubernetes cluster is accessed by admins, developers, end users, bots, etc.

But whom do we give what access? Let's focus on the administrative processes first.

Here, users are the admins and developers, while service accounts are used by bots and other automated processes.

Kubernetes can't create or manage users natively, but it can create service accounts.

Let’s focus on Users

So, how do they access the cluster?

All requests go to the kube-apiserver; it authenticates the user first and then processes the request.

But how do they authenticate?

It can authenticate using the mechanisms below.

Static Password File:

First of all, there should be a CSV file consisting of password, username, and user ID. Then the file path is passed to kube-apiserver.service (--basic-auth-file=user-details.csv). Note that basic auth was deprecated and removed in Kubernetes v1.19, so this is mainly useful for understanding the basics.
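
A minimal sketch of user-details.csv (the users and passwords are made up):

password123,user1,u0001
password123,user2,u0002
password123,user3,u0003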

We can also authenticate this way using a curl command.
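
For example, assuming the API server is reachable at master-node-ip:6443 (a placeholder address):

curl -v -k https://master-node-ip:6443/api/v1/pods -u "user1:password123"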

Static token file:

You can create a CSV file with tokens, usernames, user IDs, and optionally a group name.

Then pass this file name to kube-apiserver.service (add --token-auth-file=<csv file>).
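
A sketch of the token file and how a client would use it (the token values are made up):

KpjCVbI7cFAHYPkByTIzRb7gu1cUc4B,user10,u0010,group1
rJjncHmvtXHc6MlWQddhtvNyyhgTdxSC,user11,u0011,group2

The request then carries the token as a bearer token:

curl -v -k https://master-node-ip:6443/api/v1/pods --header "Authorization: Bearer KpjCVbI7cFAHYPkByTIzRb7gu1cUc4B"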

TLS Certificates

Now, we will create certificates for our cluster. Let’s use openssl today

First we generate the CA's private key, then create a certificate signing request, and then self-sign it using that key.
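
A sketch with openssl (the CN is an example):

openssl genrsa -out ca.key 2048
openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr
openssl x509 -req -in ca.csr -signkey ca.key -out ca.crt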

Now we have ca.key (the CA's private key) and ca.crt (the CA's certificate).

Note: CN (Common Name) is the identity Kubernetes reads from a certificate, so it's the name we need to keep in mind.

Then we generate a key and certificate for the admin user. This certificate is signed using ca.key.

We can distinguish this account from regular users by adding it to the system:masters group and signing a certificate that carries that group.
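
A sketch of those steps (the group goes into the O field of the subject):

openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -subj "/CN=kube-admin/O=system:masters" -out admin.csr
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt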

Then we do the same for the kube-scheduler, which is a system component, so its CN is prefixed with system: (e.g. system:kube-scheduler).

Then the controller-manager, also part of the system and prefixed the same way.

Finally, kube-proxy.

So, by now we have created these client certificates (the ones marked with green ticks in the course diagram).

The other two will be generated while we create the server certificates. So, how do we use the ones we have?

For example, for the admin:

We can use curl, or a kubeconfig file, and pass the certificates, keys, and CA certificate to get administrative privileges.
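
For example, with curl (the server address is a placeholder):

curl https://kube-apiserver:6443/api/v1/pods --key admin.key --cert admin.crt --cacert ca.crt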

Note: keep in mind that every party needs a copy of ca.crt to validate the certificates presented to it.

Let’s create server certificates

We first create the etcd server's certificate. For high availability, etcd can run as multiple peers in different zones, and we create certificates for them as well (etcdpeer1.crt, etcdpeer2.crt).

Then we need to reference those in the etcd.yaml file (options like --cert-file, --key-file, --peer-cert-file, and so on).

The kube-apiserver is addressed by various aliases/names: kubernetes, kubernetes.default, kubernetes.default.svc, the cluster IP of the kubernetes service, the host IP of the node running it, etc.

To generate the certificate for the kube-apiserver, we first create a key and then a signing request.

But the kube-apiserver has all those alternate names users may call it by. We specify all of them in an openssl config file as Subject Alternative Names.

Instead of passing the subject on the command line alone, we reference that config file when creating the signing request, and then generate the certificate.
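
A sketch of the config file and commands (the IPs and file names are placeholders for your cluster's values):

# openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.96.0.1
IP.2 = 172.17.0.87

openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -config openssl.cnf -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -extensions v3_req -extfile openssl.cnf -out apiserver.crt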

Also, keep in mind the API client certificates that the API server uses when it communicates as a client to the etcd and kubelet servers.

Then we add all of these paths to the kube-apiserver configuration.
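
A sketch of the relevant kube-apiserver flags (the paths are examples):

--client-ca-file=/var/lib/kubernetes/ca.crt
--tls-cert-file=/var/lib/kubernetes/apiserver.crt
--tls-private-key-file=/var/lib/kubernetes/apiserver.key
--etcd-cafile=/var/lib/kubernetes/ca.crt
--etcd-certfile=/var/lib/kubernetes/apiserver-etcd-client.crt
--etcd-keyfile=/var/lib/kubernetes/apiserver-etcd-client.key
--kubelet-certificate-authority=/var/lib/kubernetes/ca.crt
--kubelet-client-certificate=/var/lib/kubernetes/apiserver-kubelet-client.crt
--kubelet-client-key=/var/lib/kubernetes/apiserver-kubelet-client.key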

Then we need kubelet server certificates, one per node (node01, node02, node03), since the kubelets on the nodes communicate with the kube-apiserver.

Then we configure each node with its certificate paths. Here is how it could be done for node01.
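
A sketch of the kubelet config for node01 (the paths are examples):

# kubelet-config.yaml on node01
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.crt"
tlsCertFile: "/var/lib/kubelet/kubelet-node01.crt"
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet-node01.key"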

Also, each kubelet gets a client certificate to communicate with the kube-apiserver, named in the format system:node:node01 and placed in the group system:nodes.

Again, once done, we set the paths of these certificates in the node's kubeconfig file.

Debugging Certificate Details

Assume you have issues with your certificates and need to troubleshoot them. There are two ways a cluster's certificates may have been set up: manually ("the hard way"), or with kubeadm, which generates them and deploys the control-plane components as pods from YAML manifests.

Assuming our cluster was provisioned with kubeadm, let's build a spreadsheet listing each certificate: its path, CN, alternate names, organization, issuer, and expiration.

Now, we can gather the certificate file list from the component manifests and then manually inspect each file.
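
For a kubeadm cluster, the paths can be read from the static pod manifests (a hedged example):

cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep crt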

For example, let's check the apiserver.crt file using:

openssl x509 -in <certificate file path> -text -noout

Here we can see the Issuer, the expiration date, the Subject Alternative Names, and more.

In this way, we can fill in the spreadsheet and find issues like a wrong issuer, an expired certificate, etc.

Then we can solve it manually.

Also, we can check logs to find issues: with journalctl when the components run as system services, with kubectl when they run as pods, or with docker directly when the kube-apiserver itself is down.
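
Hedged examples of each:

# the hard way (systemd services)
journalctl -u etcd.service -l

# kubeadm setup (components run as static pods)
kubectl logs etcd-master

# if the kube-apiserver is down, go to the container runtime directly
docker ps -a
docker logs <container-id>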

Certificates API

Now assume you are the only admin who has access to all of the certificates

and here comes a new admin. She creates her own private key and sends you a request to sign a certificate. You have to log into the CA server to sign it.

Once done, she has her private key and a valid certificate to access the cluster.

Her certificate has an expiry period, and every time it expires she has to send a new signing request to keep her access.

Remember, we created a key and certificate pair for the CA. Once we keep it on a secured server, that server becomes our CA server.

To sign new certificates, we need to log into the CA server.

In Kubernetes, the master node keeps the CA certificate and key pair, so it acts as the CA server.

By now, we have seen how to manually sign certificates for a new admin, logging into the CA server every time and repeating the whole process whenever a certificate expires. How do we automate it?

We can automate it using the Certificates API. Now the request is saved in the cluster as an object, and an admin can review and approve it.

Finally, the signed certificate can be extracted and shared with the user.

How to do it?

The new admin first creates a key

Then she generates a certificate signing request using that key, with her name in it (here, jane), and sends it to the administrator, who base64-encodes it.
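
A sketch of those steps:

openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -subj "/CN=jane" -out jane.csr
cat jane.csr | base64 -w 0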

The administrator takes the encoded request and creates a CertificateSigningRequest object. It looks like any other Kubernetes object.
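
A sketch (the request field holds the base64 output from the previous step):

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: jane
spec:
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
  request: <base64-encoded jane.csr>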

Once the object is ready, the administrator can list all pending signing requests and approve them.
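
For example:

kubectl get csr
kubectl certificate approve jane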

One can view the certificate using kubectl get csr <certificate request> -o yaml

You can also base64-decode the certificate in the object's status field to inspect the actual certificate.

Keep in mind that the Controller Manager usually does all of this for us.

How to verify? We have seen that whoever signs certificates needs the CA certificate and private key, and kube-controller-manager has both of them (via its --cluster-signing-cert-file and --cluster-signing-key-file options).

Hands-on:

We can check the CSR file created by akshay. The CSR is generated from akshay's private key, but it carries only his public key and identity; the private key itself never leaves his machine.

Then we, as the administrator, base64-encode it and paste it into a certificate object definition.

We have created a YAML file and pasted the encoded request into it.

Let’s create the object using

kubectl apply -f akshay-csr.yaml

We can see the status of the signing request

Let's approve it.
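
Assuming the CSR object is named akshay:

kubectl get csr
kubectl certificate approve akshay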

Assume that we have a new CSR named agent-smith.

Let's check its details and encoded request. It has requested access to the system:masters group.

So, we will reject it.
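
kubectl certificate deny agent-smith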

We can also remove the csr object using

kubectl delete csr agent-smith

Kube-config

Assume a cluster called my-kube-playground. A user sends a request to the kube-apiserver while passing admin.key, the admin certificate, and the CA certificate.

Then the API server validates them and grants access.
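
A sketch of that request with curl:

curl https://my-kube-playground:6443/api/v1/pods --key admin.key --cert admin.crt --cacert ca.crt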

The response comes back with an empty list of items, though, since nothing is deployed yet.

We can do the same using kubectl.
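
Passing the same pieces as flags:

kubectl get pods \
  --server https://my-kube-playground:6443 \
  --client-key admin.key \
  --client-certificate admin.crt \
  --certificate-authority ca.crt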

But adding all of this information on every call is tedious. So we use a kubeconfig file and keep the server, client key, client certificate, and certificate-authority details there.

Then, while using kubectl, we can just point at it with --kubeconfig, and here we go!
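
kubectl get pods --kubeconfig config

In fact, kubectl looks at $HOME/.kube/config by default, so if the file lives there you don't need the flag at all.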

Let’s explore more about the config file

It generally has three sections: the existing clusters, the users which use those clusters, and contexts. A context ties a user to a cluster, defining which user gets access to which cluster.

So, for our example, the flags we were passing map into the file: --server and --certificate-authority go under the cluster entry, while the client certificate and client key go under the user entry.

The config file itself is in YAML format, so in our case it might look like the sketch below.
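
A sketch (the names follow the earlier example):

apiVersion: v1
kind: Config
current-context: my-kube-admin@my-kube-playground
clusters:
- name: my-kube-playground
  cluster:
    certificate-authority: ca.crt
    server: https://my-kube-playground:6443
contexts:
- name: my-kube-admin@my-kube-playground
  context:
    cluster: my-kube-playground
    user: my-kube-admin
users:
- name: my-kube-admin
  user:
    client-certificate: admin.crt
    client-key: admin.key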

We can do it for all users, all clusters and contexts

Now, to view the file, use
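
kubectl config view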

We can also change the default context (here we set the context to prod-user@….).
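
Assuming the full context name is prod-user@production:

kubectl config use-context prod-user@production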

Also, we have an option to land in a particular namespace when we choose a context; we just need to set the namespace within the context entry, as sketched below.
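
A sketch (the cluster, user, and namespace names are assumptions):

contexts:
- name: prod-user@production
  context:
    cluster: production
    user: prod-user
    namespace: finance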

API Groups

We have two kinds of API groups in Kubernetes: the core group (served at /api) and the named groups (served at /apis).

The core group has all of the core functionality: pods, namespaces, services, secrets, and so on.

The named groups are the more organized ones: apps, networking.k8s.io, certificates.k8s.io, etc.

If we drill deeper, you will see resources under each named group, and verbs under each resource.

For deployments, for example, we can see verbs like create, get, delete, etc.

There is a more convenient way to access the kube-apiserver, and that's by creating a proxy server.

You could pass all of the certificates along with every request to the kube-apiserver to get things done, but the proxy handles that for you.

First create the kubectl proxy server (an HTTP proxy launched by kubectl that uses your kubeconfig credentials to access the kube-apiserver) at port 8001:

kubectl proxy

Then check all of the api groups
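
curl http://localhost:8001

This returns the list of available API paths; curl http://localhost:8001/apis lists the named groups.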

Keep this hierarchy (groups, then resources, then verbs) in mind.

So, when we create a deployment, we set apiVersion: apps/v1 to reach the deployment resource under the apps group.

Authorization (What to access)

As an admin, we want to have full access.

But for the developers and other services (bots), surely we don't want to hand out edit access to everything.

Now, we know the master node has the kube-apiserver, and a request can come from an outside user or from a kubelet residing on a node.

How is the kubelet identified? We mentioned earlier that kubelet certificates should have names prefixed with system:node: and belong to the group system:nodes. The Node Authorizer checks that and allows the kubelet to communicate with the kube-apiserver.

What about external users?

To give access, we can create a policy file and grant access in this format; this is ABAC (Attribute Based Access Control). The file is then passed to the kube-apiserver configuration.
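
A sketch of one ABAC policy line (the file holds one JSON object per line; the user name is an example):

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "dev-user", "namespace": "*", "resource": "pods", "apiGroup": "*"}}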

But every new user means more lines in the JSON file, and the kube-apiserver must be restarted after each change.

So it becomes tougher to manage.

Instead we use RBAC (Role based access control)

Just create a role with all the access needed and associate the user with that role.

Webhook

If we want to use a 3rd-party tool like Open Policy Agent, then when a user asks for access to the kube API, the agent tells us whether we should give them access or not.

In the course diagram, the kube-apiserver passes the request to Open Policy Agent to verify, and it answers yes.

Always Allow mode

By default, the authorization mode is set to AlwaysAllow.

But one can specify which modes to use via the --authorization-mode flag on the kube-apiserver.
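
For example:

--authorization-mode=Node,RBAC,Webhook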

But why so many modes?

Because the request is tried against each mode in order: if one denies the request, it moves on to the next mode; as soon as one approves, authorization stops. In the example, the Node authorizer denied the user's request and passed it to RBAC, which verified that the user actually has the role, and access was provided.

Role Based Access Controls (RBAC)

A role like "developer" might look like the sketch after the next paragraph.

The role name is "developer". Here we give access to pods, letting developers list, get, create, update, and delete them.
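
A sketch of developer-role.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get", "create", "update", "delete"]

kubectl create -f developer-role.yaml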

Then we link the user to the role with a RoleBinding.

Here, subjects has the user details and roleRef has the details of the role.

Then create the binding
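
A sketch of devuser-developer-binding.yaml and its creation:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devuser-developer-binding
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io

kubectl create -f devuser-developer-binding.yaml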

Note: here we are working entirely in the default namespace.

You can verify the roles, role bindings, and more.
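
kubectl get roles
kubectl get rolebindings
kubectl describe role developer
kubectl describe rolebinding devuser-developer-binding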

You can also check whether you have access to do things like creating deployments or deleting nodes,

and you can check it on behalf of another user.

Here, the dev-user does not have access to create deployments but can create pods, as sketched below.
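
kubectl auth can-i create deployments
kubectl auth can-i delete nodes
kubectl auth can-i create deployments --as dev-user   # no
kubectl auth can-i create pods --as dev-user          # yes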

Cluster roles

In general, Roles and RoleBindings are namespaced: they are created in, and apply to, the default namespace unless you specify otherwise.

We can keep resources within different namespaces, but we can't put nodes in a namespace; nodes are part of the cluster itself.

There are both namespaced and cluster-scoped resources; check them using these commands.
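
kubectl api-resources --namespaced=true
kubectl api-resources --namespaced=false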

But we can also give access to cluster-scoped resources, through a ClusterRole.

Here is an example of creating a cluster administrator role whose resources are nodes, sketched below.
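
A sketch of cluster-admin-role.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-administrator
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list", "get", "create", "delete"]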

Note: nodes are part of the cluster and vary from cluster to cluster.

Then we bind the role to the user with a ClusterRoleBinding.

Then apply it
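
A sketch of the binding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-role-binding
subjects:
- kind: User
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-administrator
  apiGroup: rbac.authorization.k8s.io

kubectl create -f cluster-admin-role-binding.yaml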

Service accounts

There are two types of accounts: user accounts, used by humans, and service accounts, used by applications to interact with the cluster.

For example, assume we have a dashboard page which loads the list of all pods.

To do that, we need a service account, which gets authenticated and contacts the kube-apiserver to fetch the pod list and publish it on the page.

To create and check the service account, use these
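
kubectl create serviceaccount dashboard-sa
kubectl get serviceaccount
kubectl describe serviceaccount dashboard-sa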

Also, when a service account is created (in versions before v1.24), a token is automatically created with it. So first an object named "dashboard-sa" gets created, and then the token.

It then creates a secret object and stores the token inside it.

The secret object is then linked to the service account. So, if we describe the service account, we can see the secret associated with it, and describing the secret reveals the token itself.
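
A sketch (the secret's name suffix is generated, so yours will differ):

kubectl describe serviceaccount dashboard-sa
kubectl describe secret dashboard-sa-token-kbbdm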

We can then use this token as a bearer token in the Authorization header when contacting the Kubernetes API.
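
For example (the server address is a placeholder and <token> is the value from the secret):

curl https://kube-apiserver:6443/api -k --header "Authorization: Bearer <token>"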

For our custom dashboard page, we can just paste the token into its token field.

Again, there is a default service account created for every namespace.

Its token is automatically mounted into every pod as a volume. How do we know that?

For example, assume we have created a pod from a YAML file where no volume or mount was mentioned.

Now, if we describe this pod once it's created, we can see:

Here you can see that a volume was created from a secret called default-token-…, which is the secret containing the token for the default service account. This secret is mounted at /var/run/secrets/kubernetes.io/serviceaccount inside the pod.

So, if we list the files at this location, we can see "token".

We can also view the content of the token, which is what actually authenticates the pod to the kube-apiserver for all of its API calls.
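
A sketch (the pod name is an example):

kubectl exec -it my-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
kubectl exec -it my-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token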

Now, assume that we want to change the serviceAccountName for the pod.

Here we will choose dashboard-sa, as we have already created it:
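
A sketch (the pod and image names are examples):

apiVersion: v1
kind: Pod
metadata:
  name: my-kubernetes-dashboard
spec:
  containers:
  - name: my-kubernetes-dashboard
    image: my-kubernetes-dashboard
  serviceAccountName: dashboard-sa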

Keep in mind that you can't change the service account of a running pod; you have to delete and recreate the pod. If the pod is part of a deployment, though, editing the pod spec works, because the deployment automatically rolls out new pods with the change.

Once done, you can see a different secretName (dashboard-sa-……..) used instead of the default service account's (default-………).

There is also an option to opt a pod out of automatic service account token mounting.

We just need to set automountServiceAccountToken to false in the pod spec.
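
A sketch of the relevant part of the spec:

spec:
  containers:
  - name: my-kubernetes-dashboard
    image: my-kubernetes-dashboard
  automountServiceAccountToken: false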

Now, let's look at the problem that motivated the changes in v1.24: if we paste the old secret-based token we got for the default secret into a JWT decoder (for example jwt.io), you can see that there is no expiry date in the payload, which is risky.

So, a new API called the TokenRequest API was created.

A token generated by the TokenRequest API is audience-bound, time-bound, and object-bound, making it more secure.

So, now if a pod is created, we can see the difference.

There is no volume created from the default service account's token secret anymore. Where we previously had a token with no expiry date, we now have a token generated through the TokenRequest API by the ServiceAccount admission controller when the pod is created.

The token is mounted as a projected volume now.

In v1.24, once you create a service account, no token gets created automatically, whereas earlier, whenever we created a service account, a token was created right away.

Now, you have to manually create a token for the service account you created.
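
Assuming the dashboard-sa service account from earlier:

kubectl create token dashboard-sa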

If you copy-paste this token into the JWT website, you can see an expiry time for the token.

And if you still want a non-expiring, secret-based token for your service account, create a Secret of type kubernetes.io/service-account-token and add the service account name within

annotations:

kubernetes.io/service-account.name:

After that, Kubernetes generates a token for that service account and stores it in the secret.
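
A sketch of the full secret (mysecretname is a made-up name):

apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: mysecretname
  annotations:
    kubernetes.io/service-account.name: dashboard-sa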

But this is not recommended, as the TokenRequest API does that for you and creates a time-bound token.

Image security

When we use image: nginx,

it is interpreted as library/nginx, where library is the default account for official images on Docker Hub.

By default, the registry is docker.io, so the full name is docker.io/library/nginx.

To use a private registry,

log in to the registry and then run your image from it.
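
A sketch of how that looks with plain docker (the registry and image names are examples):

docker login private-registry.io
docker run private-registry.io/apps/internal-app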

But how do we get Kubernetes to do that login for us?

Basically, we first create a secret object (regcred) with the credentials in it. The secret is of type docker-registry, a built-in secret type designed for storing docker credentials. Here, regcred is the name of the secret.
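
A sketch (the registry, user, password, and email are examples):

kubectl create secret docker-registry regcred \
  --docker-server=private-registry.io \
  --docker-username=registry-user \
  --docker-password=registry-password \
  --docker-email=registry-user@org.com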

We then reference the secret under the imagePullSecrets section of the pod spec.
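
A sketch:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: private-registry.io/apps/internal-app
  imagePullSecrets:
  - name: regcred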

Security Contexts

You can assign security settings to a pod or to individual containers.

Basically, containers live within the pod, so if we apply settings at the pod level, every container automatically inherits them; container-level settings override pod-level ones.

Here is a securityContext added at the pod level; runAsUser is the user ID the container processes run as.
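
A sketch:

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep", "3600"]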

Again, if we want to apply this to a single container only, the securityContext moves under that container. Note that capabilities are supported only at the container level, not at the pod level.
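
A sketch:

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 1000
      capabilities:
        add: ["MAC_ADMIN"]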

Network Policies

Assume that we have a web server serving the front end to users, an API server serving backend APIs, and a database.

The user sends requests to the web server at port 80, the web server calls the API server at port 5000, and the API server queries the database at port 3306.

Then each of them responds back.

So, here we have two types of traffic:

1) Ingress: for the web server, incoming traffic from the user is ingress traffic. For the backend API server, requests from the web server are ingress. For the database server, incoming requests from the backend API server are ingress.

2) Egress: for the web server, requests to the API server are egress traffic. For the backend API server, requests to the database server are egress.

So, as a whole:

We had ingress rules at ports 80, 5000, and 3306, and egress rules at ports 5000 and 3306.

Now, let’s discuss the main topic.

In a cluster, we have lots of nodes, and each node carries pods; pods have containers in them. (In the course diagrams, the triangle, circle, etc. represent pods.)

Each pod has its own private IP, and Kubernetes networking requires that every pod can reach every other pod.

By default, the rule is "Allow All" for pod-to-pod traffic, and there are services to facilitate the communication.

Now, back to our earlier example: we described it in terms of servers. Let's map those servers onto pods instead.

So each of those servers now runs as a pod, and by default every pod can contact every other pod.

What if we don't want the web pod to communicate with the database pod?

To do that, we need network policies to be set.

It’s another object kept in the namespace. We need to link that to one or more pods.

Then define the rules for the network policy

So, we want to allow ingress traffic from the API pod but don’t want to allow any ingress traffic from the web pod.

But how we select the DB pod for the network policy?

By using labels and selector.

We need to use the label we set on the DB pod (for example, role: db) and put it under the podSelector section, as sketched below.
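
A sketch (assuming the DB pod carries the label role: db):

podSelector:
  matchLabels:
    role: db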

Now, the rule:

We want to allow ingress traffic to port 3306 from the API pod, right?

So, we use the API pod's label (name: api-pod) to allow ingress traffic from it, and open port 3306 to receive the requests.

So, what does the whole YAML file look like?
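
A sketch of db-policy.yaml (the labels follow the assumptions above):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: api-pod
    ports:
    - protocol: TCP
      port: 3306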

Note: here we have restricted only Ingress traffic to this database pod (policyTypes: - Ingress), i.e. requests coming from the API pod and the web pod. The responses going back to the API pod as a result of its requests are allowed automatically, so they are unaffected.

Then apply the policy
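
Using the file name from the sketch above:

kubectl create -f db-policy.yaml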

We need to keep in mind that we must pick a network solution that supports network policies (for example Calico, Cilium, Kube-router, or Weave Net); Flannel, notably, does not support them.

Developing Network policies

Assume that we don't want to let any pod access the DB pod, except the API pod on port 3306.

By default, Kubernetes allows all traffic from all pods to all destinations. So, to achieve our goal, we need to block all connections going in and out of the DB pod.

Here we selected the DB pod by using its label (role: db, the one we set on the DB pod) under matchLabels.

But have we allowed the API pod to contact the DB pod yet? No! Let's specify that under Ingress; this way, the API pod can reach the DB pod.

Finally, we have to specify the labels selecting the API pod, and the port on the DB pod where the API pod will send traffic.

Assume that we have 3 API pods in 3 different namespaces. The YAML we have so far would give access to all of those API pods, which we don't want.

We only want the pod from the prod namespace to connect to the DB pod. So we need to make sure the namespace itself carries a label; assuming it's name: prod, we add a namespaceSelector, as sketched below.
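
A sketch of the updated ingress rule (the namespace label name: prod is an assumption):

ingress:
- from:
  - podSelector:
      matchLabels:
        name: api-pod
    namespaceSelector:
      matchLabels:
        name: prod
  ports:
  - protocol: TCP
    port: 3306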

Also, in case we need to allow traffic from a backup server that lives outside our cluster, we can allow its IP with an ipBlock rule.
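
A sketch of the extra entry in the from list (the address is an example):

- ipBlock:
    cidr: 192.168.5.10/32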

Now, assume that instead of traffic coming in from the backup server, we want the DB pod to send traffic out to the backup server; for that we add an Egress rule.
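
A sketch (again, the address and port are examples):

policyTypes:
- Ingress
- Egress
egress:
- to:
  - ipBlock:
      cidr: 192.168.5.10/32
  ports:
  - protocol: TCP
    port: 80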
