Kubernetes CRL support with the front-proxy-client and Haproxy

Written by Alex Boonstra

Kubernetes clusters at OSSO are usually set up using a PKI infrastructure, from which we create client certificates for users as well. Since users can sometimes be a little bit careless, there is a growing need for CRL support for those certificates. At the time of writing there is no way in Kubernetes to check whether certificates have been revoked, either through a Certificate Revocation List (CRL) or the Online Certificate Status Protocol (OCSP). For infrastructures relying heavily on client certificates this is a problem whenever you want to revoke access.

One way to solve this is by reconfiguring RBAC rules, making sure the user (CN) and group (O) have no role bindings anymore. But for use cases where you want to keep using these user/group names, this won't work.
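
For completeness, the RBAC route would look roughly like this (the user and binding names here are hypothetical):

$ # Find bindings that reference the user or group.
$ kubectl get clusterrolebindings -o wide | grep alex
$ # Delete them so the CN/O no longer map to any role.
$ kubectl delete clusterrolebinding alex-cluster-admin
$ kubectl delete rolebinding alex-edit -n team-a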

CRL support using Haproxy

There are tools available that do support CRLs, and one of them is Haproxy, a tool we already use in our setups. Haproxy can terminate TLS traffic and verify the validity of client certificates against a ca-file and a crl-file. On verification failure the handshake is aborted.

Using the Kubernetes Aggregation Layer

Since we cannot reuse the client certificate, there is no way for the Kubernetes apiserver to know who we are once we forward the request, meaning we will get HTTP 401 responses. To work around this we can use the Kubernetes Aggregation Layer.

We let Haproxy extract the user (CN) and group (O) from the supplied client certificate, put these values in request headers (X-Remote-User/X-Remote-Group), and forward the request to the Kubernetes apiserver using the front-proxy-client client certificate. Kubernetes is then able to authenticate and authorize the request using RBAC.

The way this works is that Kubernetes will only trust the request headers (X-Remote-User/X-Remote-Group) when the request itself is authenticated with a client certificate signed by the Aggregation Layer CA. It verifies the front-proxy-client client certificate before trusting usernames in headers; the apiserver uses a client CA file and a list of allowed names for this.
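
To illustrate what Haproxy effectively does, the hypothetical curl call below authenticates to the apiserver with the front-proxy-client certificate and asserts an example user and group through the headers:

$ curl --cert /etc/kubernetes/pki/front-proxy-client.pem \
    --cacert /etc/kubernetes/pki/ca.crt \
    -H 'X-Remote-User: alex' -H 'X-Remote-Group: admins' \
    https://127.0.0.1:6443/api/v1/nodes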

The relevant files and snippets:

Certificates

/etc/kubernetes/pki/ca.crt

Holds the certificate authority chain used to verify that the client certificate is signed by one of these authorities.
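
A quick way to check a client certificate against this chain by hand (alex.crt being a hypothetical client certificate):

$ openssl verify -CAfile /etc/kubernetes/pki/ca.crt alex.crt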

/etc/kubernetes/pki/apiserver.pem

Holds the private key and certificate of the Kubernetes apiserver, signed by the CA defined in /etc/kubernetes/pki/ca.crt.

/etc/kubernetes/pki/crl.pem

File from which to load certificate revocation lists used to verify the client's certificate. Take note that you need to provide a certificate revocation list for every certificate in your certificate authority chain.
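
How these CRLs are produced depends on your PKI. With a plain OpenSSL CA it would look roughly like this (the CA configuration and CRL file names are assumptions):

$ # Revoke the certificate in the CA database and regenerate the CRL.
$ openssl ca -config intermediate-ca.cnf -revoke alex.crt
$ openssl ca -config intermediate-ca.cnf -gencrl -out intermediate-ca.crl
$ # Concatenate one CRL per CA in the chain into the file Haproxy reads.
$ cat root-ca.crl intermediate-ca.crl > /etc/kubernetes/pki/crl.pem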

/etc/kubernetes/pki/front-proxy-client.pem

Holds the private key and certificate of the front-proxy-client, signed by the CA defined in /etc/kubernetes/pki/front-proxy-ca.crt. Required to get the Kubernetes apiserver to trust headers in requests forwarded by Haproxy.
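
If your PKI hands out separate key and certificate files, the combined .pem files used here can be created by simple concatenation:

$ cat apiserver.crt apiserver.key > /etc/kubernetes/pki/apiserver.pem
$ cat front-proxy-client.crt front-proxy-client.key > /etc/kubernetes/pki/front-proxy-client.pem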

kube-apiserver

Options adjusted/required:

--bind-address=127.0.0.1
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-allowed-names=front-proxy-client
--requestheader-username-headers=X-Remote-User
--requestheader-group-headers=X-Remote-Group

We bind the Kubernetes apiserver to 127.0.0.1 to prevent clients from bypassing the authenticating proxy and, in turn, the CRL verification.

requestheader-allowed-names limits the accepted CN to the value front-proxy-client; certificates with other CNs are not allowed to set these headers.
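
On a kubeadm-based cluster (an assumption here, given the /etc/kubernetes/pki paths) these options would end up in the static pod manifest at /etc/kubernetes/manifests/kube-apiserver.yaml, roughly like this (surrounding manifest omitted):

spec:
  containers:
  - command:
    - kube-apiserver
    - --bind-address=127.0.0.1
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-username-headers=X-Remote-User
    - --requestheader-group-headers=X-Remote-Group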

haproxy

/etc/haproxy/haproxy.cfg

listen api-in
    bind 10.66.66.66:6443 ssl crt /etc/kubernetes/pki/apiserver.pem verify optional ca-file /etc/kubernetes/pki/ca-int.crt ca-verify-file /etc/kubernetes/pki/ca.crt crl-file /etc/kubernetes/pki/crl.pem
    mode http
    http-request set-header X-Remote-User %{+Q}[ssl_c_s_dn(CN)]
    http-request set-header X-Remote-Group %{+Q}[ssl_c_s_dn(O)]
    default-server check ssl verify required ca-file /etc/kubernetes/pki/ca.crt crt /etc/kubernetes/pki/front-proxy-client.pem
    server 127.0.0.1 127.0.0.1:6443

This is the main entry point for all Kubernetes apiserver traffic. Not just clients, but also components like kubelet and kube-proxy will use this endpoint to access the API.

We run the Kubernetes apiserver on 127.0.0.1:6443 and Haproxy on its own host IP (10.66.66.66), also on port 6443. This way Haproxy is a drop-in replacement for API access.
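
Note that Haproxy only reads the crl-file when it starts, so after publishing a new CRL the running instance needs a reload to pick it up:

$ systemctl reload haproxy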

Results

Using a revoked certificate

$ kubectl get nodes
Unable to connect to the server: remote error: tls: revoked certificate

Using a valid certificate

$ kubectl get nodes
NAME                        STATUS   ROLES   AGE   VERSION
node1.jedi.dostno.systems   Ready    zl      44d   v1.24.8

Known backdoor(s)

This prevents almost all ways to connect directly to the apiserver from, let's say, a Kubernetes pod. All traffic should end up at Haproxy running on the control plane nodes.

One exception you can pinpoint right away is spawning a pod with host networking, which gives the pod the network access of the host itself. To prevent this you could implement some policies; Kyverno is quite a useful tool for this, with ready-made security policies available. A sketch of such a policy is shown below.
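
This sketch is modeled after Kyverno's published disallow-host-namespaces policy; the policy name and failure action are assumptions to adjust to your own needs:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-network
spec:
  validationFailureAction: Enforce
  rules:
  - name: host-network
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Host networking is not allowed."
      pattern:
        spec:
          # If hostNetwork is set at all, it must be false.
          =(hostNetwork): "false"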

