
Deploy a CVMFS Stratum 1


Let's assume we plan to replicate http://cvmfs.example.com/cvmfs/repo.example.com.

warning

There is an issue with cgroups v2, a feature of recent Linux kernels.

The issue appears when a container image uses systemd as its init system.

When using a container image with systemd, /sys/fs/cgroup must be mounted in the container. However, the structure of this directory changed with cgroups v2.

Therefore, you MUST roll back to cgroups v1 until systemd can run with cgroups v2. To roll back, add systemd.unified_cgroup_hierarchy=0 to the kernel command-line parameters.
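
For example, on a GRUB-based node this can be done as follows (a sketch; the user@node prompt and the exact GRUB tooling depend on your distribution):

user@node:~
# RHEL-family: grubby edits the kernel command line for all installed kernels
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
# Debian-family: append the parameter to GRUB_CMDLINE_LINUX in /etc/default/grub,
# then regenerate the config with: sudo update-grub
sudo reboot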

Helm and Docker resources

The Helm resources are stored in the ClusterFactory Git repository.

The Dockerfile is described in the SquareFactory/cvmfs-server-docker Git repository.

A Docker image can be pulled with:

docker pull ghcr.io/squarefactory/cvmfs-server:latest

1. Deploy Namespace and AppProject

user@local:/ClusterFactory
kubectl apply -f argo/cvmfs/
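
This should create the cvmfs namespace and an Argo CD AppProject. You can verify both (assuming Argo CD lives in the argocd namespace and the project is named cvmfs):

user@local:/ClusterFactory
kubectl get namespace cvmfs
kubectl get appproject cvmfs -n argocd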

2. Persistent Volumes, Secrets and PVC

2.a. Write the CVMFS public key

Create a SealedSecret which contains the keys of the repositories:

  1. Create a keys-secret.yaml.local file:
argo/cvmfs/secrets/keys-secret.yaml.local
apiVersion: v1
kind: Secret
metadata:
  name: keys-secret
  namespace: cvmfs
type: Opaque
stringData:
  repo.example.com.pub: |
    -----BEGIN PUBLIC KEY-----
    ...
    -----END PUBLIC KEY-----
  2. Seal the secret:
user@local:/ClusterFactory
cfctl kubeseal
  3. Apply the SealedSecret:
user@local:/ClusterFactory
kubectl apply -f argo/cvmfs/secrets/keys-sealed-secret.yaml
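
Once applied, the sealed-secrets controller unseals the SealedSecret into a regular Secret with the same name and namespace; you can verify it with:

user@local:/ClusterFactory
kubectl get secret keys-secret -n cvmfs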

2.b. Deploy a PersistentVolume or StorageClass

While we could use NFS as persistent storage for the replica, let's deploy a local-path-provisioner instead.

Basically, local-path-provisioner creates the /opt/local-path-provisioner directory on the nodes and dynamically allocates volumes in that directory through a StorageClass.

To deploy the provisioner:

user@local:/ClusterFactory
kubectl apply -f argo/local-path-storage/apps/local-path-storage-app.yaml

The StorageClass local-path should be deployed.
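
You can confirm that the StorageClass is available with:

user@local:/ClusterFactory
kubectl get storageclass local-path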

3. Editing cvmfs-server-app.yaml to use the fork

Change the repoURL to the URL of your fork. Also add the values-production.yaml file to customize the values.

argo.example/cvmfs/apps/cvmfs-server-app.yaml > spec > source
source:
  # You should have forked this repo. Change the URL to your fork.
  repoURL: git@github.com:<your account>/ClusterFactory.git
  # You should use your branch too.
  targetRevision: HEAD
  path: helm/cvmfs-server
  helm:
    releaseName: cvmfs-server

    # Create a values file inside your fork and change the values.
    valueFiles:
      - values-production.yaml

4. Adding custom values to the chart

tip

Read the values.yaml to see all the default values.

4.a. Create the values file

Create the values file values-production.yaml inside the helm/cvmfs-server/ directory.

4.b. Select the nodes

Because we are using local-path, you should select the nodes hosting the volumes.

helm/cvmfs-server/values-production.yaml
nodeSelector:
  kubernetes.io/hostname: my-node
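
To find the value to put in kubernetes.io/hostname, list the nodes and their host name labels:

user@local:/ClusterFactory
kubectl get nodes -L kubernetes.io/hostname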

4.c. Mount the keys

helm/cvmfs-server/values-production.yaml
# ...
volumeMounts:
  - name: keys
    mountPath: /etc/cvmfs/keys/cvmfs.example.com/repo.example.com.pub
    subPath: repo.example.com.pub
    readOnly: true

volumes:
  - name: keys
    secret:
      secretName: keys-secret
      defaultMode: 256

state:
  storageClassName: 'local-path'

storage:
  storageClassName: 'local-path'
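
Note that defaultMode: 256 is the decimal equivalent of octal 0400 (read permission for the owner only); the field takes a decimal integer in the manifest.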

4.d. Add the replicas

helm/cvmfs-server/values-production.yaml
# ...
config:
  replicas:
    - name: repo.example.com
      url: http://cvmfs.example.com/cvmfs/repo.example.com
      keys: /etc/cvmfs/keys/cvmfs.example.com/repo.example.com.pub
      options: '-o root'

Make sure the -o root option is present to avoid a deadlock.

-o root indicates the owner of the repository.

The options field contains the arguments passed to cvmfs_server add-replica.
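
With the values above, the container's init logic would run something along these lines (a sketch of the resulting cvmfs_server call, not the chart's exact code):

cvmfs_server add-replica -o root \
  http://cvmfs.example.com/cvmfs/repo.example.com \
  /etc/cvmfs/keys/cvmfs.example.com/repo.example.com.pub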

4.e. (Optional) Expose the application to the external network

If you want to expose your stratum 1 server, add these fields to the values:

helm/cvmfs-server/values-production.yaml
# ...
ingress:
  enabled: true
  annotations:
    cert-manager.io/cluster-issuer: selfsigned-cluster-issuer
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: 'true'

  ingressClass: traefik

  hosts:
    - cvmfs.example.com

  tls:
    - secretName: cvmfs.example.com-secret
      hosts:
        - cvmfs.example.com

The service is already enabled.

In case you don't know how to use an Ingress with cert-manager and Traefik: use the traefik.ingress.kubernetes.io/router.entrypoints and traefik.ingress.kubernetes.io/router.tls annotations to indicate the entrypoint, and therefore the port, used by Traefik.

The cfctl.yaml indicates that the websecure entrypoint is port 443.

More about Traefik with Kubernetes Ingresses in their documentation.

Use the cert-manager.io/cluster-issuer annotation to indicate the certificate issuer, and specify the generated certificate secret name in the tls[].secretName field. cert-manager will automatically search for or generate the TLS certificates.
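
For reference, the selfsigned-cluster-issuer referenced by the annotation can be declared with a minimal manifest like this (assuming cert-manager is installed; your cluster may already ship one):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
spec:
  selfSigned: {}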

More about cert-manager in their documentation.

5. Deploy the app

Commit and push:

user@local:/ClusterFactory
git add .
git commit -m "Added CVMFS server"
git push

And deploy the Argo CD application:

user@local:/ClusterFactory
kubectl apply -f argo/cvmfs/apps/cvmfs-server-app.yaml

If the Ingress is enabled and configured, the CVMFS server should be available at the IP specified by MetalLB. Configure your DNS so the host name resolves to this IP.
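
To check that the Stratum 1 answers, fetch the repository manifest (the .cvmfspublished file); use --insecure if the certificate is self-signed, as with the issuer above:

user@local:/ClusterFactory
curl --insecure https://cvmfs.example.com/cvmfs/repo.example.com/.cvmfspublished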