Deploy an LDAP server
Helm and Docker resources
The Helm resources are stored in the ClusterFactory Git repository.
The Dockerfile is described in the git repository 389ds/dirsrv.
A Docker image can be pulled with:
docker pull docker.io/389ds/dirsrv:latest
1. Deploy Namespace and AppProject
kubectl apply -f argo/ldap/
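To check that the namespace and the AppProject were created, you can run the following (this assumes the manifests in argo/ldap/ create a Namespace named ldap and an Argo CD AppProject in the argocd namespace; adapt the names to your setup):
kubectl get namespace ldap
kubectl get appprojects -n argocd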
2. Secrets and Ingresses
2.a. Editing the environment variables with secrets
Take a look at the README of 389ds/dirsrv.
Some of the environment variables are sensitive:
DS_DM_PASSWORD: The password of the cn=Directory Manager user.
We must store these values inside a Secret.
- Create a -secret.yaml.local file:
apiVersion: v1
kind: Secret
metadata:
  name: 389ds-secret
  namespace: ldap
stringData:
  dm-password: <a password>
- Seal the secret:
cfctl kubeseal
- Apply the SealedSecret:
kubectl apply -f argo/ldap/secrets/389ds-sealed-secret.yaml
You can track 389ds-sealed-secret.yaml in Git, but not the -secret.yaml.local file.
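Once the sealed-secrets controller has decrypted the SealedSecret, a plain Secret should appear in the ldap namespace. A quick sanity check:
kubectl get secret -n ldap 389ds-secret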
2.b. Creating an IngressRouteTCP to expose the LDAP server
You can expose the LDAP server using a Traefik IngressRouteTCP.
Create an argo/ldap/ingresses/ingress-route-tcp.yaml file and add:
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: ldap-ingress-tcp
  namespace: ldap
  labels:
    app.kubernetes.io/name: ldap
    app.kubernetes.io/component: ingress-route-tcp
spec:
  entryPoints:
    - ldap
  routes:
    - match: HostSNI(`*`)
      services:
        - name: dirsrv-389ds
          port: 3389
---
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: ldaps
  namespace: ldap
  labels:
    app.kubernetes.io/name: ldaps
    app.kubernetes.io/component: ingress-route-tcp
spec:
  entryPoints:
    - ldaps
  routes:
    - match: HostSNI(`*`)
      services:
        - name: dirsrv-389ds
          namespace: ldap
          port: 3636
  tls:
    passthrough: true
You must open ports 636 and 389 on the Traefik load balancer by configuring the Traefik values.yaml:
ports:
  ldap:
    port: 1389
    expose: yes
    exposedPort: 389
    protocol: TCP
  ldaps:
    port: 1636
    expose: yes
    exposedPort: 636
    protocol: TCP
Apply:
./scripts/deploy-core
kubectl apply -f argo/ldap/ingresses/ingress-route-tcp.yaml
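To verify that the two entry points are exposed, you can inspect the Traefik LoadBalancer Service and check that ports 389 and 636 are listed (assuming Traefik runs in a namespace named traefik; adapt to your setup):
kubectl get svc -n traefik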
2.c. Creating a Certificate for LDAPS (TLS)
Create an argo/ldap/certificates/ldap.example.com-cert.yaml file and add:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ldap.example.com-cert
  namespace: ldap
spec:
  secretName: ldap.example.com-secret
  issuerRef:
    name: private-cluster-issuer
    kind: ClusterIssuer
  commonName: ldap.example.com
  subject:
    countries: [CH]
    localities: [Lonay]
    organizationalUnits: []
    organizations: [Example Org]
    postalCodes: ['1027']
    provinces: [Vaud]
    streetAddresses: [Chemin des Mouettes 1]
  duration: 8760h
  dnsNames:
    - ldap.example.com
    - dirsrv-389ds.ldap.svc.cluster.local
  privateKey:
    size: 4096
    algorithm: RSA
Do not use selfsigned-cluster-issuer, as self-signed certificates are not accepted by 389ds.
You want your LDAP server to be secure both inside and outside the cluster. Therefore, you need to add 2 DNS names:
- ldap.example.com, which is used to reach the Ingress Controller, which forwards to the LDAP service.
- dirsrv-389ds.ldap.svc.cluster.local, which is used to reach the LDAP service directly inside the cluster.
You should edit all of the fields of the certificate, especially the subject field.
Apply it:
./scripts/deploy-core
kubectl apply -f argo/ldap/certificates/ldap.example.com-cert.yaml
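cert-manager should then issue the certificate and store it in the ldap.example.com-secret Secret. You can check its status with:
kubectl get certificate -n ldap ldap.example.com-cert
kubectl get secret -n ldap ldap.example.com-secret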
3. Editing 389ds-app.yaml to use the fork
Change the repoURL to the URL used to pull the fork. Also add the values-production.yaml file to customize the values.
source:
  # You should have forked this repo. Change the URL to your fork.
  repoURL: git@github.com:<your account>/ClusterFactory.git
  # You should use your branch too.
  targetRevision: HEAD
  path: helm/389ds
  helm:
    releaseName: 389ds
    # Create a values file inside your fork and change the values.
    valueFiles:
      - values-production.yaml
4. Adding custom values to the chart
Read the values.yaml to see all the default values.
4.a. Create the values file
Create the values file values-production.yaml inside the helm/389ds/ directory.
4.b. Configure 389ds
tls:
  secretName: ldap.example.com-secret

config:
  dmPassword:
    secretName: '389ds-secret'
    key: 'dm-password'
  suffixName: 'dc=example,dc=com'

initChownData:
  enabled: true
Edit the suffixName according to your needs. This is the base DN in LDAP under which the organizational units will be stored. For example: ou=people,dc=example,dc=com.
4.c. Mount the volume
# ...
persistence:
  storageClassName: 'dynamic-nfs'
5. Deploy the app
Commit and push:
git add .
git commit -m "Added 389ds service"
git push
And deploy the Argo CD application:
kubectl apply -f argo/ldap/apps/389ds-app.yaml
If the Ingress is configured, the LDAP server should be available on the IP specified by MetalLB.
The deployment of 389ds might be slow. Watch the logs and look for INFO: 389-ds-container started., which indicates a successful deployment.
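For example, you can follow the logs of the pod until that line appears (the pod and container names below come from the StatefulSet deployed by the chart):
kubectl logs -f -n ldap dirsrv-389ds-0 -c dirsrv-389ds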
If the server is crashing, it may be caused by the permissions inside the NFS volume. Check the content of the NFS export: the owner should be 499:499.
6. Post-deployment settings
After deploying the LDAP server, the database is empty. Open a shell inside the container with kubectl exec:
kubectl exec -i -t -n ldap dirsrv-389ds-0 -c dirsrv-389ds -- sh -c "clear; (bash || ash || sh)"
You can also use Lens to open a shell inside the container.
To initialize the database, run:
dsconf localhost backend create --suffix dc=example,dc=com --be-name example_backend
dsidm localhost initialise
Adapt the suffix based on the suffixName in the values file. You can also change the backend name example_backend.
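As a quick sanity check, you can list the configured suffixes from inside the container; the suffix created above should appear (subcommand available in recent 389ds versions):
dsconf localhost backend suffix list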
Depending on your needs, you can enforce uniqueness on some attributes:
# Unique mail
dsconf localhost plugin attr-uniq add "mail attribute uniqueness" --attr-name mail --subtree "opu=people,dc=example,dc=com"
# Unique uid
dsconf localhost plugin attr-uniq add "uid attribute uniqueness" --attr-name uid --subtree "ou=people,dc=example,dc=com"
# Unique uid number
dsconf localhost plugin attr-uniq add "uidNumber attribute uniqueness" --attr-name uidNumber --subtree "dc=example,dc=com"
# Unique gid number
dsconf localhost plugin attr-uniq add "gidNumber attribute uniqueness" --attr-name gidNumber --subtree "ou=groups,dc=example,dc=com"
# Enable the plugins
dsconf localhost plugin attr-uniq enable "mail attribute uniqueness"
dsconf localhost plugin attr-uniq enable "uid attribute uniqueness"
dsconf localhost plugin attr-uniq enable "uidNumber attribute uniqueness"
dsconf localhost plugin attr-uniq enable "gidNumber attribute uniqueness"
You may also want automatic uid/gid number assignment:
dsconf localhost plugin dna config "UID and GID numbers" add \
--type gidNumber uidNumber \
--filter "(|(objectclass=posixAccount)(objectclass=posixGroup))" \
--scope dc=example,dc=com \
--next-value 1601 \
--magic-regen -1
dsconf localhost plugin dna enable
Change next-value to the desired starting uid/gid number. Select a magic value which indicates that a new value should be generated for the user.
Example:
dsidm -b "dc=example,dc=com" localhost user create \
--uid example-user \
--cn example-user \
--displayName example-user \
--homeDirectory "/dev/shm" \
--uidNumber -1 \
--gidNumber 1600
The created user will have 1601 as UID and 1600 as GID.
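To verify which numbers were actually assigned, you can display the entry with dsidm:
dsidm -b "dc=example,dc=com" localhost user get example-user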
If you want a separate DNA plugin configuration for uidNumber and gidNumber:
dsconf localhost plugin dna config "UID numbers" add \
--type uidNumber \
--filter "(objectclass=posixAccount)" \
--scope ou=people,dc=example,dc=com \
--next-value 1601 \
--magic-regen -1
dsconf localhost plugin dna config "GID numbers" add \
--type gidNumber \
--filter "(objectclass=posixGroup)" \
--scope ou=groups,dc=example,dc=com \
--next-value 1601 \
--magic-regen -1
dsconf localhost plugin dna enable
Full documentation about distributed numeric assignment is available in the 389ds documentation.
Restart the server after the changes:
kubectl delete pod -n ldap dirsrv-389ds-0
The database may have been destroyed because of the plugin changes. kubectl exec into the container and run again:
dsconf localhost backend create --suffix dc=example,dc=com --be-name example_backend
dsidm localhost initialise
Snippets
Add user:
dsidm -b "dc=example,dc=com" localhost user create \
--uid example-user \
--cn example-user \
--displayName example-user \
--homeDirectory "/dev/shm" \
--uidNumber -1 \
--gidNumber 1600
Create group:
dsidm -b "dc=example,dc=com" localhost group create \
--cn cluster-users
Add the posixGroup object class and a gidNumber:
dsidm -b "dc=example,dc=com" localhost group modify cluster-users \
"add:objectClass:posixGroup" \
"add:gidNumber:1600"
Add a user to the group:
dsidm -b "dc=example,dc=com" localhost group add_member \
cluster-users \
uid=example-user,ou=people,dc=example,dc=com
Add a public SSH key to a user:
dsidm -b "dc=example,dc=com" localhost user modify \
example-user add:nsSshPublicKey:"...."
Set a user password:
dsidm -b "dc=example,dc=com" localhost user modify \
example-user add:userPassword:"...."
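To check that the password works, you can bind as the user with ldapwhoami from a machine that can reach the server (the URI and DN below reuse the examples above; if the CA is private, point LDAPTLS_CACERT to your CA certificate):
ldapwhoami -H ldaps://ldap.example.com \
  -D "uid=example-user,ou=people,dc=example,dc=com" -W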
Add indexes:
dsconf localhost backend index add --index-type eq --attr uidNumber example_backend
dsconf localhost backend index add --index-type eq --attr gidNumber example_backend
dsconf localhost backend index add --index-type eq --attr nsSshPublicKey example_backend
dsconf localhost backend index reindex example_backend
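Finally, you can query the server from outside the cluster to confirm that the Ingress, the certificate, and the data are all in place (assuming ldap.example.com resolves to the MetalLB IP and your client trusts the private CA):
ldapsearch -H ldaps://ldap.example.com \
  -D "cn=Directory Manager" -W \
  -b "dc=example,dc=com" "(uid=example-user)"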