OpenLDAP Multi-Master Replication is for high availability, not load balancing. If a split-brain is possible, consider the mirror mode architecture described in the OpenLDAP Administrator's Guide. A split-brain occurs when two or more nodes of a cluster operate independently, which can cause the cluster data to become corrupt or fall out of sync. If you have a few nodes in the same data center on the same subnet, this is unlikely. Nodes on different subnets or in different data centers are more likely to split-brain.
Tasks
To set up multi-master replication, you must perform the following tasks:
- Verify NTP is functioning on each node.
- Decide which databases are going to be replicated.
- Determine the need for TLS.
- If using TLS, obtain certificate authority (CA) certificates.
- Verify replicated databases’ root DN passwords.
- Verify database operating system (OS) directories.
- Add server IDs to the configuration directory.
- If required, load the replication module.
- Apply the replication overlay.
- Configure the replication providers.
- Load the configuration to the other servers.
- Test the cluster.
Configuration Process
The process of configuring the nodes depends on whether you are replicating the configuration database and whether the nodes are going to have any differences.
If you are replicating the configuration database, configure one node, back up its configuration directory, and restore the backup to the other nodes. See my backup and restore guide if you need help.
If the nodes are going to be different, you will need to use ldapadd and ldapmodify to make the changes to each node individually.
At the end of the guide, I have written instructions for handling the two scenarios: replicating the configuration directory and replicating only data directories. Both scenarios require configuring one node to completion. Once you reach that point, follow ONLY the relevant subsection.
Verify NTP
Ensure each cluster node has a functioning NTP implementation. If you have never done this, most Linux distributions have a working default NTP configuration; in most cases, all you have to do is install the relevant package and start the daemon. Consult your operating system's (OS) documentation for help. Once you have NTP set up, use the date command to verify the clocks are synchronized.
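For example, assuming two nodes named ldap1 and ldap2 (substitute your own host names), a quick check from any machine with SSH access might look like the following; the timestamps printed should agree to within a second or so:
for host in ldap1.tylersguides.com ldap2.tylersguides.com; do
    ssh "$host" date -u    # print each node's clock in UTC for easy comparison
done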
Choosing Databases to Replicate
If you used my guides for installation, or if all of your nodes have the same OS and you installed from the repository, you should be able to replicate the configuration directory. If your nodes have different OSs and you installed from the repositories, replicating the configuration directory could be tricky. You will probably need to specify multiple module paths. If your environment requires TLS, you will probably have to specify your allowed ciphers individually instead of using the convenient groups most libraries provide. If you decide not to replicate the configuration database, you will still be able to replicate the rest of your databases, but you will have to make configuration changes to each node instead of just one.
If you are using an access log database, the Administrator’s Guide recommends against replicating it. I agree with the documentation that it generally shouldn’t be done. I have a system at my day job where it is being replicated; it is a small system where the convenience outweighs the disadvantages.
TLS Certificate Authority Certificates
If you are using TLS to secure the connections between your servers, you will need to supply each node with the CA certificates used to sign the server certificates. You can take one of two approaches: the first is to put all CA certificates in the file or directory referenced by olcTLSCACertificateFile or olcTLSCACertificatePath; the second is to use the tls_cacert property of olcSyncrepl. The examples I give use the first method. If you aren't going to use TLS, skip the rest of this section.
If you are using an internal CA, ask your CA administrator for the CA certificate in PEM format. If you haven't already, add this certificate to each node. I use /pki/cacerts.pem to store internal CA certificates and self-signed certificates. If you are following the examples, append it to the aforementioned file.
If you are using self-signed certificates, add each certificate to all nodes. If you are following the examples, /pki/cacerts.pem on all nodes should have the server certificates for every node in your cluster.
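As a minimal sketch, assuming you are following the examples and have a CA certificate named internal-ca.pem (the file name is only illustrative), you could append it to the bundle and point the server at it like this:
cat internal-ca.pem >> /pki/cacerts.pem    # append the CA certificate to the bundle
ldapmodify -Y EXTERNAL -Q -H ldapi:/// <<EOF
dn: cn=config
changetype: modify
replace: olcTLSCACertificateFile
olcTLSCACertificateFile: /pki/cacerts.pem
EOF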
Verify Root DN Passwords
Verify the root DN password of each database you are replicating. If you need to set or reset any of them, use slappasswd to generate a password hash, then use ldapmodify to replace or add the olcRootPW attribute. The following command outputs a list of databases, their root DNs, and whether they have passwords.
[root@ldap back-meta]# slapcat -n 0 -a '(olcRootDN=*)' | egrep '^dn:|olcRootDN|olcRootPW'
dn: olcDatabase={0}config,cn=config
olcRootDN: cn=config
dn: olcDatabase={1}mdb,cn=config
olcRootDN: cn=admin,dc=tylersguides,dc=com
olcRootPW:: e1NTSEE1MTJ9SEhBVVI3WHBYL3M4d2RQdHdTUDE1R0Z2dHlyVDR2SEZvV2Y3Vk5z
Verify you can bind to the server with each root DN.
Working password:
[root@ldap libexec]# ldapsearch -x -W -H ldapi:/// -D "cn=admin,dc=tylersguides,dc=com"
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 32 No such object
# numResponses: 1
Bad password:
[root@ldap libexec]# ldapsearch -x -W -H ldapi:/// -D "cn=admin,dc=tylersguides,dc=com"
Enter LDAP Password:
ldap_bind: Invalid credentials (49)
As you probably noticed, there is no root DN password set for the configuration directory. Let’s set one:
[root@ldap libexec]# slappasswd -h '{SSHA512}' -o module-load=pw-sha2.la -o module-path=/opt/openldap-current/libexec/openldap
New password:
Re-enter new password:
{SSHA512}ZmCZs1SC7s4oKFqIDMY65Y6FOZHlVhc12TApeInlTd165H+FyA6Q9t4m+74UTYWBx5djAleE/g093FA41y3lfHVF/qgwkSnH
[root@ldap libexec]# ldapmodify -Y EXTERNAL -H ldapi:///
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA512}ZmCZs1SC7s4oKFqIDMY65Y6FOZHlVhc12TApeInlTd165H+FyA6Q9t4m+74UTYWBx5djAleE/g093FA41y3lfHVF/qgwkSnH
modifying entry "olcDatabase={0}config,cn=config"
[root@ldap libexec]#
Replace the path in the example with your module path. If you aren't sure what it is, the following table should help.
OS | PATH |
---|---|
CentOS 7 | /usr/lib64/openldap |
openSUSE | /usr/lib64/openldap |
Debian (Stretch) | /usr/lib/ldap |
FreeBSD | /usr/local/libexec/openldap |
Source (Tyler’s Guides) | /opt/openldap-current/libexec/openldap |
Source (default) | /usr/local/libexec/openldap |
Verify OS Database Directories
Make sure the database directories on the OS exist. Ensure they are readable and writable by the OpenLDAP user. They are defined by the olcDbDirectory attribute of your database definition entries. The following command will output a list of them:
root@debian:~# slapcat -n 0 | grep olcDbDirectory
olcDbDirectory: /ldapdata
root@debian:~# ls -ld /ldapdata
drwxr-x--- 2 openldap openldap 4096 Sep 10 14:42 /ldapdata
root@debian:~#
If you are replicating the configuration database, this will be the same on each host in the cluster. If you aren’t, then you can set the olcDbDirectory to a location of your choosing on each host. I like to keep things simple and make them the same.
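If a directory is missing, create it and hand it to the OpenLDAP user; the user and group names below are assumptions (openldap on Debian, ldap on CentOS), so use whatever account your installation runs slapd as:
mkdir -p /ldapdata                    # create the database directory
chown openldap:openldap /ldapdata     # use ldap:ldap on CentOS, or your build's service account
chmod 750 /ldapdata                   # readable and writable by the OpenLDAP user only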
Add Server IDs
Each cluster node needs a server ID. The server ID is an integer assigned to each node along with its respective URI. There are two configuration changes you will need to make to assign the server IDs: the cn=config entry needs an olcServerID attribute for each node, and each node's slapd startup configuration needs to include the URI from its olcServerID value.
Use ldapmodify to add server IDs to your configuration entry. Add an olcServerID attribute for each server in your cluster. If you aren’t using TLS to secure your connections, use ldap:// instead of ldaps://. Here is an example LDIF:
dn: cn=config
changetype: modify
add: olcServerID
olcServerID: 1 ldaps://ldap1.tylersguides.com/
olcServerID: 2 ldaps://ldap2.tylersguides.com/
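Assuming you saved the LDIF above as server_ids.ldif (the file name is only an example), apply it with ldapmodify:
ldapmodify -Y EXTERNAL -Q -H ldapi:/// -f server_ids.ldif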
Modify Server Startup File
The OpenLDAP server will refuse to start if none of the server ID URLs is specified in its startup command. This is how each server in a cluster determines its server ID. Modify the startup file on each server so it includes the URL you assigned to that server's ID.
Here is an example startup configuration file for CentOS 7:
SLAPD_OPTIONS="-F /opt/openldap-current/etc/openldap/slapd.d"
SLAPD_URLS="ldapi:/// ldap:/// ldaps://ldap1.tylersguides.com/"
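For comparison, Debian (Stretch) uses SLAPD_SERVICES instead of SLAPD_URLS; a sketch for the second node, assuming the host names from the earlier examples, might look like this:
SLAPD_SERVICES="ldapi:/// ldap:/// ldaps://ldap2.tylersguides.com/"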
Here are the locations of startup configuration files on various systems:
OS | STARTUP FILE |
---|---|
CentOS 7 | /etc/sysconfig/slapd |
openSUSE | /etc/sysconfig/openldap |
Debian (Stretch) | /etc/default/slapd |
Source (Default) | N/A |
Source (Tyler’s Guides) CentOS 7 | /etc/sysconfig/slapd-current |
Source (Tyler’s Guides) Debian Stretch | /etc/default/slapd-current |
Source (Tyler’s Guides) openSUSE | /etc/sysconfig/slapd-current |
Load Synchronization Overlay Module
Skip this step if the synchronization overlay was compiled into slapd. If you built OpenLDAP from source and you did NOT pass --enable-overlays=mod or --enable-syncprov=mod, AND you did NOT pass --enable-modules to the configure script, skip this step. Most Linux distributions deviate from the default and ship the synchronization overlay as a module, and my installation guides build it as a module as well. Don't worry if you erroneously skip this step; the worst that can happen is an error message when you try to apply the overlay, in which case you can come back to this step and load the module. The following table should clarify whether you need to do this step or not. If you still aren't sure, check your OpenLDAP module directory for the presence of the file syncprov.la, as shown in the example after the table.
Installation Method | SKIP |
---|---|
FreeBSD Ports | YES |
FreeBSD Package | YES |
Source using Tyler’s Guides | NO |
Source | PROBABLY |
Linux Package Repository | NO |
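For example, on a CentOS 7 repository install you could check for the module file like this; substitute the module path for your system from the table further down:
ls /usr/lib64/openldap | grep syncprov    # if syncprov.la is listed, load it as a module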
See if you have any modules already loaded:
[root@ldap openldap]# slapcat -n 0 | grep olcModuleLoad
olcModuleLoad: {0}back_mdb.la
olcModuleLoad: {1}pw-sha2.la
[root@ldap openldap]#
If you have any output from the command above, use ldapmodify to load the module:
[root@ldap ~]# ldapmodify -Q -Y EXTERNAL -H ldapi:///
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: syncprov.la
modifying entry "cn=module{0},cn=config"
Otherwise, use ldapadd. Replace the module path with the one relevant to your environment. If you aren't sure where your modules are located, consult the table below.
[root@ldap ~]# ldapadd -Y EXTERNAL -Q -H ldapi:///
dn: cn=module,cn=config
cn: module
objectClass: olcModuleList
olcModulePath: /opt/openldap-current/libexec/openldap
olcModuleLoad: syncprov.la
adding new entry "cn=module,cn=config"
OS | PATH |
---|---|
CentOS 7 | /usr/lib64/openldap |
openSUSE | /usr/lib64/openldap |
Debian (Stretch) | /usr/lib/ldap |
Source (Default) | /usr/local/libexec/openldap |
Source (Tyler’s Guides) | /opt/openldap-current/libexec/openldap |
Apply The Replication Overlay
The synchronization overlay must be applied on each database definition entry. Without it, OpenLDAP won’t recognize the replication related attributes.
The default configuration should work fine in most cases. Consult the slapo-syncprov man page for an explanation of the configuration options. The man page uses the configuration file option names, so I have included a table with configuration directory equivalents.
Configuration File | Configuration Directory |
---|---|
syncprov-checkpoint | olcSpCheckpoint |
syncprov-sessionlog | olcSpSessionlog |
syncprov-nopresent | olcSpNoPresent |
syncprov-reloadhint | olcSpReloadHint |
The following LDIF will apply the overlay to the configuration directory and a single mdb back end. You may need to replace the database names with those relevant to your system.
dn: olcOverlay=syncprov,olcDatabase={0}config,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov

dn: olcOverlay=syncprov,olcDatabase={1}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
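If you do want to override the defaults, the olcSp* attributes from the table can be added directly to the overlay entries. A minimal sketch with illustrative values sets a checkpoint every 100 operations or 10 minutes and a 100-operation session log:
dn: olcOverlay=syncprov,olcDatabase={1}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcSpCheckpoint: 100 10
olcSpSessionlog: 100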
Once you have created an LDIF for your overlays, add it to your configuration database with ldapadd:
ldapadd -Y EXTERNAL -Q -H ldapi:/// -f sync_overlay.ldif
Configure The Databases For Replication
Now add the replication attributes to your database entries. See the table below the example for an explanation of the parameters. There are more parameters available than what you see in the example. See the slapd-config man page for more information.
Example LDIF:
dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: rid=001
 provider=ldaps://ldap1.tylersguides.com
 binddn="cn=config"
 bindmethod=simple
 credentials=password
 searchbase="cn=config"
 type=refreshAndPersist
 retry="5 5 300 +"
 timeout=1
olcSyncrepl: rid=002
 provider=ldaps://ldap2.tylersguides.com
 binddn="cn=config"
 bindmethod=simple
 credentials=password
 searchbase="cn=config"
 type=refreshAndPersist
 retry="5 5 300 +"
 timeout=1
-
add: olcMirrorMode
olcMirrorMode: TRUE

dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: rid=003
 provider=ldaps://ldap1.tylersguides.com
 binddn="cn=admin,dc=tylersguides,dc=com"
 bindmethod=simple
 credentials=password
 searchbase="dc=tylersguides,dc=com"
 type=refreshAndPersist
 retry="5 5 300 +"
 timeout=1
olcSyncrepl: rid=004
 provider=ldaps://ldap2.tylersguides.com
 binddn="cn=admin,dc=tylersguides,dc=com"
 bindmethod=simple
 credentials=password
 searchbase="dc=tylersguides,dc=com"
 type=refreshAndPersist
 retry="5 5 300 +"
 timeout=1
-
add: olcMirrorMode
olcMirrorMode: TRUE
Parameter | Explanation |
---|---|
rid | Uniquely identifies the replication consumer. |
provider | The URI of the server being replicated. |
bindmethod | The authentication method. If desired, SASL can be used. |
binddn | The DN the consumer uses to log in to the provider. |
credentials | The password the bind DN uses to log in to the provider. |
searchbase | The part of the DIT that should be replicated. This is usually the olcSuffix of the back end being replicated. |
type | The method used to query the provider for changes. refreshAndPersist keeps a session open and waits for changes. refreshOnly periodically queries the master for changes. |
retry | How long to wait before retrying when an error occurs. It has 4 fields: the first retry interval in seconds, the number of retries at that interval, the second retry interval, and the number of retries at that interval. A + in a retry count field means to keep trying indefinitely. In the examples, the servers will retry every 5 seconds; after 5 tries, they will retry every 300 seconds until replication succeeds without any errors. |
timeout | How long the consumer should wait for the master to respond when logging in to check for changes. |
olcMirrorMode | This is what enables the Multi-Master behavior. Without it, changes to the replicated directories will be refused. If you are replicating the configuration directory, DO NOT FORGET THIS! |
Once you are satisfied with your replication configuration, apply the LDIF:
ldapmodify -Y EXTERNAL -H ldapi:/// -Q -f replication_config.ldif
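Optionally, you can confirm that the attributes landed by dumping the configuration database; you should see one olcSyncrepl value per rid plus olcMirrorMode: TRUE on each replicated database:
slapcat -n 0 | grep -E 'olcSyncrepl|olcMirrorMode'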
Now you are done configuring the first server.
Configuring The Rest of the Servers
Follow ONLY the subsection relevant to your situation.
Replicated Configuration Database
If you are replicating the configuration database, use slapcat to create a backup of the configuration directory. Then copy this file to the other nodes in your cluster and restore it with slapadd. If you need help, see my backup and restoration guide.
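If you want a concrete starting point, here is a minimal sketch; the paths, service name, and ldap user and group are assumptions that vary by OS and installation, so adjust them to match your systems (or follow the backup and restoration guide instead):
slapcat -n 0 -l config_backup.ldif              # on the configured node: dump the configuration database

systemctl stop slapd                            # on each remaining node, after copying config_backup.ldif over
rm -rf /etc/openldap/slapd.d/*                  # clear the old configuration directory
slapadd -n 0 -F /etc/openldap/slapd.d -l config_backup.ldif
chown -R ldap:ldap /etc/openldap/slapd.d        # give the files back to the OpenLDAP user
systemctl start slapd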
NOT Replicated Configuration Database
Follow the same steps you used to configure the first node on the rest of the nodes. If you are using different OSs, different database directories, or installed OpenLDAP differently, keep these differences in mind.
Restart Servers
Restart each server. If any of them fail to start, use slapcat to verify the olcServerID and TLS configuration matches your startup file and certificate paths, respectively.
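For example, on a node that refuses to start, you could compare the configured values against its startup file; the startup file path below is the CentOS 7 one from the table above:
slapcat -n 0 | grep -E 'olcServerID|olcTLSCACertificateFile|olcTLSCertificate'
cat /etc/sysconfig/slapd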
Test Your Cluster
Test your cluster by adding, changing, or removing an entry from a replicated database. See my guide on managing OpenLDAP for assistance.
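As a minimal sketch, assuming the example suffix and admin bind DN from earlier, you could add a throwaway entry on the first node and search for it on the second; the entry, file name, and host names are only illustrative:
test_entry.ldif:
dn: ou=replication-test,dc=tylersguides,dc=com
objectClass: organizationalUnit
ou: replication-test

ldapadd -x -W -H ldaps://ldap1.tylersguides.com -D "cn=admin,dc=tylersguides,dc=com" -f test_entry.ldif
ldapsearch -x -W -H ldaps://ldap2.tylersguides.com -D "cn=admin,dc=tylersguides,dc=com" -b "dc=tylersguides,dc=com" "(ou=replication-test)"
ldapdelete -x -W -H ldaps://ldap1.tylersguides.com -D "cn=admin,dc=tylersguides,dc=com" "ou=replication-test,dc=tylersguides,dc=com"
The search on the second node should return the new entry almost immediately; delete it once you are satisfied.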