The purpose of this tutorial is to build your own highly available Unifi cluster: at the time of writing, this does not exist in the Ubiquiti roadmap. This is a real gap, because there are many situations where a highly available Unifi server matters: captive portal, Radius authentication, service continuity, etc. Since Unifi is based on the MongoDB database, we will set up this cluster manually via replication between two MongoDB instances.
The following tutorial is based on the "plon" tutorial, available here: https://medium.com/@plon/how-i-made-my-unifi-controller-high-available-aa07df1d19c6. I decided to expand on it in order to provide a step-by-step method for setting up a cluster.
Additionally, the MongoDB configuration used here is taken from the corresponding article on Loïc Guillois' blog (in French): http://www.loicguillois.fr/mettre-en-place-le-sharding-et-le-failover-avec-mongodb
The tutorial was created under Ubuntu 16.04, and it has also been validated under Debian 9.0 and Ubuntu 18.04.
The environment in which we will work is as follows:
- Two dedicated servers: the Unifi1 server with IP X.X.X.X and the Unifi2 server with IP Y.Y.Y.Y.
- A Failover IP Z.Z.Z.Z, which can be assigned to either of the two servers as required.
These IPs are public IPs, accessible from the Internet.
We also have three DNS records, on the domain of your choice. For this example, yourdomain.com:
- unifi1.yourdomain.com pointing to X.X.X.X,
- unifi2.yourdomain.com pointing to Y.Y.Y.Y,
- unifi.yourdomain.com pointing to Z.Z.Z.Z
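Before going further, you can check that the records resolve correctly; a quick sketch with dig, for example (any DNS lookup tool will do):
root@unifi1:~# dig +short unifi1.yourdomain.com
X.X.X.X
root@unifi1:~# dig +short unifi.yourdomain.com
Z.Z.Z.Z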
Both servers, as well as the Failover IP, are hosted by Dedi-Online, a French hosting provider. It is quite possible to use any other platform offering servers and Failover IPs; you just need to adapt the tutorial to your configuration! The infrastructure can also be deployed on a local network, but for the sake of the tutorial we will use publicly accessible dedicated servers here, to make our cluster available on the Internet.
The tutorial requires some skills with Unifi servers and Unix systems: it is possible to follow it without knowing anything about them, but you may get lost very quickly. I recommend having basic knowledge in these areas before you tackle this tutorial 🙂
The infrastructure we are going to deploy can be summarized as follows :

On paper, the architecture is quite simple: replication between two MongoDB instances on two different servers, each with a Unifi web interface connected to this MongoDB cluster. A Failover IP then provides access to the Unifi service, the aim being to work on either of the two servers transparently and with high availability. Let's move on to practice!
1. UNIFI INSTALLATION
First step: we will have to install Unifi and MongoDB on both servers. To install these packages as easily as possible, Glenn R. offers automatic installation scripts on the Ubiquiti forum that perform the installation from A to Z, without any headache! We will therefore download and run the script on our two servers in order to install Unifi and all its dependencies, effortlessly and above all without conflicts!
Note that the script executed here is the version for Ubuntu 16.04, the Ubuntu version chosen for this tutorial; other versions are available on the forum, along with the commands related to their installation: https://community.ubnt.com/t5/UniFi-Wireless/UniFi-Installation-Scripts-Works-on-Ubuntu-18-04-and-16-04/td-p/2375150.
At the time of writing this tutorial, the stable version of Unifi is 5.9.29. Adapt the commands to the current version in the same way.
On both servers, the following commands are issued:
root@unifi1:~# apt-get install ca-certificates -y
root@unifi1:~# wget https://get.glennr.nl/unifi/5.9.29/U1604/unifi-5.9.29.sh; chmod +x unifi-5.9.29.sh
root@unifi1:~# ./unifi-5.9.29.sh
root@unifi2:~# apt-get install ca-certificates -y
root@unifi2:~# wget https://get.glennr.nl/unifi/5.9.29/U1604/unifi-5.9.29.sh; chmod +x unifi-5.9.29.sh
root@unifi2:~# ./unifi-5.9.29.sh
The script runs automatically, without any action required on your part, except for one prompt, where you must select No for the question "Would you like to update the controller version when running the following command?"
This question determines whether Unifi gets updated when you update the system packages. To avoid future incompatibilities with our installation, it is preferable to perform future Unifi updates manually on the cluster.
2. CREATION OF THE MONGODB SERVICE
Unifi automatically starts a MongoDB instance when its service is launched at startup. We will create a MongoDB instance "external" to the Unifi service, so that it is started with the system and can be configured for replication.
On both servers, we create the mongodb service with the necessary parameters to launch it at startup:
root@unifi1:~# nano /etc/systemd/system/mongodb.service
[Unit]
Description=High-performance, schema-free document-oriented database
After=network.target
[Service]
User=mongodb
ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf
[Install]
WantedBy=multi-user.target
root@unifi2:~# nano /etc/systemd/system/mongodb.service
[Unit]
Description=High-performance, schema-free document-oriented database
After=network.target
[Service]
User=mongodb
ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf
[Install]
WantedBy=multi-user.target
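If the new unit is not picked up immediately, a systemd reload may be needed before the service can be enabled:
root@unifi1:~# systemctl daemon-reload
root@unifi2:~# systemctl daemon-reload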
On both servers, we then create the MongoDB runtime folder, which does not necessarily exist after the Unifi installation:
root@unifi1:~# mkdir /var/run/mongodb/
root@unifi2:~# mkdir /var/run/mongodb/
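Since the service runs as the mongodb user (per the unit file above), that user should own the folder. Also note that on systemd distributions, /var/run usually lives on a tmpfs that is emptied at each reboot, so the folder can disappear after a restart; a tmpfiles.d entry can recreate it automatically. A minimal sketch, to run on both servers (mode and ownership may need adapting to your install):
root@unifi1:~# chown mongodb:mongodb /var/run/mongodb/
root@unifi1:~# echo "d /var/run/mongodb 0755 mongodb mongodb" > /etc/tmpfiles.d/mongodb.conf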
On both servers, we will then modify the MongoDB configuration file. The arguments to be modified are as follows:
root@unifi1:~# nano /etc/mongod.conf
root@unifi2:~# nano /etc/mongod.conf
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0

replication:
  replSetName: "HA"
bindIp must be set to "0.0.0.0" for MongoDB to listen on all interfaces and not just locally, which is necessary for each MongoDB instance to be able to contact the other server.
Replication must be enabled (the "replication:" section is commented out in the default file), and replSetName must be configured. This is the identifier of the replica set; you can use any name you want, but remember it for later. Here we will call our MongoDB cluster "HA".
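A word of caution: with bindIp at "0.0.0.0" on public servers and no MongoDB access control configured, port 27017 is reachable from the whole Internet. It is prudent to restrict it to the two cluster members; a minimal sketch with ufw, assuming ufw is your firewall (adapt the IPs, or translate to your firewall of choice):
root@unifi1:~# ufw allow from Y.Y.Y.Y to any port 27017
root@unifi1:~# ufw deny 27017
root@unifi2:~# ufw allow from X.X.X.X to any port 27017
root@unifi2:~# ufw deny 27017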
Then, on both servers, the mongodb service is enabled at startup and launched:
root@unifi1:~# systemctl enable mongodb
root@unifi1:~# service mongodb start
root@unifi2:~# systemctl enable mongodb
root@unifi2:~# service mongodb start
You can check that everything is working well via the "service mongodb status" command:
root@unifi1:~# service mongodb status
root@unifi2:~# service mongodb status
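The output should report the service as active; expect something along these lines (details will vary):
root@unifi1:~# service mongodb status
● mongodb.service - High-performance, schema-free document-oriented database
   Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
   Active: active (running)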
3. IMPLEMENTATION OF REPLICATION FOR MONGODB
Our MongoDB service is now functional on both servers. To set up replication between our two previously configured MongoDB instances, simply run the "mongo" command on the primary server:
root@unifi1:~# mongo
MongoDB shell version v3.4.18
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.18
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
Server has startup warnings:
2018-11-20T14:09:14.685+0100 I STORAGE [initandlisten]
2018-11-20T14:09:14.685+0100 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-11-20T14:09:14.685+0100 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2018-11-20T14:09:15.262+0100 I CONTROL [initandlisten]
2018-11-20T14:09:15.262+0100 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-11-20T14:09:15.262+0100 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-11-20T14:09:15.262+0100 I CONTROL [initandlisten]
2018-11-20T14:09:15.263+0100 I CONTROL [initandlisten]
2018-11-20T14:09:15.263+0100 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-11-20T14:09:15.263+0100 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-11-20T14:09:15.263+0100 I CONTROL [initandlisten]
>
Then execute the following two commands in the MongoDB console. Be careful to replace X.X.X.X and Y.Y.Y.Y with the IPs of your primary and secondary servers. Likewise, replace "HA" with the replSetName defined earlier, if you modified it:
cfg = {
    _id : "HA",
    members : [
        { _id : 0, host : "X.X.X.X:27017" },
        { _id : 1, host : "Y.Y.Y.Y:27017" },
    ]
}
rs.initiate(cfg)
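If everything went well, rs.initiate() should answer with something like:
{ "ok" : 1 }
An "errmsg" field instead means the configuration was rejected (for example, a replSetName that does not match on both servers).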
To check if replication is active, you can use the command “db.isMaster()” in the Mongo console, on the primary server or on the secondary server:
HA:OTHER> db.isMaster()
{
    "hosts" : [
        "X.X.X.X:27017",
        "Y.Y.Y.Y:27017"
    ],
    "setName" : "HA",
    "setVersion" : 1,
    "ismaster" : true,
    "secondary" : false,
    "primary" : "X.X.X.X:27017",
    "me" : "Y.Y.Y.Y:27017",
    "electionId" : ObjectId("7fffffff0000000000000001"),
    "lastWrite" : {
        "opTime" : {
            "ts" : Timestamp(1542719553, 1),
            "t" : NumberLong(1)
        },
        "lastWriteDate" : ISODate("2018-11-20T13:12:33Z")
    },
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 1000,
    "localTime" : ISODate("2018-11-20T13:12:43.390Z"),
    "maxWireVersion" : 5,
    "minWireVersion" : 0,
    "readOnly" : false,
    "ok" : 1
}
HA:PRIMARY> exit
bye
root@unifi1:~#
root@unifi2:~# mongo
MongoDB shell version v3.4.18
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.18
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
Server has startup warnings:
2018-11-20T14:09:15.284+0100 I STORAGE [initandlisten]
2018-11-20T14:09:15.284+0100 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-11-20T14:09:15.284+0100 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2018-11-20T14:09:15.861+0100 I CONTROL [initandlisten]
2018-11-20T14:09:15.861+0100 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-11-20T14:09:15.861+0100 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-11-20T14:09:15.861+0100 I CONTROL [initandlisten]
2018-11-20T14:09:15.861+0100 I CONTROL [initandlisten]
2018-11-20T14:09:15.861+0100 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-11-20T14:09:15.861+0100 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-11-20T14:09:15.861+0100 I CONTROL [initandlisten]
HA:SECONDARY> exit
bye
root@unifi2:~#
The "prompt" of your MongoDB console indicates the status of your database, PRIMARY or SECONDARY. Our replication is now functional.
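You can also ask the replica set for the state of each member with "rs.status()"; a quick sketch in the mongo console (the exact output fields vary by version):
HA:PRIMARY> rs.status().members.forEach(function(m) { print(m.name + " : " + m.stateStr) })
X.X.X.X:27017 : PRIMARY
Y.Y.Y.Y:27017 : SECONDARY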
4. IMPLEMENTATION OF THE UNIFI CLUSTER
Now that we have correctly configured MongoDB on both servers and have a replicated database, we must tell Unifi to work on this new database rather than the instance it usually runs on. A configuration file is available for this, described in the Ubiquiti documentation: https://help.ubnt.com/hc/en-us/articles/205202580-UniFi-system-properties-File-Explanation
On both servers, add the following lines at the end of the file /usr/lib/unifi/data/system.properties ("X.X.X.X" is my primary, "Y.Y.Y.Y" is my secondary, and "HA" is the name of the MongoDB replicaSet defined in the configuration above). The backslashes before ":" and "=" are required escaping, since system.properties is a Java properties file.
Replace the IPs with those of your configuration and the replicaSet name with the one you defined, if you modified it. The reporter-uuid shown below comes from my installation; if your file already contains one, keep yours.
root@unifi1:~# nano /usr/lib/unifi/data/system.properties
root@unifi2:~# nano /usr/lib/unifi/data/system.properties
db.mongo.local=false
db.mongo.uri=mongodb\://X.X.X.X,Y.Y.Y.Y\:27017/unifi?replicaSet\=HA
reporter-uuid=bb2601fe-ba4c-44ee-b157-c8646bfecdec
statdb.mongo.uri=mongodb\://X.X.X.X,Y.Y.Y.Y\:27017/unifi?replicaSet\=HA
unifi.db.name=HA
Restart the Unifi service on both servers:
root@unifi1:~# service unifi restart
root@unifi2:~# service unifi restart
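To verify that each controller actually connected to the replica set rather than its local database, you can watch the Unifi server log while the service starts (the log path is typically /usr/lib/unifi/logs/server.log on this kind of install):
root@unifi1:~# tail -f /usr/lib/unifi/logs/server.log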
Connect via your web browser to the web interface of both servers. The URLs are as follows (replace with the IPs of your servers):
https://X.X.X.X:8443
https://Y.Y.Y.Y:8443
On each of the servers, go through all the steps of the Unifi configuration wizard, according to your desired configuration:
- Indicate your country and the corresponding time zone for your server.
- "Skip" directly; we will add the equipment later.
- "Skip" directly; we will configure the Wifi later.
- Then validate your configuration.
- If necessary, enter your ubnt.com credentials, otherwise click on "Skip".

It is necessary to connect to the web interface of both servers and complete the wizard on both, entering the same information, to avoid conflicts in the cluster. Once the wizards are finished, log in and log out at least once on the interface of each server.
Then restart the Unifi service on both servers:
root@unifi1:~# service unifi restart
root@unifi2:~# service unifi restart
The cluster is now functional. However, we still need to synchronize some folders between the two servers to avoid conflicts. Let's take care of that right away.
5. SYNCHRONIZATION OF UNIFI FOLDERS BETWEEN THE TWO SERVERS
To do this, we will use the Unix "unison" package, which allows simple two-way synchronization between two folders, including over SSH. That is exactly what we need, since here we will synchronize the same folder across two remote servers via SSH.
We will configure SSH to allow a root login from the primary server to the secondary server. The Unifi working folder will then be synchronized between the two servers over SSH, so that both servers have the same files.
First, install the following packages on both servers:
root@unifi1:~# apt-get -y install unison openssh-server ssh
root@unifi2:~# apt-get -y install unison openssh-server ssh
Then set a password for the root user on both servers (there isn't one by default, since we are on Ubuntu!).
root@unifi1:~# passwd
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
root@unifi2:~# passwd
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
We will also configure the SSH service to allow the root user to connect. On both servers, open the SSH configuration file:
root@unifi1:~# nano /etc/ssh/sshd_config
root@unifi2:~# nano /etc/ssh/sshd_config
Modify the “PermitRootLogin” line by changing the parameter to “yes” instead of “prohibit-password“.
# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes
Then reload the ssh service on both servers:
root@unifi1:~# service ssh reload
root@unifi2:~# service ssh reload
On the primary server, execute the following command (just press Enter at all three prompts):
root@unifi1:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:d0twWEH2Bzw7cjbBKN4fgyhMO6HoaHu46oFMjEt2y2Y root@unifi1
The key's randomart image is:
+---[RSA 2048]----+
| .==. |
| o .+..=. |
| . + +o+...+.|
|o . . = oo+ O. |
|.= + So. o= = |
|*.= o . o .. |
|o+ E . |
| * . |
|oo.o |
+----[SHA256]-----+
Then run the command “ssh-copy-id root@Y.Y.Y.Y” on your primary server, where Y.Y.Y.Y is the IP of your secondary server.
root@unifi1:~# ssh-copy-id root@Y.Y.Y.Y
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'Y.Y.Y.Y (Y.Y.Y.Y)' can't be established.
ECDSA key fingerprint is SHA256:mU2x0/iTNARkHD6oBKIcRXzBv6MA0KvDmArJmInRzPM.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@Y.Y.Y.Y's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@Y.Y.Y.Y'"
and check to make sure that only the key(s) you wanted were added.
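Optionally, now that the key is installed, password logins for root are no longer needed: key authentication still works with the default "prohibit-password" setting. To reduce exposure, you can therefore revert PermitRootLogin on the secondary server; a sketch using sed (you can just as well edit the file by hand):
root@unifi2:~# sed -i 's/^PermitRootLogin yes/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
root@unifi2:~# service ssh reload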
Run the following command on the primary server, replacing “Y.Y.Y.Y” with the IP of your secondary server:
root@unifi1:~# /usr/bin/unison -batch /usr/lib/unifi/data/sites/ ssh://root@Y.Y.Y.Y//usr/lib/unifi/data/sites/
The command should return this kind of output (note that it shows /var/lib/unifi/sites: on Debian-based installs, /usr/lib/unifi/data is a symlink to /var/lib/unifi):
Contacting server...
Connected [//unifi1//var/lib/unifi/sites -> //unifi2//var/lib/unifi/sites]
Looking for changes
Warning: No archive files were found for these roots, whose canonical names are:
/var/lib/unifi/sites
//unifi2//var/lib/unifi/sites
This can happen either
because this is the first time you have synchronized these roots,
or because you have upgraded Unison to a new version with a different
archive format.
Update detection may take a while on this run if the replicas are
large.
Unison will assume that the 'last synchronized state' of both replicas
was completely empty. This means that any files that are different
will be reported as conflicts, and any files that exist only on one
replica will be judged as new and propagated to the other replica.
If the two replicas are identical, then no changes will be reported.
If you see this message repeatedly, it may be because one of your machines
is getting its address from DHCP, which is causing its host name to change
between synchronizations. See the documentation for the UNISONLOCALHOSTNAME
environment variable for advice on how to correct this.
Donations to the Unison project are gratefully accepted:
http://www.cis.upenn.edu/~bcpierce/unison
Waiting for changes from server
Reconciling changes
file ----> default/map/5bf52bbeeb0e001b540fc8fd
local : file modified on 2018-11-21 at 10:56:14 size 84596 rw-r-----
unifi2 : absent
<---- file default/map/5bf52bc42b14dd1c0704fc3a
local : absent
unifi2 : file modified on 2018-11-21 at 10:56:20 size 84596 rw-r-----
<---- file default/map/5bf52d402b14dd22b6b3825c
local : absent
unifi2 : file modified on 2018-11-21 at 11:02:40 size 84596 rw-r-----
file ----> default/map/5bf52d40eb0e0021d7a683c3
local : file modified on 2018-11-21 at 11:02:40 size 84596 rw-r-----
unifi2 : absent
Propagating updates
UNISON 2.48.3 started propagating changes at 11:06:23.66 on 21 Nov 2018
[BGN] Copying default/map/5bf52bbeeb0e001b540fc8fd from /var/lib/unifi/sites to //unifi2//var/lib/unifi/sites
[BGN] Copying default/map/5bf52bc42b14dd1c0704fc3a from //unifi2//var/lib/unifi/sites to /var/lib/unifi/sites
[BGN] Copying default/map/5bf52d402b14dd22b6b3825c from //unifi2//var/lib/unifi/sites to /var/lib/unifi/sites
[BGN] Copying default/map/5bf52d40eb0e0021d7a683c3 from /var/lib/unifi/sites to //unifi2//var/lib/unifi/sites
Shortcut: copied /var/lib/unifi/sites/default/map/5bf52bc42b14dd1c0704fc3a from local file /var/lib/unifi/sites/default/map/5bf52d40eb0e0021d7a683c3
Shortcut: copied /var/lib/unifi/sites/default/map/5bf52d402b14dd22b6b3825c from local file /var/lib/unifi/sites/default/map/.unison.5bf52bc42b14dd1c0704fc3a.5fa200cbec221d2964bc46cefa7db551.unison.tmp
Shortcut: copied /var/lib/unifi/sites/default/map/5bf52bbeeb0e001b540fc8fd from local file /var/lib/unifi/sites/default/map/5bf52d402b14dd22b6b3825c
Shortcut: copied /var/lib/unifi/sites/default/map/5bf52d40eb0e0021d7a683c3 from local file /var/lib/unifi/sites/default/map/.unison.5bf52bbeeb0e001b540fc8fd.e76285476404edf53d5259a1c9fdcf60.unison.tmp
[END] Copying default/map/5bf52bbeeb0e001b540fc8fd
[END] Copying default/map/5bf52bc42b14dd1c0704fc3a
[END] Copying default/map/5bf52d402b14dd22b6b3825c
[END] Copying default/map/5bf52d40eb0e0021d7a683c3
UNISON 2.48.3 finished propagating changes at 11:06:23.68 on 21 Nov 2018
Saving synchronizer state
Synchronization complete at 11:06:23 (4 items transferred, 0 skipped, 0 failed)
To ensure that the synchronization works properly, create the test.txt file on the secondary server. Then run the synchronization command again on the primary server. The file must appear on the primary server.
root@unifi2:~# touch /usr/lib/unifi/data/sites/test.txt
root@unifi1:~# /usr/bin/unison -batch /usr/lib/unifi/data/sites/ ssh://root@Y.Y.Y.Y//usr/lib/unifi/data/sites/
Contacting server...
Connected [//unifi1//var/lib/unifi/sites -> //unifi2//var/lib/unifi/sites]
Looking for changes
Waiting for changes from server
Reconciling changes
<---- new file test.txt
local : absent
unifi2 : new file modified on 2018-11-20 at 15:09:41 size 0 rw-r--r--
Propagating updates
UNISON 2.48.3 started propagating changes at 15:11:36.53 on 20 Nov 2018
[BGN] Copying test.txt from //unifi2//var/lib/unifi/sites to /var/lib/unifi/sites
[END] Copying test.txt
UNISON 2.48.3 finished propagating changes at 15:11:36.54 on 20 Nov 2018
Saving synchronizer state
Synchronization complete at 15:11:36 (1 item transferred, 0 skipped, 0 failed)
Similarly, by deleting the test.txt file on the primary server and synchronizing again, it must disappear from both servers:
root@unifi1:~# rm /usr/lib/unifi/data/sites/test.txt
root@unifi1:~# /usr/bin/unison -batch /usr/lib/unifi/data/sites/ ssh://root@Y.Y.Y.Y//usr/lib/unifi/data/sites/
Contacting server...
Connected [//unifi1//var/lib/unifi/sites -> //unifi2//var/lib/unifi/sites]
Looking for changes
Waiting for changes from server
Reconciling changes
deleted ----> test.txt
local : deleted
unifi2 : unchanged file modified on 2018-11-20 at 15:11:36 size 0 rw-r--r--
Propagating updates
UNISON 2.48.3 started propagating changes at 15:11:56.11 on 20 Nov 2018
[BGN] Deleting test.txt from //unifi2//var/lib/unifi/sites
[END] Deleting test.txt
UNISON 2.48.3 finished propagating changes at 15:11:56.11 on 20 Nov 2018
Saving synchronizer state
Synchronization complete at 15:11:56 (1 item transferred, 0 skipped, 0 failed)
root@unifi2:~# ls /usr/lib/unifi/data/sites/test.txt
ls: cannot access '/usr/lib/unifi/data/sites/test.txt': No such file or directory
Add the following line to the root crontab on the primary server via the "crontab -e" command:
root@unifi1:~# crontab -e
no crontab for root - using an empty one
Select an editor. To change later, run 'select-editor'.
1. /bin/ed
2. /bin/nano <---- easiest
3. /usr/bin/vim.tiny
Choose 1-3 [2]: 2
# m h dom mon dow command
*/2 * * * * /usr/bin/unison -batch /usr/lib/unifi/data/sites/ ssh://root@Y.Y.Y.Y//usr/lib/unifi/data/sites/
Synchronization of Unifi folders is now functional.
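If a synchronization run ever takes longer than the two-minute interval, overlapping unison processes could step on each other. A variant of the crontab line using flock from util-linux serializes the runs and keeps a log (the lock and log paths are just examples):
*/2 * * * * /usr/bin/flock -n /tmp/unison-unifi.lock /usr/bin/unison -batch /usr/lib/unifi/data/sites/ ssh://root@Y.Y.Y.Y//usr/lib/unifi/data/sites/ >> /var/log/unison-unifi.log 2>&1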
Conclusion
Congratulations, you now have a fully functional Unifi cluster! You have two Unifi servers behind two different DNS records, with a Failover IP that you can switch between them. The two Unifi servers are permanently synchronized through the replication of their database and files.
You can make changes on either interface, return to the other server and retrieve these changes.
/!\ Warning: Unifi equipment must be adopted via the Failover IP. The devices then automatically follow whichever server the Failover IP points to. You can connect to the web interface of either server, or via the Failover IP; the data will be the same, because it is synchronized between the databases. /!\
The cluster is functional, but some side effects remain. First, a refresh (F5) of your Unifi web page may be necessary when you make a change on one server and switch to the other; Unifi is not really designed for this kind of operation. Similarly, Maps have some problems when switching from one server to another, and I have not yet found a solution to make them fully functional. If you have more information about this, please let me know! Finally, keep in mind that a two-member MongoDB replica set cannot elect a new primary on its own if one member goes down (a majority of voting members is required): a third member or an arbiter would be needed for automatic database failover.
Note that in my case, it was also necessary to manually configure an interface on each server to host the Failover IP. This configuration is specific to Dedi-Online servers; your platform may likewise require a particular configuration to host the Failover IP, so refer to your hosting provider's documentation.
For now, the Failover IP has to be switched manually, but fortunately, in the next article, we will see part 2 of this tutorial: how to control the Failover IP automatically and switch it automatically in case of failure of one of the Unifi or MongoDB services!
In the meantime, feel free to comment if you have any questions or need clarification! See you soon!