Migrating UniFi Controller to Kubernetes
The old UniFi Controller and its required MongoDB have been a bit
of a hassle to keep updated while running directly on the host OS of
my little homelab server, so the time has come to migrate them to the new
linuxserver.io/docker-unifi-network-application
on my new Kubernetes cluster.
Warning
Beware of outdated documentation: most articles out there, like Install Unifi Controller on Kubernetes, are based on the deprecated linuxserver/unifi-controller, while others, like setting up the UniFi Network Controller using Docker, use jacobalberty/unifi-docker, which was quite outdated until recently.
Deploy the Unifi Network Application
System requirements
The UniFi Network Application requires a MongoDB backend, and both need writable directories and a dedicated user:
# groupadd unifi -g 119
# useradd unifi -u 119 -g 119 -s /usr/sbin/nologin
# mkdir -p /home/k8s/unifi/config /home/k8s/unifi/mongodb
# vi /home/k8s/unifi/init-mongo.sh
# chown -R unifi:unifi /home/k8s/unifi
# ls -lan /home/k8s/unifi
total 4
drwxr-xr-x 1 119 119 52 Dec 31 16:06 .
drwxr-xr-x 1 0 0 264 Dec 31 16:05 ..
drwxr-xr-x 1 119 119 0 Dec 31 16:05 config
-rw-r--r-- 1 119 119 425 Dec 31 16:06 init-mongo.sh
drwxr-xr-x 1 119 119 0 Dec 31 16:05 mongodb
Note the UID/GID (119) to be used later.
Create the script /home/k8s/unifi/init-mongo.sh using the exact
content from the
Setting Up Your External Database
documentation of
linuxserver/unifi-network-application:
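At the time of writing, the documented script looks like the following; copy the current version from the linked page rather than from here, since the databases and roles it creates may change between releases:
#!/bin/bash
# init-mongo.sh, as documented by linuxserver/unifi-network-application;
# it creates the unifi user with ownership of the application databases.
if which mongosh > /dev/null 2>&1; then
  mongo_init_bin='mongosh'
else
  mongo_init_bin='mongo'
fi
"${mongo_init_bin}" <<EOF
use ${MONGO_AUTHSOURCE}
db.auth("${MONGO_INITDB_ROOT_USERNAME}", "${MONGO_INITDB_ROOT_PASSWORD}")
db.createUser({
  user: "${MONGO_USER}",
  pwd: "${MONGO_PASS}",
  roles: [
    { db: "${MONGO_DBNAME}", role: "dbOwner" },
    { db: "${MONGO_DBNAME}_stat", role: "dbOwner" }
  ]
})
EOF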
Kubernetes deployment
There is no ready-to-use Kubernetes deployment in the documentation of linuxserver/unifi-network-application, or anywhere else I could find. The following deployment is based on the recommended docker-compose and parts of previous Kubernetes deployments:
- Plex Media Server mounts multiple directories and exposes multiple TCP and UDP ports.
- InfluxDB and Grafana have services that depend on connecting with each other over HTTP.
- Audiobookshelf has the websocket requirement.
- Kubernetes Dashboard enables HTTPS in the backend and disables TLS validation.
In addition to deploying the right set of objects, there are very specific requirements about which version of MongoDB can be used with each version of the UniFi Network Application. Check the correct version pairs under the Additional information section of the latest linuxserver/unifi-network-application release; e.g. 9.0.114 specifies that version 9.0 and newer supports up to MongoDB 8.0, and those are the versions used here.
UniFi Network Application deployment: unifi-network-app.yaml (full manifest, about 320 lines, not reproduced here).
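For orientation, the pieces of that manifest that the troubleshooting notes below keep referring to look roughly like this. This is a sketch, not the full manifest: the label selector and the Secret holding the password are assumptions, while the image tag, the MONGO_* values, the /config mount and the claim name come from the outputs shown in this post.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: unifi
  namespace: unifi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: unifi                  # assumed label
  template:
    metadata:
      labels:
        app: unifi
    spec:
      containers:
        - name: unifi
          image: lscr.io/linuxserver/unifi-network-application:9.0.114
          env:
            - name: PUID          # run as the dedicated host user
              value: "119"
            - name: PGID
              value: "119"
            - name: MONGO_HOST    # must match the MongoDB Service name
              value: mongo-svc
            - name: MONGO_PORT
              value: "27017"
            - name: MONGO_USER
              value: unifi
            - name: MONGO_PASS    # assumed Secret; a literal value works too
              valueFrom:
                secretKeyRef:
                  name: unifi-secrets
                  key: mongo-pass
            - name: MONGO_DBNAME
              value: unifi
            - name: MONGO_AUTHSOURCE
              value: admin
          volumeMounts:
            - name: unifi-config
              mountPath: /config  # must be exactly /config (see below)
      volumes:
        - name: unifi-config
          persistentVolumeClaim:
            claimName: unifi-pvc-config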
Troubleshooting
Putting this deployment together was a bit of a process, so here are some notes and warnings about the troubles that had to be sorted out along the way:
Warning
Double-check that the UniFi application data directory is mounted
exactly on /config; otherwise the application will create an
ephemeral directory that is discarded when the pod restarts.
Once this data is lost, the application has to be set up again, and
every access point adopted since the last backup has to be
factory-reset to be readopted.
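A quick way to verify the mount, assuming the deployment name unifi from the apply output below, is to check which filesystem backs /config inside the pod; it should be the host's disk, not the container's overlay filesystem:
$ kubectl -n unifi exec deploy/unifi -- df -h /config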
This UniFi image does not support running rootless.
Attempting to set a securityContext (as is done for the mongodb
image) results in fatal errors and a crash loop.
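For reference, this is the kind of securityContext that works for the mongodb container but must be left out of the unifi container; the linuxserver image starts as root and drops privileges to PUID/PGID by itself:
# fine on the mongodb container, fatal on the unifi container
securityContext:
  runAsUser: 119
  runAsGroup: 119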
This image does not seem to use the system.properties files.
The ~MONGO_...~ placeholders in /defaults/system.properties
should be replaced with the values of the environment variables
set in the deployment, but they are not:
$ kubectl -n unifi exec \
$(kubectl get pods -n unifi | grep unifi | cut -f1 -d' ') \
-- cat /defaults/system.properties | grep MONGO
db.mongo.uri=mongodb://~MONGO_USER~:~MONGO_PASS~@~MONGO_HOST~:~MONGO_PORT~/~MONGO_DBNAME~?tls=~MONGO_TLS~~MONGO_AUTHSOURCE~
statdb.mongo.uri=mongodb://~MONGO_USER~:~MONGO_PASS~@~MONGO_HOST~:~MONGO_PORT~/~MONGO_DBNAME~_stat?tls=~MONGO_TLS~~MONGO_AUTHSOURCE~
unifi.db.name=~MONGO_DBNAME~
$ kubectl -n unifi exec \
$(kubectl get pods -n unifi | grep unifi | cut -f1 -d' ') \
-- cat /config/data/system.properties
cat: /config/data/system.properties: No such file or directory
command terminated with exit code 1
Yet the environment variables are correctly set in the running pod:
$ kubectl -n unifi exec \
$(kubectl get pods -n unifi | grep unifi | cut -f1 -d' ') \
-- printenv | grep MONGO
MONGO_PORT=27017
MONGO_PASS=*************************
MONGO_USER=unifi
MONGO_HOST=mongo-svc
MONGO_AUTHSOURCE=admin
MONGO_DBNAME=unifi
MONGO_SVC_SERVICE_PORT=27017
MONGO_SVC_PORT_27017_TCP_ADDR=10.104.94.112
MONGO_SVC_PORT=tcp://10.104.94.112:27017
MONGO_SVC_PORT_27017_TCP_PROTO=tcp
MONGO_SVC_SERVICE_HOST=10.104.94.112
MONGO_SVC_PORT_27017_TCP=tcp://10.104.94.112:27017
MONGO_SVC_PORT_27017_TCP_PORT=27017
$ kubectl -n unifi exec \
$(kubectl get pods -n unifi | grep unifi | cut -f1 -d' ') \
-- cat /run/s6/container_environment/MONGO_PORT
27017
Pay close attention to how the pods are connected.
A misconfiguration in either the MongoDB service port or the
MONGO_HOST value in the UniFi deployment can easily lead to
the UniFi application failing to start because it cannot
connect to MongoDB:
$ kubectl -n unifi logs \
$(kubectl get pods -n unifi | grep unifi | cut -f1 -d' ') -f
[migrations] started
[migrations] no migrations found
───────────────────────────────────────
██╗ ███████╗██╗ ██████╗
██║ ██╔════╝██║██╔═══██╗
██║ ███████╗██║██║ ██║
██║ ╚════██║██║██║ ██║
███████╗███████║██║╚██████╔╝
╚══════╝╚══════╝╚═╝ ╚═════╝
Brought to you by linuxserver.io
───────────────────────────────────────
To support LSIO projects visit:
https://www.linuxserver.io/donate/
───────────────────────────────────────
GID/UID
───────────────────────────────────────
User UID: 119
User GID: 119
───────────────────────────────────────
Linuxserver.io version: 8.6.9-ls73
Build-date: 2024-12-24T17:37:56+00:00
───────────────────────────────────────
*** Waiting for MONGO_HOST mongo-svc to be reachable. ***
*** Defined MONGO_HOST mongo-svc is not reachable, cannot proceed. ***
The service name resolves, but the connection is refused:
$ kubectl -n unifi exec \
$(kubectl get pods -n unifi | grep unifi | cut -f1 -d' ') \
-- nc -zv mongo-svc 27017
nc: connect to mongo-svc (10.104.94.112) port 27017 (tcp) failed: Connection refused
command terminated with exit code 1
The service becomes reachable at mongo-svc:27017 only when the
mongo service declares targetPort: 27017, matching the container port.
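For clarity, a minimal sketch of that service; the app: mongo selector is an assumption, the rest matches what this deployment uses:
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
  namespace: unifi
spec:
  type: NodePort
  selector:
    app: mongo            # assumed pod label
  ports:
    - port: 27017         # what MONGO_HOST:MONGO_PORT connects to
      targetPort: 27017   # must match mongod's container port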
Clear the MongoDB data every time the deployment fails.
Scripts in the /docker-entrypoint-initdb.d folder are executed
only if the database has never been initialized before. If the
deployment fails at any point, delete the contents of
/home/k8s/unifi/mongodb right before reapplying the deployment.
Otherwise, if the database was previously initialized incorrectly,
the user unifi will not be found, and the unifi application will
constantly retry and log authentication errors:
$ kubectl -n unifi logs \
$(kubectl get pods -n unifi | grep unifi | cut -f1 -d' ') -f
[migrations] started
[migrations] no migrations found
───────────────────────────────────────
██╗ ███████╗██╗ ██████╗
██║ ██╔════╝██║██╔═══██╗
██║ ███████╗██║██║ ██║
██║ ╚════██║██║██║ ██║
███████╗███████║██║╚██████╔╝
╚══════╝╚══════╝╚═╝ ╚═════╝
Brought to you by linuxserver.io
───────────────────────────────────────
To support LSIO projects visit:
https://www.linuxserver.io/donate/
───────────────────────────────────────
GID/UID
───────────────────────────────────────
User UID: 119
User GID: 119
───────────────────────────────────────
Linuxserver.io version: 8.6.9-ls73
Build-date: 2024-12-24T17:37:56+00:00
───────────────────────────────────────
*** Waiting for MONGO_HOST mongo-svc.unifi to be reachable. ***
Generating 4,096 bit RSA key pair and self-signed certificate (SHA384withRSA) with a validity of 3,650 days
for: CN=unifi
[custom-init] No custom files found, skipping...
Exception in thread "launcher" java.lang.IllegalStateException: Tomcat failed to start up
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongoRuntimeService' defined in com.ubnt.service.db.CoreDatabaseSpringContext: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName='unifi', source='admin', password=<hidden>, mechanismProperties=<hidden>}
Caused by: com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName='unifi', source='admin', password=<hidden>, mechanismProperties=<hidden>}
Caused by: com.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): 'Authentication failed.' on server mongo-svc.unifi:27017. The full response is {"ok": 0.0, "errmsg": "Authentication failed.", "code": 18, "codeName": "AuthenticationFailed"}
The full response is in JSON format, but it doesn't tell much.
The reason behind the error is clearly stated in the MongoDB logs,
which are conveniently in JSON format so they can be piped to jq:
$ kubectl -n unifi logs \
$(kubectl get pods -n unifi | grep mongo | cut -f1 -d' ') \
| grep ACCESS | head -2 | jq
{
"t": {
"$date": "2024-12-31T20:18:09.557+00:00"
},
"s": "I",
"c": "ACCESS",
"id": 20251,
"ctx": "conn3",
"msg": "Supported SASL mechanisms requested for unknown user",
"attr": {
"user": "unifi@admin"
}
}
{
"t": {
"$date": "2024-12-31T20:18:09.558+00:00"
},
"s": "I",
"c": "ACCESS",
"id": 20249,
"ctx": "conn3",
"msg": "Authentication failed",
"attr": {
"mechanism": "SCRAM-SHA-256",
"speculative": true,
"principalName": "unifi",
"authenticationDatabase": "admin",
"remote": "10.244.0.85:53330",
"extraInfo": {},
"error": "UserNotFound: Could not find user \"unifi\" for db \"admin\""
}
}
In this case, the user unifi was not found because the database had
failed to initialize correctly in a previous iteration: the mongo-init
volume containing the script was not mounted in the mongo container.
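For reference, the missing mount looks roughly like this in the mongo deployment; the volume name is an assumption, while the claim name mongo-pvc-init appears in the apply output below:
# in the mongo container
volumeMounts:
  - name: mongo-init
    mountPath: /docker-entrypoint-initdb.d   # scripts here run only on first initialization
    readOnly: true
# in the pod spec
volumes:
  - name: mongo-init
    persistentVolumeClaim:
      claimName: mongo-pvc-init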
Make sure to enable the HTTPS backend protocol in Nginx.
Otherwise, Nginx will be unable to connect to unifi, because the application rejects plain HTTP requests on port 8443.
The (better) solution is to enable HTTPS as the backend protocol and instruct Nginx to skip TLS certificate validation. This is what the Kubernetes Dashboard deployment does as well; in fact, that's where I found those 3 lines.
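Those are presumably ingress-nginx annotations along these lines (a hedged sketch; the exact annotations depend on the ingress controller):
annotations:
  nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"   # talk TLS to port 8443
  nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"     # accept the self-signed certificate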
Do not bother trying to enable the plain HTTP UI on port 8880.
Since UniFi 8.2 (at least), it is no longer possible to disable the HTTP-to-HTTPS redirect.
Final result
Apply the deployment and wait a few minutes for services to start:
$ kubectl apply -f unifi-network-app.yaml
namespace/unifi created
persistentvolume/mongo-pv-data created
persistentvolume/mongo-pv-init created
persistentvolumeclaim/mongo-pvc-data created
persistentvolumeclaim/mongo-pvc-init created
deployment.apps/mongo created
service/mongo-svc created
persistentvolume/unifi-pv-config created
persistentvolumeclaim/unifi-pvc-config created
deployment.apps/unifi created
service/unifi-tcp created
service/unifi-udp created
ingress.networking.k8s.io/unifi-ingress created
$ kubectl get all -n unifi
NAME READY STATUS RESTARTS AGE
pod/cm-acme-http-solver-w26rm 1/1 Running 0 36s
pod/mongo-564774d869-dfk7h 1/1 Running 0 36s
pod/unifi-584f4847c7-vpthl 1/1 Running 0 36s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mongo-svc NodePort 10.103.150.110 <none> 27017:32717/TCP 37s
service/unifi-tcp LoadBalancer 10.105.232.48 192.168.0.173 6789:31231/TCP,8080:32034/TCP,8443:30909/TCP 37s
service/unifi-udp LoadBalancer 10.108.54.45 192.168.0.173 3478:31805/UDP,10001:32694/UDP,1900:30234/UDP 37s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mongo 1/1 1 1 37s
deployment.apps/unifi 1/1 1 1 37s
NAME DESIRED CURRENT READY AGE
replicaset.apps/mongo-564774d869 1 1 1 37s
replicaset.apps/unifi-584f4847c7 1 1 1 37s
If all goes well, there will be no errors in the logs, and the web UI will be available at https://uni.ssl.uu.am/:
[migrations] started
[migrations] no migrations found
───────────────────────────────────────
██╗ ███████╗██╗ ██████╗
██║ ██╔════╝██║██╔═══██╗
██║ ███████╗██║██║ ██║
██║ ╚════██║██║██║ ██║
███████╗███████║██║╚██████╔╝
╚══════╝╚══════╝╚═╝ ╚═════╝
Brought to you by linuxserver.io
───────────────────────────────────────
To support LSIO projects visit:
https://www.linuxserver.io/donate/
───────────────────────────────────────
GID/UID
───────────────────────────────────────
User UID: 119
User GID: 119
───────────────────────────────────────
Linuxserver.io version: 8.6.9-ls73
Build-date: 2024-12-24T17:37:56+00:00
───────────────────────────────────────
*** Waiting for MONGO_HOST mongo-svc to be reachable. ***
Generating 4,096 bit RSA key pair and self-signed certificate (SHA384withRSA) with a validity of 3,650 days
for: CN=unifi
[custom-init] No custom files found, skipping...
[ls.io-init] done.
From this point on, follow the documentation for UniFi - Backups and Migration to migrate the current site from the old controller to the new UniFi Network Application.
December 2025 Update to v10
Updating the UniFi Network application across minor versions was a simple operation, performed twice (from 9.0.114 to 9.3.45 and then to 9.4.19 only two days later) during the Migration to new ISP, and was not deemed worth documenting.
Updating the UniFi Network application from 9.4.19 to the v10.x branch is a much more significant platform shift, and thus worth documenting in more detail. As of late December 2025, version 10.0.160 is the established stable release for this branch, and a direct upgrade from version 9.4.19 is supported: no intermediate updates are required for the application itself.
While v10.0.160 is considered stable for general release, the newer v10.0.162 reached stable status on December 9 and is highly recommended, as it contains critical bug fixes for WiFi blackout schedules and WAN monitoring. The v10.x branch fully supports the current MongoDB 8.0.0 and Java 17/21 setup, so those components need not be updated.
What is recommended, when updating across major versions of the UniFi Network application, is to make full backups of the local storage of both the Mongo and the UniFi Network deployments (a minimal command sketch follows this list):
1. In the UniFi Network application, go to Settings > System > Backups and download a Settings Only backup file (.unf).
   - If this takes too long, just check that the newest automatic backup under /home/k8s/unifi/config/data/backup/autobackup is not too old.
2. Stop the UniFi Network deployment, then the Mongo deployment.
3. Make a full backup of both deployments' local storage (under /home/k8s).
4. Update the UniFi Network deployment manifest to version v10.0.162 and apply it.
5. Restart the Mongo deployment.
6. After a minute, restart the UniFi Network deployment.
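A minimal sketch of steps 2-6, assuming the deployment names mongo and unifi from the apply output above, that the manifest declares replicas: 1 (so reapplying it scales both deployments back up), and a backup path of my choosing:
$ kubectl -n unifi scale deployment unifi --replicas=0
$ kubectl -n unifi scale deployment mongo --replicas=0
# tar -C /home/k8s -czf /root/unifi-backup-$(date +%F).tar.gz unifi
$ vi unifi-network-app.yaml   # bump the image tag to :10.0.162
$ kubectl apply -f unifi-network-app.yaml
$ kubectl -n unifi rollout status deployment mongo
$ kubectl -n unifi rollout status deployment unifi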
If some of the access points show up as Offline, this may be incorrect. Check whether
they are accessible via SSH, that their configuration (cfg/mgmt) points to the correct
IP address (the LoadBalancer IP of the unifi-tcp service), and that they report
themselves as Connected when running info in their shell:
U7Lite:~# info
Model: U7-Lite
Version: 8.3.2.18064
MAC Address: 84:78:48:86:4a:ac
IP Address: 192.168.0.138
Hostname: U7Lite
Uptime: 113831 seconds
NTP: Synchronized
Status: Connected (http://192.168.0.173:8080/inform)
All access points should show up as Connected after a few minutes.
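If an access point is still pointing at an old inform URL, it can be re-pointed from that same shell; the URL below is the unifi-tcp LoadBalancer IP on port 8080, as reported in the info output above:
U7Lite:~# set-inform http://192.168.0.173:8080/inform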
