Why is my Nextcloud frontend so slow?

And your server isn't chugging into swap, correct?

I don't use Apache, so if all of that is fine, I'm out of ideas. You could try disabling all plugins in Nextcloud to see if that does anything. Probably won't, but worth a shot.


From what I can read from your post, maybe you're missing the cron container? This will inevitably lead to slow requests, because the webcron is used instead.
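If you want to add it: with the official image, the cron container is just a second container started from the same image with the bundled /cron.sh entrypoint, plus switching the background-jobs mode. A rough sketch only (I'm assuming the app container is named nextcloud, uses the fpm tag and can reach the same database/network):

# second container from the same image, acting as a cron sidecar
docker run -d --name nextcloud-cron \
  --volumes-from nextcloud \
  --entrypoint /cron.sh \
  nextcloud:stable-fpm-alpine

# tell Nextcloud to use system cron instead of AJAX/webcron
docker exec -u www-data nextcloud php occ background:cron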

Your Synology NAS can certainly cause you trouble when used as a reverse proxy. How about making it work without a reverse proxy first and then adding it later? That way you can rule out the proxy as the source of trouble :slight_smile:


I've just recently set up Nextcloud on Docker and it's unfortunately not as convenient as other dockerized projects. However, I think this is being worked on in the repository already, if I read the PRs and issues correctly.

Anyway, here are a few gotchas I encountered that might help you:

  • I needed a lot of containers
    • nextcloud:stable-fpm-alpine (2 times, 1 for the app, 1 for cron)
    • mariadb (the database)
    • redis:alpine
    • nginx:alpine (for serving static assets)
• jrcs/letsencrypt-nginx-proxy-companion (for Let's Encrypt support)
    • jwilder/nginx-proxy (as a front server, exposed on ports 80 and 443)
• You probably don't need the last two if you're planning on using your own reverse proxy. One could probably also combine nginx-proxy and nginx:alpine, but I wanted to use nginx-proxy with as few injected configuration files as possible, because I like its standard configuration. Additionally, it allows me to easily run other applications on the same docker host if needed and serve them on other domains.
• I manage those containers with docker-compose, which is fairly convenient
  • Be aware that you cannot use ufw (Uncomplicated Firewall) with docker out-of-the-box. I had to do the following (see the sketch after this list):
    • Disable iptables in docker's daemon.json so that docker doesn't mess with the rules that ufw creates
    • Lock down the firewall with ufw and only allow the necessary ports in
    • Create a forwarding rule in iptables so that containers can send traffic out of the system and are reachable from the outside
    • Be careful: it's really easy to expose, say, your Redis instance to the world, and it'll be "bricked" within minutes by someone who wants to compromise it
  • Additionally, I'm mounting an nginx.conf and an uploadsize.conf into the nginx containers. They can be copied and/or adapted from here: https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/with-nginx-proxy/mariadb-cron-redis/fpm
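For the ufw part, the steps above looked roughly like this on my host (a sketch, not a copy-paste recipe; the subnet and interface are docker's defaults and may differ on your system):

# 1) stop docker from rewriting iptables rules itself: put { "iptables": false }
#    into /etc/docker/daemon.json, then restart the daemon
systemctl restart docker

# 2) lock the host down and only open what is needed
ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

# 3) let container traffic be forwarded and NATed out
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE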

If you're serious about using Docker, I can highly recommend taking a close look at the example from the official nextcloud/docker repository I linked above, and relying on docker-compose when using multiple containers (which is more or less mandatory for a production setup: you'll need Redis and the cron container, as well as the database).
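To make that a bit more concrete, a heavily stripped-down variant of that example could look like the sketch below (image tags, names and passwords are placeholders, not a drop-in config; I left out the two proxy containers since a reverse proxy like your Synology would sit in front anyway):

cat > docker-compose.yml <<'EOF'
version: '3'

services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=changeme
    volumes:
      - db:/var/lib/mysql

  redis:
    image: redis:alpine

  app:
    image: nextcloud:stable-fpm-alpine
    environment:
      - MYSQL_HOST=db
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=changeme
      - REDIS_HOST=redis
    volumes:
      - nextcloud:/var/www/html
    depends_on:
      - db
      - redis

  cron:
    image: nextcloud:stable-fpm-alpine
    entrypoint: /cron.sh
    volumes:
      - nextcloud:/var/www/html
    depends_on:
      - db
      - redis

  web:
    image: nginx:alpine
    ports:
      - 8080:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - nextcloud:/var/www/html:ro
    depends_on:
      - app

volumes:
  db:
  nextcloud:
EOF

docker-compose up -d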


As to your fear about dependencies, I believe this depends a lot on the platform you're using. In my opinion you should have a really good reason to run on ARM, because you will run into commonly used software that doesn't work well, or at all, on that architecture. If you run on a basic x86 server, however, it shouldn't be a problem at all to follow one of the many tutorials on setting up Nextcloud. Updating software will in most cases just be apt-get update && apt-get upgrade, or setting up unattended-upgrades, or both.
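For what it's worth, on Debian/Ubuntu the unattended-upgrades part usually boils down to two commands (plus reviewing /etc/apt/apt.conf.d/50unattended-upgrades afterwards):

apt-get install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades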

Lastly, I'm not sure if Nextcloud can be run in Docker Swarm. While I have no knowledge about that, there's one thing that speaks against scaling Nextcloud horizontally: It uses one central database, one central configuration and one central repository of files (aside from external storage).


I really hope you're able to overcome the performance problems, as they're not normal and not expected in any reasonable Nextcloud setup, even with Docker. :slight_smile:


What is the current memory_limit set to in php?

root@7ef6dd033c35:/usr/local/etc/php/conf.d# cat memory-limit.ini
memory_limit=512M

Is this php-fpm?

I'm using the nextcloud:18.0 tag, which according to my analysis is based on the php:7.3-apache-buster image.
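If anyone wants to double-check that, something along these lines shows the PHP runtime and the baked-in defaults without starting the full stack:

docker run --rm nextcloud:18.0 php -v
docker image inspect nextcloud:18.0 --format '{{json .Config.Env}}'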

And your server isn't chugging into swap, correct?

$ free -h
              total        used        free      shared  buff/cache   available
Mem:           3,8G        2,2G        697M         37M        875M        1,2G
Swap:          8,0G        2,1G        5,9G

It doesn't look like it, does it?

I don't use Apache, so if all of that is fine, I'm out of ideas. You could try disabling all plugins in Nextcloud to see if that does anything. Probably won't, but worth a shot.

I think I'll try that on my test instance. It is similar in performance. Nevertheless, thanks for the hints.

From what I can read from your post, maybe you're missing the cron container?

Ah, nope, I'm using the host's cron. Might be an anti-pattern, but it works. I didn't know I could use an additional cron container.

How about making it work without a reverse proxy first and then adding it later? That way you can rule out the proxy as the source of trouble

I can access the login page using the private IP address of the docker host and the port. When doing so, loading the frontend takes about 10s.

I've just recently set up Nextcloud on Docker and it's unfortunately not as convenient as other dockerized projects. However, I think this is being worked on in the repository already, if I read the PRs and issues correctly.

My compose file is ~160 lines long; I've been improving it for over two years now. It contains the following services:

  • nextcloud: the main container/web frontend
    • volumes:
      • data: mounted via docker-volume-netshare with the cifs driver, on my NAS
      • config: mounted via docker-volume-netshare with the cifs driver, on my NAS
      • base: named volume on my server (SSD)
      • apps: mounted via docker-volume-netshare with the cifs driver, on my NAS
  • mariadb: the database
    • volumes:
      • db:/var/lib/mysql:rw # named volume
      • ./my.cnf:/etc/mysql/my.cnf # custom db configuration
  • automysqlbackup: a database backup container
    • volumes:
      • mysqlbackup:/backup:rw # docker-volume-netshare + cifs driver
  • onlyoffice: well, it's OnlyOffice
    • no volumes
  • phpmyadmin: only started in rare cases, if required
    • no volumes
  • redis
    • no volumes

My comments on your containers are marked with an arrow →

  • nextcloud:stable-fpm-alpine (2 times, 1 for the app, 1 for cron) → I use only one; cron is done by my host. Not sure if this is correct, but it works. I guess within a swarm I should use the cron container.
  • mariadb (the database) → yep, I have a container for that in my compose file
  • redis:alpine → same here
  • nginx:alpine (for serving static assets) → this is covered by my NAS reverse proxy
  • jrcs/letsencrypt-nginx-proxy-companion (for Let's Encrypt support) → handled very conveniently by the Synology NAS
  • jwilder/nginx-proxy (as a front server, exposed on ports 80 and 443) → same as above

… it allows me to easily run other applications on the same docker host if needed and serve them on other domains.

That's actually quite interesting, but again solved in a very convenient way by the Synology NAS. Maybe it would be worth taking a look at it and comparing its performance, though.

Be aware that you cannot use ufw

Might be a naive question, but I have a router with only the necessary ports opened. Shouldn't this be sufficient?

If you're serious about using Docker, I can highly recommend taking a close look at the example from the official nextcloud/docker repository

Indeed, very interesting!

… a really good reason to run on ARM …

The reason was money. A few years ago, when I started, I started with only the NAS, and of course I started with a cheap model. Those cheap models were always ARM-based (and maybe still are, I don't know). As already mentioned, while trying to run a few "server apps" on this NAS, I very often ran into really nasty dependency problems (including attempts to compile dependent libraries, etc.). Therefore I decided to use x86-based machines for my "production" server and to use virtualization. Docker is a kind of virtualization (please don't start a discussion about that), and it has worked and still works very well for me (at least for other apps like Plex or Gitea).

Lastly, I'm not sure if Nextcloud can be run in Docker Swarm.

Pff, me neither. I tried it with my test instance (docker stack deploy -c docker-compose.yml), but I didn't succeed on a single-node swarm (scaling was planned for later).

It uses one central database, one central configuration and one central repository of files

My idea was to use docker-volume-netshare for the centralized volumes (and I'm already using it). Regarding the database: here I was thinking about setting up a Galera cluster, but I don't know how easy or complicated it is to set that up within a docker swarm. From my understanding there should be a simple recipe for it, but I haven't had the time to read the manuals and tutorials on that topic.

Thanks for your magnificent input! I really appreciate it!

It's using about 2GB of swap while you only have 3.8GB of RAM overall, so the equivalent of more than half your RAM is sitting in swap.

No, that's not a good thing. Is it the cause of your problems? Who knows. Probably.

I'd say set the memory_limit to 1024M just for kicks and giggles, because that helped me, but with that much swap being used I wouldn't bother.
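If you do want to test it with the official image, one low-effort way (just a sketch; the host-side file name is made up) is to mount an extra ini next to the one shown above:

echo 'memory_limit=1024M' > php-memory-limit.ini
docker run -d --name nextcloud \
  -v "$PWD/php-memory-limit.ini:/usr/local/etc/php/conf.d/zz-memory-limit.ini:ro" \
  nextcloud:18.0

The zz- prefix just makes PHP read it after the built-in memory-limit.ini, so the higher value wins.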

The last thing I'd check is whether you have high I/O wait in top when that initial page load happens. If you do, there's your problem: either the swap or a slow drive. If you don't, I'd check to see if your server is running in power-saving mode and has to burst to performance, which can sometimes take a bit.
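To check the power-saving angle, this is usually enough on a Linux host (cpupower may need to be installed separately, and inside a VM the cpufreq directory may not exist at all):

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# if it reports powersave or ondemand, you can test with the performance governor
sudo cpupower frequency-set -g performance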

Good luck!

Your router probably includes a basic firewall, and as long as you close all ports that don't run public services, that should be good enough for basic protection.

Yeah, I agree. There are also other reasons, like using a device with a useful size and shape (like a Raspberry Pi).

I second that. Just to be sure: you mount your data, config and apps via cifs from your NAS over the network? So every request for data has to travel across your home network from the Nextcloud machine to the NAS? I have no experience with cifs mounts, but I could imagine that these can be a performance bottleneck as well, especially with a network in between (even more so if the server is connected via WiFi) … I'd try putting the machine running Nextcloud next to the NAS serving the data, connect both via ethernet cable, and see how that works.
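If you want to put a rough number on the cifs question, a quick and dirty comparison of the mount against a local path already tells a lot (the paths below are placeholders for your setup):

# metadata-heavy test: many small files are where network filesystems hurt the most
time ls -lR /mnt/nas/nextcloud/apps > /dev/null
# sequential write, forcing the data to actually reach the share
dd if=/dev/zero of=/mnt/nas/nextcloud/ddtest bs=1M count=256 conv=fdatasync
rm /mnt/nas/nextcloud/ddtest
# then repeat both against a local SSD path and compare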

Hi,

So I ran vmstat -S M 1 100 > vmstat.log and in parallel opened my front page with Chrome (cache emptied beforehand). As far as I can see, swapping looks good and there is no critical I/O wait. But the columns in (interrupts) and cs (context switches) look high. What I read on a Server Fault thread is that context switches are normal.

if your server is running in power saving mode

I searched the net but couldn't find anything about that. Thanks for the tip.

r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 2254 1207 54 655 0 0 38 49 11 2 6 3 91 0 0
0 0 2254 1222 54 655 0 0 0 316 1130 2774 2 1 97 0 0
0 0 2254 1223 54 655 0 0 0 48 818 1673 1 1 98 0 0
0 0 2254 1223 54 655 0 0 0 36 784 1572 1 1 99 0 0
0 0 2254 1223 54 655 0 0 0 0 1862 4210 1 2 97 0 0
0 0 2254 1223 54 655 0 0 0 0 3874 9094 5 4 92 0 0
0 0 2254 1220 54 655 0 0 0 0 3166 7171 5 4 91 0 0
1 0 2254 1219 54 655 0 0 8 0 8083 20812 12 11 77 0 0
0 0 2254 1214 54 655 0 0 0 0 8811 22691 10 9 81 0 0
0 0 2254 1206 54 655 0 0 0 36 6775 17573 9 7 85 0 0
0 0 2254 1202 54 655 0 0 0 12 5043 12409 5 6 90 0 0
0 0 2254 1201 54 655 0 0 0 0 8089 21600 10 10 80 0 0
0 0 2254 1203 54 655 0 0 0 0 6384 14844 15 7 77 0 0
4 0 2254 1192 54 655 0 0 8 0 6964 17029 12 7 81 0 0
1 0 2254 1188 54 655 0 0 0 68 5169 11414 12 5 83 0 0
1 0 2254 1179 54 655 0 0 0 0 5753 10679 15 6 79 0 0
0 0 2254 1174 54 655 0 0 12 0 7443 14280 14 6 79 1 0
1 1 2254 1166 54 652 0 0 0 28 7241 16535 20 9 71 0 0
1 0 2254 1169 54 655 0 0 0 0 9515 17480 30 11 57 1 0
0 0 2254 1171 54 655 0 0 0 2104 6925 16149 8 6 85 0 0
0 0 2254 1171 54 655 0 0 0 12 7475 19194 7 7 87 0 0
7 0 2254 1102 54 655 0 0 76 0 7186 18565 15 8 76 0 0
6 0 2254 1103 54 655 0 0 4 0 6298 15608 37 9 54 0 0
8 0 2254 1164 54 655 0 0 20 408 6617 17190 11 8 81 1 0
1 0 2254 1164 54 655 0 0 8 188 6781 16646 10 7 83 0 0
1 0 2254 1165 54 655 0 0 64 268 4863 11860 9 5 85 1 0
0 0 2254 1183 54 655 0 0 0 0 1765 3673 1 2 97 0 0
0 0 2254 1183 54 655 0 0 0 0 3142 6643 3 3 94 0 0
2 0 2254 1189 54 655 0 0 0 0 1692 4071 7 7 85 0 0
1 0 2254 1201 54 655 0 0 0 280 989 1941 3 2 95 0 0

I'd try putting the machine running Nextcloud next to the NAS serving the data and connect both via ethernet cable and see how that works.

They are connected via Gigabit-LAN. There is only one switch in between.

This is the iperf result when running from server → NAS

------------------------------------------------------------
Client connecting to 192.168.1.200, TCP port 4999
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.201 port 35536 connected with 192.168.1.200 port 4999
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes   940 Mbits/sec
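For completeness, that output is from a plain iperf2 run; the two sides were started roughly like this:

iperf -s -p 4999                  # on the NAS
iperf -c 192.168.1.200 -p 4999    # on the server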

Just one further remark: I was able to deploy my test instance on a single-node docker swarm and so far it is working - I am able to access the front page and log in. I will run further tests and report back.

I don't know how to read vmstat, and you should be looking inside the VM anyway for this information, as the host is not the same thing as the guest.

Context switching is not a good sign either. Context switching is an expensive operation.

Anywho, good luck with this. I would personally trash the entire thing and start from scratch on bare metal or with a base installation on KVM/LXC.

I'm getting a headache and a pain in the gut when I hear/read "bare metal". But if that's my last option, well, sure, I would try it again.

I don't know about KVM/LXC. Are these commercial VM solutions? Can I build a host-redundant setup with them?

In the case of bare metal, what specs (CPU, RAM) would you recommend for a 5-10 user SOHO installation, if I want page load times between 1 and 2s?

Dunno, there is no official documentation on this.

I was hoping you could tell me from your experience ;-). But maybe that's a separate thread.

No, every installation is different, and I certainly haven't tested low-end hardware.

I pushed the memory_limit to 1024M on my test instance, but the lowest loading time was around 12s.

After I fiddled around a bit with my browser settings, I disabled AdBlock, Privacy Badger, and Firefox's integrated tracking protection for that site. After those changes I was down to ~6s.

Afterwards I disabled all apps and had a load time of 1.5-2s! Bingo!

Now, slowly enabling one app after another, I was able to identify some apps that add a few seconds to the loading time:

  • federation: around 1.5s (regardless of the app loaded)

Interestingly, disabling the following apps made loading slower (???):

  • Accessibility (~1s)

In summary, I disabled the following apps, which I don't really use, and thus reduced the mean loading time from 12s to 4.5s, which is OK for me:

  • Announcement center
  • Federation
  • First run wizard
  • News
  • Nextcloud announcements
  • Phone Sync
  • Privacy
  • Recommendations
  • Social
  • Update notification
  • Usage survey
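By the way, for anyone repeating this bisection: it can also be done from the shell with occ instead of clicking through the UI (the container name nextcloud is just an assumption for a typical setup):

docker exec -u www-data nextcloud php occ app:list
docker exec -u www-data nextcloud php occ app:disable federation
docker exec -u www-data nextcloud php occ app:enable federation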

I'm leaving the memory_limit at the standard value of 512M, as I don't see a benefit in increasing it.

Thanks a lot for your hints!


Hello, it's me again :slight_smile:

I just wanted to add my latest update on this topic (partly posted also here: Nextcloud really slow after installation - #6 by besendorf):

Meanwhile, I made a "bare-metal" test installation on the same host where my docker containers are running, and the bare-metal Nextcloud instance feels lightning fast - sometimes the load times are below 1s. I'm already thinking about moving back to bare metal, or about using VMs with something like Proxmox. But somehow I still don't want to give up docker, because I really like the concept.

Hi, I know this is an old post, but I have Nextcloud running on a very low-end Synology NAS. Although it is slow, it's not too slow for my purposes. I wanted to let you know that I solved some performance issues recently: I had LDAP running, I disabled it, and it's like night and day. Perhaps test this as well.

Just my thoughts


External S3 access can also make the Files app very slow when it is set to check for changes every time. In the settings of the S3 integration you can turn that off. It shaved 3-4 seconds off my response time with Files. It now responds in under 1.5 seconds almost consistently (with Redis).

God. I see too many of the @Paradox551 types that assume we're morons and need 24/7 "support" from them… our saviours. :anger:
