Nextcloud version : 16.0.5
Operating system and version : Debian 9
nginx version : 1.17.4
PHP version : 7.3
The issue you are facing:
I have been having trouble with Nextcloud upload/download speed for a long time, first on my home server and now on a hosted machine.
This time I decided to install from a tutorial, copied 1:1 from this link ( https://www.c-rieger.de/nextcloud-installation-guide-debian/ ). There is no difference: still poor network speed. I have tried my own configs, hacks, etc. tens of times. Not much difference…
Upload/Download via Nextcloud: ~10/~10 Mbit/s
Upload/Download via SFTP: maxes out my home connection at 500/500 Mbit/s
CPU usage is 5-10% and there is enough free RAM. I'm using OVH Block Storage. There is no speed difference whether I use local SSD or Block Storage. The server and the Block Storage are in the same DC, btw.
The output of your Nextcloud log in Admin > Logging:
No errors
The output of your config.php file in /path/to/nextcloud (make sure you remove any identifiable information!):
Config: https://pastebin.com/raw/28JXX3sr
The output of your Apache/nginx/system log in /var/log/____:
Redis log: https://pastebin.com/raw/60YkeNwR
Nginx error log: https://pastebin.com/raw/vsbCk7TD
PHP-FPM error log: https://pastebin.com/raw/gWxnU4n3 (just logs from restarts)
Did you optimize the settings of your database? The cache should be large enough, especially if you still have free RAM. For monitoring, you could try to investigate further where the bottleneck is, e.g. network or I/O operations.
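In case it helps: for MariaDB (assuming that's what the guide set up), the InnoDB buffer pool is the main cache knob. A sketch with example values you'd scale to your free RAM, not a recommendation for your specific box:

```ini
# /etc/mysql/mariadb.conf.d/99-nextcloud.cnf -- example values only
[mysqld]
innodb_buffer_pool_size = 1G    # cache for table data and indexes; raise if RAM is free
innodb_io_capacity      = 2000  # higher values suit SSD-backed storage
```

Restart MariaDB after changing it and watch whether the I/O graphs calm down.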
I’m not sure if the “an upstream response is buffered to a temporary file” warning is hinting at a problem or not. If that leads to a slowdown, you could think about disabling buffering (which could have other consequences) or buffering more directly in RAM without using the disk. I didn’t have the time to read more on that, so these ideas could be completely wrong.
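For reference, the relevant knobs sit in nginx's PHP-FPM location block; a sketch only (values untested, just to show where they would go):

```nginx
# Inside the location block that does fastcgi_pass to PHP-FPM:
fastcgi_buffers 64 64k;     # keep larger upstream responses in RAM
fastcgi_buffer_size 64k;    # buffer for the first part of the response
# fastcgi_buffering off;    # or disable response buffering entirely
```

Reload nginx afterwards and check whether the temp-file warning disappears from the error log.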
Don’t expect the same speed as SFTP; if it’s just for your own use, you should however reach around 100 Mbit/s.
You restarted nginx after the config change, right? Sorry, just want to make sure.
I noticed these two warnings in your redis log:
22899:M 27 Sep 19:41:23.783 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
22899:M 27 Sep 19:41:23.783 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
These come up on every start of Redis. Could you adjust the mentioned settings and check the performance again?
For the first warning you can run: sudo sysctl -w net.core.somaxconn=1024 (note that sudo echo 1024 > /proc/sys/net/core/somaxconn would fail, because the redirection runs in your unprivileged shell, not under sudo).
In addition to that you can create a file /etc/sysctl.d/redis.conf and write the following into that file: net.core.somaxconn=1024
I set both values to 1024 on my server and also increased the Redis tcp-backlog setting, but you can of course stick with the value 512, which is perfectly fine for the Redis default of tcp-backlog 511.
The other warning explains the solution itself; just mind the sudo redirection again: echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
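To keep the THP setting across reboots, as the warning suggests, you can add it to /etc/rc.local; a sketch assuming rc.local is still enabled on your Debian 9:

```shell
#!/bin/sh -e
# /etc/rc.local -- runs once at boot; disable Transparent Huge Pages for Redis.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
exit 0
```

Remember that Redis must be restarted after THP is disabled for it to take effect.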
What kind of files are you up- or downloading? Is it a large number of small files, or a few bigger ones?
Just asking because WebDAV is not good at handling many small files, from what I’ve heard, so it might be worth testing with one big file.
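To rule out per-file overhead, you could time a single large upload straight against the WebDAV endpoint with curl; a sketch where cloud.example.com, USER, and APP_PASSWORD are placeholders for your own values:

```shell
# Create one 100 MiB test file from /dev/zero so real bytes get transferred
# and the test isn't dominated by per-file WebDAV request overhead.
dd if=/dev/zero of=/tmp/nc-testfile bs=1M count=100 status=none

# Upload it to the Nextcloud WebDAV endpoint and print the average speed.
# Replace the host, user, and app password with your own values; "|| true"
# keeps the script going if the placeholder host is unreachable.
curl -s -u "USER:APP_PASSWORD" -T /tmp/nc-testfile \
  "https://cloud.example.com/remote.php/dav/files/USER/nc-testfile" \
  -w 'average upload speed: %{speed_upload} bytes/s\n' || true
```

If a single big file is fast while a folder of small files is slow, the bottleneck is request overhead rather than raw bandwidth.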
I mean, it can’t be Nextcloud in general. I reach the full upload and download speed my network card can handle. Not sure if some of the C-Rieger configuration is less than optimal, then.
Is any other netdata graph suspicious during upload, something that could narrow down the real bottleneck?
Just a stupid thought (but at this point maybe better than no thought): could the web server be bound to another network interface, one which doesn’t provide the higher up-/download speeds (Wi-Fi)?
Did you configure any routes that are not optimal? Maybe run a tracert/traceroute to see the route in use and rule out slow network paths?
And the device you are testing the download/upload rates with is definitely not using Wi-Fi for that, right?
Like I said, any other type of file transfer is perfectly fast, even when I download a file from a regular directory over HTTP. The server is always 6 hops away; every hop is 6-15 ms.
Yep, no wifi involved here
Yup, fast as it can be
The only problem is uploading to Nextcloud, no matter whether I use the web browser or the Nextcloud client: ~10 Mbit/s.
EDIT: Finally found a solution!!!
Disable http2 on nginx
I’m now getting ~500 Mbit/s upload to the Nextcloud server. Huh, unreal. I spent what felt like forever finding out what was wrong with this server.
But why does nginx’s http2 make uploads slower? I have not seen this in other places where I use nginx.
Thanks a lot for your perseverance! Although I did not find this thread while searching for a solution for slow uploads (my problem was generally slow navigation in Nextcloud), your solution of deactivating http2 (I’m on Apache) did solve my issue.
I had looked into a possibly faulty Redis configuration, I/O performance, CPU load, network config, all to no avail. And then I hit your post, so thanks a lot for posting your finding. My weekend has been made!
I am new to nginx; I have been using apache2. Can you tell me how you disabled http2 on nginx? I decided to try nginx because I have this same issue where I can’t get uploads faster than 25 MB/s, while the same file copies to my Windows server over FTP at 113 MB/s. I would really appreciate your help.
You simply remove all http2 occurrences from the listen directives:
If you have multiple virtual hosts all listening on the same port (443, for example), then you will need to remove http2 from every single listen directive.
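For example, assuming the usual TLS setup, a typical Nextcloud server block would change like this:

```nginx
# Before:
#   listen 443 ssl http2;
#   listen [::]:443 ssl http2;
# After (HTTP/2 disabled), then reload nginx:
listen 443 ssl;
listen [::]:443 ssl;
```

Run nginx -t to validate the config, then reload nginx for the change to take effect.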
Deactivating http2, however, shouldn’t be the real “solution”, as the web UI will become noticeably slower with it deactivated.
I have removed http2 from the configs for all my proxy hosts, disabled the http2 switch in the GUI of Nginx Proxy Manager, and then restarted the VM running it.
After the restart I checked all the config files for my proxy hosts, and http2 is missing from them, so I know the changes stuck, but this has not changed my 30 Mbps limit on transfer speeds.