Nextcloud version (eg, 12.0.2): 16.0.4
Operating system and version (eg, Ubuntu 17.04): Ubuntu 16.04.6
Apache or nginx version (eg, Apache 2.4.25): Nginx 220.127.116.11
PHP version (eg, 7.1): 7.2.24
The issue you are facing:
While downloading a big file from Nextcloud, all other pages hosted on the same Plesk web server become unreachable, caused by excessive I/O load on the disk.
I monitored the disk usage during the Nextcloud download with

    iostat -dx /dev/sda 5

and %util goes to 100%. When I download big files from a website hosted on the same server, for example, %util never goes that high. So the problem is definitely caused by Nextcloud.
Is this the first time you’ve seen this error? (Y/N): Y
Steps to replicate it:
- Download a big file from your Nextcloud
- Monitor your disk usage with iostat -dx /dev/sda 5
- See a high disk usage
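For context, the %util figure iostat reports is derived from the kernel's per-device "time spent doing I/O" counter in /proc/diskstats. A minimal sketch of the same measurement, assuming the disk is sda (adjust for your system):

```shell
#!/bin/sh
# Sketch: approximate iostat's %util by sampling /proc/diskstats twice.
# Field 13 is the cumulative milliseconds the device spent doing I/O.
DEV=${DEV:-sda}
busy() { awk -v d="$DEV" '$3 == d { print $13 }' /proc/diskstats; }
t1=$(busy)
# Fall back to the first listed device if "sda" does not exist here.
if [ -z "$t1" ]; then
  DEV=$(awk 'NR == 1 { print $3 }' /proc/diskstats)
  t1=$(busy)
fi
sleep 1
t2=$(busy)
# ~1000 ms elapsed, so busy-ms / 10 approximates %util over the interval.
echo "util on $DEV: $(( (t2 - t1) / 10 ))%"
```

If this stays near 100 only while Nextcloud serves the download, the disk is saturated by that transfer.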
What is creating the high I/O? Depending on what it is, you can probably improve this by using caching (Redis for file locking; the database also has a few options that improve performance a lot). However, this is normally even more visible with a large number of small files rather than one big file …
A download from my Nextcloud creates high I/O. When I download another file that is not in Nextcloud but, for example, in a subdirectory of a hosted website, it never gets this high.
Yes, because Nextcloud is not only delivering the file: it checks entries in the database, locks the file, and perhaps some apps are interacting (antivirus), …
So check whether it is related to a single process (database, PHP, web server) and, depending on which one, check the log files and perhaps increase the log level …
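If nothing stands out, temporarily raising Nextcloud's log level can surface what happens during the download. A config.php fragment showing only the relevant keys (the log path is an assumption based on a default install):

```php
<?php
// config/config.php – temporary debug logging while investigating.
$CONFIG = array (
  'loglevel' => 0,   // 0 = debug; the default is 2 (warning)
  'logfile'  => '/var/www/nextcloud/data/nextcloud.log',
);
```

Remember to set the level back afterwards, since debug logging itself adds disk writes.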
Okay, I understand. There shouldn't be an app interacting, because there is no antivirus or any other app installed that "monitors" files.
I monitored it but didn't see anything unusual: only the nginx process goes a bit higher, but that is really normal when downloading anything from a website. The database load stays low and does not go up.
Hmm, Redis for file locking and APCu for local caching might help.
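For reference, a config.php sketch of that suggestion; the Redis host and port are assumptions (a Redis instance on the same host, default port):

```php
<?php
// config/config.php – APCu as local cache, Redis for file locking.
$CONFIG = array (
  'memcache.local'   => '\OC\Memcache\APCu',
  'memcache.locking' => '\OC\Memcache\Redis',
  'redis' => array (
    'host' => '127.0.0.1', // assumption: Redis runs locally
    'port' => 6379,
  ),
);
```

With `memcache.locking` set, the file-locking entries move out of the database and into Redis, which removes those writes from the disk.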
Do you use SQLite as the database? Otherwise I would have expected MySQL/MariaDB/PostgreSQL to show up in iotop. With SQLite it makes sense that the I/O is attributed to php-fpm, since there is no dedicated database server process, and high I/O is expected: AFAIK the whole database file can get rewritten on any access. Consider migrating to MariaDB, where the individual database tables are split into separate files by default and are cached. Adding Redis reduces I/O further, because without it file locking is done in the database, hence the rewritten file(s).