Trouble with WebDAV performance :-(

OK, there was significant I/O and many I/O waits, so I tweaked and tested a lot. I also used the mysqltuner script to get ideas about what I could still improve.

Apparently, MySQL updates to the oc_filecache table were the main cause of this.

The only setting that really made a difference was changing

innodb_flush_log_at_trx_commit

from its default of 1 to 0 or 2. Even with this setting it’s still quite slow, but orders of magnitude faster than before, and the I/O load caused by MySQL dropped significantly.
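For reference, here is where I put it; a minimal sketch assuming a Debian-style MariaDB layout (the exact file path is an assumption, adjust it for your distribution):

    # e.g. /etc/mysql/mariadb.conf.d/50-server.cnf
    [mysqld]
    # 1 (default): write + fsync the InnoDB log on every commit - safest, slowest
    # 2: write to the OS on every commit, fsync only about once per second
    # 0: write + fsync only about once per second - fastest, least safe
    innodb_flush_log_at_trx_commit = 2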

(Addendum: It looks like davfs2, or maybe the specific combination of davfs2 and Nextcloud’s WebDAV, is to blame for the terrible performance. Using Windows Explorer or Konqueror “webdavs://” URLs, access to and navigation through WebDAV is actually quite fast, comparable to the Nextcloud web interface. I didn’t perform any transfer/throughput tests, though. Not sure what davfs2 is doing there. :-/ But maybe it shouldn’t be recommended as a client in the documentation then…)
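If you have to stay with davfs2, its client-side behaviour can at least be tuned a bit. An untested sketch for /etc/davfs2/davfs2.conf (option names are from the davfs2.conf man page; defaults and availability may vary by version):

    # /etc/davfs2/davfs2.conf - untested example, check your davfs2 version
    use_locks   0     # skip the WebDAV LOCK/UNLOCK round trip per file
    cache_size  256   # MiB of local disk cache (default is much smaller)
    dir_refresh 60    # seconds before a cached directory listing is re-fetched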

Apparently, Nextcloud executes many micro-transactions when WebDAV operations are performed, which really hammer the DB server. How much this hurts probably also depends on the characteristics of the DB’s backing storage. In my case, the storage is backed by a synced DRBD (replicated/distributed block device) device, which is probably close to the worst-case scenario when it comes to fsync latency, as the block device has to wait for confirmation from the mirror before reporting success.
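If you want to quantify how expensive an fsync is on such a device, here’s a quick and dirty shell test (file name is arbitrary); every 4 KiB write below forces a synchronous flush, roughly what InnoDB does per commit with the default setting:

    # run in a directory on the DRBD-backed volume
    dd if=/dev/zero of=fsync-test bs=4k count=1000 oflag=dsync
    rm fsync-test
    # or, if installed, probe I/O request latency directly:
    # ioping -c 10 .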

As I understand it, the drawback of the DB config adjustment I made is the loss of guaranteed durability (the D in ACID) in case of a kernel or system crash: with a setting of 2, up to about a second of committed transactions can be lost. That does not sound like a good solution either. For the oc_filecache table alone this probably would not matter much, but the innodb_flush_log_at_trx_commit setting affects the whole MariaDB server with all databases running on it, so it really looks like a big compromise to me…
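At least the variable is dynamic, so one can try it at runtime for a before/after comparison and revert without a restart; something like this (needs SUPER privileges; the change does not persist across restarts):

    mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"
    # ... run the WebDAV workload, compare ...
    mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1;"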

Are there any other options to optimize oc_filecache performance? I wonder whether I’m the only one experiencing these performance issues - or maybe no one actually uses WebDAV access to the files managed by Nextcloud? That also sounds very unlikely to me, though, so I would expect configuration hints or howtos in the official documentation (“What to do to actually get usable WebDAV performance from Nextcloud”) or at least in third-party blogs or the like, but that doesn’t seem to be the case…

I think everyone is accessing the files via WebDAV. But when you use the desktop sync client you don’t care about the speed, since the files are synced in the background.

I didn’t transfer many small files, only one large ISO image, but I got 500 Mb/s between two EC2 instances on AWS.

I use this MariaDB instance only for Nextcloud, and I always keep current database backups. For uploads via the NC client there is a potential improvement: bundle smaller files together and issue one larger insert instead of many. Native WebDAV probably treats files individually; changing that would require more work. If you have ideas, feel free to share them with the developers.
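To illustrate why batching helps: with the default flush setting, every autocommitted statement pays its own fsync, while a grouped transaction pays only one. A conceptual sketch with a toy table (not the real oc_filecache schema):

    -- many micro-transactions: with innodb_flush_log_at_trx_commit=1,
    -- each autocommitted INSERT pays its own fsync
    CREATE TABLE t (id INT PRIMARY KEY, path VARCHAR(255));
    INSERT INTO t VALUES (1, 'a.txt');
    INSERT INTO t VALUES (2, 'b.txt');

    -- batched: the whole group pays a single fsync at COMMIT
    START TRANSACTION;
    INSERT INTO t VALUES (3, 'c.txt');
    INSERT INTO t VALUES (4, 'd.txt');
    COMMIT;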

What advantage does Sabre hold over the native Apache module? Being able to choose between them, via a config flag, might be useful.
Being able to use, for example, rsync directly (à la OpenMediaVault) would be equally useful.

Sabre is written in PHP, like the rest of Nextcloud - how would you connect the Apache module to the database? The file index lives in the filecache table, and all the sharing and permission logic is in the database as well. A working alternative is to use a different sync solution that syncs files outside the Nextcloud data folder. Mount that folder as external storage within Nextcloud, and then you can use the Apache WebDAV module or even other solutions: SFTP, Syncthing, …
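For reference, such a local external storage can also be created from the command line; a rough sketch, assuming the files_external app, a web server user of www-data, and a made-up /srv/sync directory that the external sync tool writes to:

    sudo -u www-data php occ app:enable files_external
    sudo -u www-data php occ files_external:create /synced local null::null -c datadir=/srv/sync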

Thanks for the clarity and ideas. :slight_smile:
As well as “standard” encrypted WebDAV backups, I also lftp files into a data subdirectory, then run an occ scan (roughly as sketched below). I accept that the imported files won’t be encrypted(?), but in this case they’re just publicly available JPEGs.
The idea of using external storage pointing at a local directory seems like a better option.
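For anyone curious, that lftp + occ scan workflow looks roughly like this (host, user and paths are made up - adjust to your setup):

    lftp -e "mirror -R /srv/site-backups /var/www/nextcloud/data/admin/files/backups; quit" sftp://backup@cloud.example
    sudo -u www-data php occ files:scan --path="admin/files/backups"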

Yes, they are not encrypted then (I’m not sure whether they get encrypted automatically later), and I’m not sure how it handles shared folders etc. The developers recommend against doing that, so there may be more downsides I’m not aware of.

My use, in this instance, is purely for server backup, so sharing and other features are of no relevance.
Cheers.

:roll_eyes:
The Local folder (sic) doesn’t appear to get encrypted.

With encryption, you must put files through Nextcloud; you can’t place them manually.

If it takes about 24 hours to sync less than 100 MB, you’ll notice this when using the client too - your folders will be out of sync all the time, I guess…

Maybe it helps a bit if the native client merges operations on several small files, as stated in another post in this thread, but I still have the impression that I’m seeing much more severe performance issues than most other users.

I now assume it’s because of the system my Nextcloud instance is running on: it’s in a VM on a two-machine cluster that uses DRBD to keep both cluster machines in sync. An fsync is probably extremely expensive in this setup, but I have no influence on this configuration and have to live with it.

So I’ll try running with

innodb_flush_log_at_trx_commit=2

and hope for the best…

Maybe you should look into using FTP, SFTP, SCP or rsync for this purpose - without knowing your additional constraints, of course, they all sound more sensible to me for the mentioned use case.
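For example, a plain rsync push over SSH (host and paths made up) bypasses the WebDAV stack entirely:

    rsync -az --delete /local/backups/ backupuser@server.example:/srv/backups/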

Somewhere I read that share information and permissions or something like that get lost in the process. (That does not seem to matter for ejsolutions’ use case anyway - which, however, also raises the question of why he or she is using Nextcloud for this use case at all.)

I’m a software developer, but I don’t have the slightest idea how Nextcloud works internally, so I’m not sure what ideas I could share with the devs… I cannot assess whether it would be viable, or even technically possible, to properly merge multiple WebDAV operations into a single DB transaction, which I guess could alleviate the immediate performance problems somewhat…

Hi, I did not read every post, but it sounds like I have encountered a similar problem. I wrote a blog post about it, but did not find a solution:
https://akseliratamo.fi/2018/09/21/nextcloudpi-samba-vs-webdav-speed/

Run top or htop on your Raspberry Pi while syncing via WebDAV/Nextcloud and watch the CPU load.

It’s on a dedicated ARM server, receiving remote backups from multiple sites that I want to store encrypted. The issue is exacerbated by not being able to get WHM to recognise the WebDAV URL, so I’m having to push the backups manually, via an rsync mirror, through a WebDAV share (see my other thread).

Additionally, I’m storing a different site’s images as a backup - that’s the part that doesn’t really need to be encrypted, unlike the others.

Nextcloud presents a handy GUI that allows visual checks that the backups have run and look sensible (in terms of size, date etc.). Ideally, performance (throughput) wouldn’t be an issue and the same WebDAV method could be used for everything. I don’t need or want multiple methodologies; more methods mean more scope for breakage.

The core issue remains: improving WebDAV throughput, by tuning parameters or other means.
I’m now looking at alternatives, like OpenMediaVault/Syncthing on top of full-disk LUKS. And here was me thinking Nextcloud would be a clean, simple solution. :wink:

You should mention this in the “solution”, because the DRBD setup may add significant latency to your storage.

Do you have to sync the 4300 files often, or just initially?

Makes sense, I’ve just done that.

This was an attempt to move content from our previous (Apache mod_dav-backed) WebDAV server to Nextcloud, which is intended to be its successor.

So, in this case I only have to sync it once, but

  1. it’s only a small fraction of all the data that has to be synced, and
  2. similar amounts of data might be added at any time during normal use of the WebDAV drive.

So performance like this would not really be usable for us, not even when working with only a few files.

Actually, even interactive performance when using it as a davfs2-mounted file system is really bad compared to the mod_dav-based solution. We’ll just see whether we can get used to the much slower speed, and/or try to use the sync client more.

@GOhrner PS: When I ran some tests on AWS, I saw 25% CPU load for the davfs2 process on the client machine (4 cores), so I guess this program might also have to be considered a bottleneck.