Run top or htop on your Raspberry Pi during a WebDAV/Nextcloud sync and watch the CPU load.
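If you would rather log the load than watch it interactively, a snapshot like this can be run (or cron'd) during a sync; it is just a sketch and doesn't filter by process name, since the relevant processes (php-fpm, apache2, mount.davfs, …) vary by stack:

```shell
# Print a timestamp plus the five biggest CPU consumers right now.
# Redirect to a file and inspect afterwards to see what spiked during the sync.
date
ps -eo pcpu,pmem,comm --sort=-pcpu | head -n 6
```
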
It’s on an ARM dedicated server, receiving remote backups from multiple sites that I want stored encrypted. The issue is exacerbated by WHM refusing to recognise the WebDAV URL, so I’m having to push the backups manually, via an rsync mirror, through a WebDAV share (see my other thread).
Additionally, I’m storing a different site’s images as a backup; that’s the one part that doesn’t really need to be encrypted, unlike the others.
Nextcloud provides a handy GUI for visually checking that the backups have run and look sensible (in terms of size, date, etc.). Ideally, performance (throughput) wouldn’t be an issue and the same WebDAV method could be used for everything; I don’t need or want multiple methodologies, since more moving parts mean more scope for breakage.
The core issue remains: improving WebDAV throughput, whether by tuning parameters or by other means.
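On the tuning side, davfs2 itself has a few knobs worth trying before abandoning WebDAV; they live in /etc/davfs2/davfs2.conf. The values below are illustrative starting points from my reading of the man page, not tested recommendations, so check them against your davfs2 version:

```
# /etc/davfs2/davfs2.conf -- illustrative values, tune for your workload.

# Skip WebDAV lock round-trips; only safe if nothing else writes concurrently.
use_locks 0

# Larger read/write buffer (KiByte) for bigger transfers.
buf_size 64

# Bigger local cache (MiByte) so repeated reads don't hit the server.
cache_size 512

# Batch small writes before uploading (seconds; newer davfs2 versions only).
delay_upload 10
```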
I’m now looking at alternatives, such as OpenMediaVault/Syncthing on top of full-disk LUKS encryption. And here was me thinking Nextcloud would be a clean, simple solution.
You should mention this in the “solution”, because it may add significant latency to your setup.
Do you have to sync the 4300 often, or just initially?
Makes sense, I’ve just done that.
This was an attempt to move contents from our previous (Apache mod_dav-backed) WebDAV server to Nextcloud, which is intended to be its successor.
So, in this case I only have to sync it once, but:
- it’s only a small fraction of all the data that has to be synced, and
- similar amounts of data might also be added at any time during normal use of the WebDAV drive.

So performance like this would not really be usable for us, not even when working with only a few files.
Actually, even interactive performance when using it as a davfs2-mounted file system is really bad compared to the mod_dav-based solution. We’ll just see whether we can get used to the much slower speed, and/or try to use the sync client more.
@GOhrner PS: when I ran some tests on AWS I saw 25% CPU load for the davfs2 process on the client machine (4 cores), so this program might also have to be considered a bottleneck.