I’m pretty sure this question has already been answered somewhere, but I can’t find the right thread. If so, please point me in the right direction.
Right now I run a home server with an adequate connection (100 Mbit/s down / 40 Mbit/s up), but the number of users on my Nextcloud instance is growing, so for performance reasons I am thinking about moving the instance to a vserver.
However, the data stored on the instance amounts to about 5 TB, and I would like to avoid high storage costs.
Is there a preferred design for running Nextcloud on a vserver while keeping ncdata on my home server (e.g. mounting a directory from my home server on the vserver)?
Yes, on a vserver you can mount storage from your home server as external storage.
But that will not really improve speed; the whole setup would most likely end up slower than before, since every file access has to go through your home connection.
My preferred design is not to use it.
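For completeness, if you do want to try it anyway: a remote data directory is usually attached to the vserver either through Nextcloud’s External Storage app (e.g. over SFTP) or with an SSHFS mount at the OS level. A rough sketch, where the hostname and paths are placeholders, not a tested setup:

```shell
# Install SSHFS on the vserver (Debian/Ubuntu; package name may differ)
sudo apt install sshfs

# One-off mount of the home server's data directory (placeholder host and paths)
sshfs -o allow_other,reconnect \
    nextcloud@homeserver.example.com:/srv/ncdata /mnt/ncdata

# Or persistently via an /etc/fstab entry:
# nextcloud@homeserver.example.com:/srv/ncdata  /mnt/ncdata  fuse.sshfs  allow_other,reconnect,_netdev  0  0
```

Keep in mind that every file read and write then travels over the home uplink (40 Mbit/s here), which is usually why this design feels slower rather than faster.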
I think you have a few options.
Find the real bottleneck of your Nextcloud. Perhaps it is not the download/upload speed.
Test the speed e.g. with https://fast.com .
a.) Improve your home server by removing the bottleneck.
b.) Move everything to a VPS or a Nextcloud hoster on the internet.
Perhaps you can combine a.) and b.). But that only makes sense if you can host, say, 90% of the data at home and that 90% is rarely accessed (like an archive), while the remaining 10% is hosted on the internet and accessed most of the time.
Do you have suggestions on how to find the bottleneck? That would be awesome.
Otherwise I would go for option c), the combination of a.) and b.).
Most of the users sync their files with the desktop clients, but the web interface for Calendar and Collabora is really unresponsive. I don’t know how to find out which files are accessed frequently and which are not.
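To narrow the bottleneck down, a handful of standard Linux commands on the home server already give a rough picture; the interpretation hints in the comments are rules of thumb, not hard limits:

```shell
# Quick health check for a struggling Nextcloud host
uptime    # load average persistently above the CPU core count -> CPU bound
free -h   # little "available" memory -> RAM bound (PHP, database, Redis)
df -h     # a nearly full disk slows down the database and file handling
# Disk latency is a frequent culprit; iostat comes with the "sysstat" package:
# iostat -x 1 5   # high %util / await on the data disk -> I/O bound
```

Since sync works fine but the web UI (Calendar, Collabora) feels sluggish, CPU, RAM, or database latency are more likely suspects than raw bandwidth; the Nextcloud admin manual recommends enabling the PHP opcache and a memory cache (APCu/Redis) for exactly that kind of symptom.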
A very nice tool for computing directory sizes is “ncdu”.
Perhaps you can install it and browse the directories to check their real sizes.
Do they match your Nextcloud size and backup size?
Look out for huge directories, e.g. a backup inside a backup …
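In case it helps others: ncdu is interactive, so for a quick scripted overview plain du works too. A small self-contained demonstration (the scratch directory just stands in for ncdata):

```shell
# Interactive browsing (install e.g. with: sudo apt install ncdu):
#   ncdu /srv/ncdata          # path is an example
# Non-interactive alternative with du, demonstrated on a scratch directory:
demo=$(mktemp -d)
mkdir -p "$demo/files" "$demo/backup"
head -c 1048576 /dev/zero > "$demo/files/big.bin"   # 1 MiB sample file
du -h --max-depth=1 "$demo" | sort -h               # largest directories last
rm -r "$demo"
```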
Thanks for this recommendation.
ncdu is an awesome tool, but unfortunately it didn’t reveal the reason for the never-ending backups. The directory sizes match exactly the ones shown in the Nextcloud user overview.
I wonder why the backup of ncdata is so much slower than that of Medien, because Medien has a size and file count of the same magnitude and both are stored on the same btrfs volume (different subvolumes).
I found the solution to the endless backups.
There was a daily cronjob chmodding and chowning all files in the Nextcloud data directory.
This cronjob was once a recommendation (see “No hardened file permissions recommended anymore?”).
Now I have deactivated the job and the backup runs very fast.
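For anyone hitting the same symptom, the job was roughly of this shape; the paths and modes below are a hypothetical reconstruction of the old “hardened permissions” scripts, not my literal crontab:

```shell
# /etc/cron.daily/nextcloud-permissions (hypothetical reconstruction)
find /srv/ncdata -type f -exec chmod 0640 {} \;
find /srv/ncdata -type d -exec chmod 0750 {} \;
chown -R www-data:www-data /srv/ncdata
```

The catch: chmod and chown update every inode’s ctime even when the mode and owner don’t effectively change, so a backup tool that uses ctime for change detection (Borg’s file cache does by default) sees all 5 TB as modified every night and re-reads it.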