Hi, this is not so much a support question as a request for your experience and/or some technical background information:
I am using the desktop client on my machines to synchronize many different types of files: documents, text notes, and lots of various savegame files.
As I am currently restructuring my setup anyway, I am wondering about the performance implications of synchronizing many small folders vs. one big folder (from which things would then be symlinked).
While thinking about this, I realized that I do not really know how the synchronization works, in particular how the client figures out whether something has changed on the server or not. All I know is that having many synchronized folders incurs many separate HTTP requests to the Nextcloud server. But it does not look to me as if each checked file results in one request; rather, each synchronized folder seems to result in one request. Would that mean that having only one synchronized folder results in only one HTTP request per sync interval?
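To make the question concrete, here is a rough sketch of the kind of scheme I imagine (an assumption on my part, not Nextcloud's actual protocol): the server keeps an ETag-like change token per synchronized folder and bumps it on every write, so the client's periodic poll becomes a single token comparison per folder instead of a per-file check.

```python
# Hypothetical sketch: one "request" per synchronized folder per sync interval.
# The server bumps a change token on every write; the client only compares tokens.

class FolderState:
    """Server-side state for one synchronized folder (hypothetical)."""
    def __init__(self):
        self.token = 0                  # ETag-like change counter
        self.files: dict[str, str] = {}

    def write(self, name: str, content: str) -> None:
        self.files[name] = content
        self.token += 1                 # any change invalidates the folder token

class SyncClient:
    """Client that remembers the token it saw on the last sync."""
    def __init__(self, server: FolderState):
        self.server = server
        self.last_token = server.token

    def poll(self) -> bool:
        # One cheap comparison replaces checking every file individually.
        changed = self.server.token != self.last_token
        self.last_token = self.server.token
        return changed
```

If something like this is what happens, then the number of requests scales with the number of synchronized folders, not the number of files.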
In that case, client and server must somehow check whether any of the many files in that synchronized folder has been modified on either side. Does the client transmit the latest modification time across all of those files? Or the root of a hash tree over all the files?
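The hash-tree variant I have in mind would look roughly like this (again, my own sketch of the general Merkle-tree technique, not something I know the client to do): each file hashes its content, each directory hashes its children's hashes, so a single comparison of root hashes answers "did anything anywhere change?".

```python
import hashlib

def tree_hash(node) -> str:
    """Merkle-style hash: a file (str) hashes its content; a directory
    (dict of name -> subtree) hashes its sorted (name, child-hash) pairs.
    Any change in any file propagates up and changes the root hash."""
    if isinstance(node, str):           # leaf: file content
        return hashlib.sha256(node.encode()).hexdigest()
    h = hashlib.sha256()                # inner node: directory
    for name in sorted(node):
        h.update(name.encode())
        h.update(tree_hash(node[name]).encode())
    return h.hexdigest()
```

With such a scheme, one big folder would need only one root comparison per sync, but at the cost of recomputing hashes up the tree on every local change.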
Depending on how this works, having one large synchronized folder could be more, less, or equally efficient compared to having multiple folders, in terms of network or CPU usage. So I am just wondering whether any of this is the case, and whether any of you have practical experience with these two approaches.