Maybe the sync client just isn't as well prepared to deal with this situation.
Definitely could be either I/O or locking. Could you try running iotop during the operation? I think it could be worth trying Redis at this point, especially if the I/O is also low. I wonder what's the best way to measure MySQL locking stats.
Definitely a temporary solution to the problem, even though I would like to find a fix
Especially have a look at I/O wait, which means the process has to wait for I/O operations on the file system. But with locking errors in the logs it's quite obvious that your DB server isn't fast enough for the locking; the best thing is to give Redis a try.
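For a quick look at I/O wait without installing iotop, you can sample the iowait field of /proc/stat directly; a minimal sketch (the MySQL command at the end addresses the locking-stats question above, with placeholder credentials and standard InnoDB status variables):

```shell
#!/bin/sh
# Sample the cumulative iowait counter from /proc/stat twice.
# Field layout of the "cpu" line: user nice system idle iowait ...
w1=$(awk '/^cpu /{print $6}' /proc/stat)
sleep 1
w2=$(awk '/^cpu /{print $6}' /proc/stat)
echo "iowait jiffies in the last second: $((w2 - w1))"

# For the locking side, InnoDB exposes row-lock counters; sample
# Innodb_row_lock_waits before and after a sync run (placeholder
# user/database -- adjust for your setup):
# mysql -u nextcloud -p -e "SHOW GLOBAL STATUS LIKE 'Innodb_row_lock%';"
```

A steadily climbing iowait count while the sync stalls points at the storage (SD card); a climbing row-lock wait count points at DB locking.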
This last test was uploading all the data with the sync client, and it fails…
I uploaded 7GB of data via the web UI, then when all was done I configured the sync client. The result? The sync client was able to download those 7GB of data smoothly.
I left the rest of the data uploading overnight via the web UI instead, and this morning everything was there. Now I will launch the sync client and get everything onto the laptops.
Not a beautiful solution, but I can live with it. In summary, it appears that the sync client has some connectivity problems.
I have another 32GB microSD and another Pi, but it is a Pi 2. If you want me to perform another test and you think it is valuable, I don't mind starting from scratch… these are the times I wish I had paid more attention in C programming classes.
The sync client does parallel uploads (I think up to 4 by default) and chunking of large files. On a Raspberry Pi both can create problems if you try to push through as many files as possible and hit system limits. With the 10,000 small files and a simple WebDAV client you get an impression of how many files you can transfer per second.
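That files-per-second measurement can be done with plain curl; a sketch, where the server URL, credentials, and the 1,000-file count are placeholders (the upload loop is commented out since it needs a live instance):

```shell
#!/bin/sh
# Generate a batch of 1 KiB test files to upload.
mkdir -p /tmp/nc-smallfiles
for i in $(seq 1 1000); do
    head -c 1024 /dev/urandom > "/tmp/nc-smallfiles/file$i.bin"
done

# Time a sequential, unchunked WebDAV upload; files-per-second is
# then 1000 divided by the elapsed time. Placeholder URL and user:
# time sh -c 'for f in /tmp/nc-smallfiles/*; do
#   curl -s -u user:password -T "$f" \
#     "https://nextcloud.example.org/remote.php/dav/files/user/test/"
# done'
```

Because this goes through a single connection with no chunking or parallelism, the result reflects the server's per-file overhead rather than the client's behavior.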
Sorry to chip in all of a sudden, but it would be great if the NC client had a configurable option for how many parallel uploads it uses. I mean, we all know that the RPi3 is somewhat limited for certain workloads, but at the same time I think there is a substantial number of users who still run NC on it.
Do we have any leverage to get this included in the sync client?
The chunk size can be limited by advanced configuration via configuration file:
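For reference, a sketch of what that looks like in the desktop client's configuration file (nextcloud.cfg; the option name chunkSize and the 5 MB value in bytes are assumptions based on the ownCloud-era client and may differ between client versions):

```
[General]
chunkSize=5242880
```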
I didn't find a similar setting for the parallel uploads. There is a patch that makes the client skip parallel chunk uploads for old ownCloud versions (propagateupload: Disable parallel chunk upload for owncloud < 8 · owncloud/client@063271e · GitHub); you could probably rewrite the code so it is always disabled, but then you would need to build your own client (or modify Nextcloud so it pretends to be a pre-8.0 ownCloud). For such modifications I'd really use a complete test system, because it's hard to predict what might go wrong.
Just for testing, I'd use a simple WebDAV client (Cyberduck, WinSCP) to evaluate plain WebDAV upload performance.
I see, that explains it. So uploading through the client should be more efficient than uploading through the web UI. Well, come to think of it, maybe it shouldn't be? Do you know the rationale behind doing parallel uploads? At the end of the day the total bandwidth limit is the same, and if anything you are using more resources on the server side.
I have never used Cyberduck, but I can give it a try. I guess that when you mention "upload performance" you mean checking whether the sync client fails to upload data at some point, correct?
If we need something more scientific, like checking how many I/O operations it can handle, bandwidth used, etc., I might not have the right skill set or experience to perform the test.
No problem! I can do the test right now; just send me the image and I will start. I imagine we should test with the sync client only, right? We see that with the web UI, even if a few timeouts or slowdowns appear, it resumes uploads smoothly.
Same problem here.
Clean new install, very same config as Edson_Rodrigues (RPi3B, Win7 64-bit, …)
Using Berryboot, no GUI installed
Install runs from a fast USB stick. Will change to a USB SSD within the next week.
100GB data, one file with 40GB, several files >5GB
My experience:
Freezes as described in previous posts.
Also, when these lags occur and I log in via SSH (PuTTY), the RPi is very unresponsive. It takes a few seconds until the login prompt appears and a few more until I can enter the password. The CPU load in htop seems OK, but of course I can't run htop until I'm logged in.
It became much better when I excluded the directory that contains the large files from syncing. Excluding my Outlook .PST files also helped.
Moving the instance from the SD card to USB with Berryboot also seems to have helped; it runs smoother.
Still, often enough the lags described in the previous posts occur (web GUI unresponsive for a while, desktop client stuck).
Also worth mentioning: the NC Windows desktop client often runs at 100% CPU load during the sync process. I rebooted the Windows machine several times to resolve that.
In general I would say that now, with all files synced for some days, the system runs smoother.
But it's still far less reliable than Dropbox.
I know we have a 2GB limit, but @wild wrote "one file with 40GB, several files >5GB"; hence my question: how is he working with such big files and NC?