Moving a large chunk of data to external storage

I want to reorganize files within the folder structure of my NC account.
This includes moving several hundred GB into an external storage (WebDAV).

What would be the best way to go?
Moving the folder via the web UI runs into a timeout with only parts of the folder copied over. So would moving the files at the server/file-system level and running a files:scan afterwards do the trick?
Does that alter the timestamps in the database and trigger a redownload once a new sync task is created with the local files already present?

Another option that comes to my mind would be to remove the files in the cloud and re-upload them via a sync task from the client, but I guess that would take even longer?

I have a related problem. I use PuTTY to access my server, and I have tried to stop the backup command (I use rsync or cp) from timing out. I have enabled the keepalive option in PuTTY, but I can't seem to find a way to do this (I don't know exactly what is timing out; it could be something to do with my MS Windows laptop, or PuTTY, or the server itself). I suppose one solution is to get a cheap screen and plug it straight into the server, but I would prefer to use SSH. Does anyone have a simple answer, as I need to back up my Nextcloud data to a removable disk? Perhaps this problem could be solved more easily by using SSH on a Linux laptop?
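
One common way around a dropping SSH session is to run the backup inside a terminal multiplexer, so the command keeps running even if PuTTY disconnects. A minimal sketch, assuming tmux is installed and the paths are placeholders:

    # start a named session that survives SSH disconnects
    tmux new -s backup

    # inside the session, run the backup as usual (example paths)
    rsync -avh --progress /var/www/nextcloud/data/ /mnt/usb-backup/

    # detach with Ctrl-b d; after reconnecting via SSH, reattach with:
    tmux attach -t backup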

Yes, it can be done like this; however, these are then treated as new files and will trigger a new download.
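
For reference, the rescan step after a move on the file-system level would look roughly like this (a sketch; the user name and path are placeholders):

    # make Nextcloud's database match the file system again after the move
    sudo -u www-data php occ files:scan --path="username/files/Photos"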

I'd try to use a WebDAV client (e.g. WinSCP) and copy directly via WebDAV. This way all changes are tracked correctly in the database. If it is very slow, you might have potential to optimize the database, use Redis for file locking, etc.

So mounting both folders and moving? Is it faster than uploading the local files via the NC client to the new location?

Do both ways keep the file history (not important in my case, but I'm just curious)?

No, you just mount the Nextcloud storage. Within the Nextcloud storage, there should be the external storage you’ve added to Nextcloud.

If you make two independent connections, one to your Nextcloud and one to some other WebDAV storage, you'll remove the file from Nextcloud and upload it to the other storage as a new file.
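
To illustrate the difference: with a single mount of the Nextcloud WebDAV endpoint, a move stays inside Nextcloud and gets tracked in the database. A minimal sketch using davfs2 (URL, mount point, and folder names are placeholders):

    # mount the Nextcloud WebDAV endpoint
    sudo mount -t davfs https://cloud.example.com/remote.php/dav/files/USERNAME/ /mnt/nextcloud

    # a move within this one mount becomes a server-side WebDAV MOVE,
    # so Nextcloud updates its database instead of deleting and re-uploading
    mv /mnt/nextcloud/Photos /mnt/nextcloud/ExternalArchive/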

Ah I see, thx. I will try this out.

So I am going through the data folder by folder. The first part went more or less fine; now I run into a timeout on files of 3+ GB.

    [no app in context] Warning: GuzzleHttp\Exception\ConnectException: cURL error 28: Operation timed out after 30000 milliseconds with 0 bytes received (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for xxxxxxxxxxxxxxx/home/Photos/xxxxxxxxxxxxxxx/1B1A1009-1.tif at <<closure>>

As far as I understood, Nextcloud throws an exception if a request to the external storage takes longer than 30 seconds.

So I have two options: either keep the data directly in the NC data folder and not in external storage, or move it out of NC (or at least sync it with the external storage directly, not through NC).

So your connection transfers less than 100 MB/s (800 Mbit/s). Can't you just raise the timeout limit a bit for the large files?
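
If it really is the 30-second cap, raising it might already help for the large files. A minimal sketch, assuming a Nextcloud version that supports the davstorage.request_timeout system value (300 is just an example):

    # raise the request timeout for external DAV storages from 30 s to 300 s
    sudo -u www-data php occ config:system:set davstorage.request_timeout --value 300 --type integer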

So the external storage is located on the same host; I'm not sure why it is actually that slow.

I do the transfer folder by folder. It worked fine for folders with fewer than 200 files and up to 20 GB (per folder).
Now, with more and larger files, it is horrible. It seems WebDAV is resetting the connection if the whole process (not a single file) takes too long:

    Sabre\HTTP\ClientException: OpenSSL SSL_connect: Connection reset by peer in connection to XXXXXXXXXX:5006

In general it is much slower today. Unfortunately I can't change the WebDAV config, as I use Synology's WebDAV package. I have now tried to sync files into the external storage via the NC client. It feels faster; the client shows upload speeds between 2.5 and 5 MB/s. Still, there is certainly some overhead per file, so it is not the file size that slows things down but the number of files.

So, as there is no SMB support anymore, would SFTP be worth a try?

If you haven't optimized the database and you don't use Redis, this can be very slow; optimizing speeds it up a lot (though it surely won't be as fast as a native copy between the storages).

I have Redis configured for the distributed and locking caches:

    'memcache.local' => '\\OC\\Memcache\\APCu',
    'memcache.dsitributed' => '\\OC\\Memcache\\Redis',
    'memcache.locking' => '\\OC\\Memcache\\Redis',

I have now tried an unsecured WebDAV connection, and it works much faster. I have also tried SFTP, which gave the same result as unsecured WebDAV.

So SFTP would be the more secure option, but since both Nextcloud and the WebDAV server are on the same machine, using the insecure one should be fine (unless an attacker gets onto the machine, but then there are bigger issues than the unencrypted connection). Or is this wrong?

Are there other pros/cons for those two options?

I don't know your configuration and permissions. If it is local, you can just use a local external storage, where the webserver user needs write permissions for that folder on your local machine. Then it doesn't have to use any additional transport protocol.
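
Creating such a local external storage could look roughly like this (a sketch; the path and mount name are made up, and the files_external app must be enabled):

    # give the webserver user write access to the target folder (example path)
    sudo chown -R www-data:www-data /mnt/archive

    # add the folder as a local external storage named /Archive
    sudo -u www-data php occ files_external:create /Archive local null::null -c datadir=/mnt/archive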

In theory. But then you change the hostname, or you get some DNS problem, and it attempts external connections without encryption. Perhaps use a local IP (127.0.0.0/8) or something like that instead.

Typo? 'memcache.dsitributed' should be 'memcache.distributed'.

So I did some more tests.
The real bottleneck seems to be the WebDAV connection from my PC to the cloud. Moving files via the web UI is much faster. I also tried SFTP, which seems a bit faster than WebDAV.

I tried the 4 GB file again: it fails on WebDAV (also on the unsecured version); on SFTP it works but is also quite slow. Though I could bypass that issue by using PSD instead of TIFF files, which reduces the size by 50%.

Indeed, a typo in my conf. It seems Redis was never working. And fixing the typo leads me to an internal server error. Seems I need to investigate here.
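
For reference, this is what I'm going to check; a minimal sketch assuming Redis runs locally on the default port:

    # a missing PHP redis module is a common cause of an internal server error
    # right after enabling Redis caching
    php -m | grep redis

    # tell Nextcloud where to reach the Redis server (defaults shown; adjust as needed)
    sudo -u www-data php occ config:system:set redis host --value localhost
    sudo -u www-data php occ config:system:set redis port --value 6379 --type integer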

I will try the local version, though I might just change my workflow a bit and keep the large files in the Nextcloud storage.

So the cache was indeed broken. Now that it is fixed, performance has improved, but I did some more tests without external storage and found out that the main issue now is not slow transfer speed. The actual upload or download is fast, but the per-file handling causes big issues.

The upload of 3 big files (together around 5 GB) was done in less than 2 minutes; 120 files with the same total size took more than half an hour. So I guess the database is slow? Any ideas or starting points I can look into?
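
A few starting points that often come up for slow per-file handling (a sketch; which of these applies depends on your setup):

    # add database indices the installer may have skipped; this can
    # speed up queries against the file cache considerably
    sudo -u www-data php occ db:add-missing-indices

    # switch the background-job mode to cron (also needs a crontab entry
    # for the webserver user that calls cron.php every 5 minutes)
    sudo -u www-data php occ background:cron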

Yes!
But you can mount the external storage to your Nextcloud instance instead.

I always get this error message

    "CustomMessage": "cURL error 28: Operation timed out after 30000 milliseconds with 196231168 out of 277346299 bytes received (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://xxx.xxx.xxx.xxx/Musik/xxx.flac"

when synchronizing, with the Nextcloud Android client, a file that is larger than 200 MB and lies on a Synology mounted as external WebDAV storage.

Does anyone know how to fix it?