Nextcloud version (eg, 29.0.5): Nextcloud Hub 8 (29.0.8)
Operating system and version (eg, Ubuntu 24.04): Docker AIO
Nextcloud Desktop Client: 3.14.2
The setup:
I use the Nextcloud AIO Docker image on an installation of TrueNAS Scale 24.10. However, I am running it via docker compose directly and did not deploy it through TrueNAS's built-in apps. I enabled the following extra options in my compose file (a trimmed excerpt is shown after the list):
- NEXTCLOUD_UPLOAD_LIMIT=1000G
- NEXTCLOUD_MAX_TIME=86400
- NEXTCLOUD_MEMORY_LIMIT=4096M
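For context, this is roughly the relevant part of my compose file; ports, volumes, and other settings are trimmed, and I am assuming the standard AIO layout here, so treat it as a sketch rather than my exact file:

```yaml
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    container_name: nextcloud-aio-mastercontainer
    restart: always
    environment:
      # Raise the upload limit, PHP max execution time, and PHP memory limit
      - NEXTCLOUD_UPLOAD_LIMIT=1000G
      - NEXTCLOUD_MAX_TIME=86400
      - NEXTCLOUD_MEMORY_LIMIT=4096M
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      - /var/run/docker.sock:/var/run/docker.sock:ro
```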
Within Nextcloud I set up external storages pointing at my TrueNAS SMB shares and do not store any data in Nextcloud directly. I sync the folders to my MacBook using the desktop client with virtual files enabled.
The issue you are facing:
When uploading larger files via the desktop client I see two different errors. I tested the issue with a Windows 11 ISO file of 6.8 GB; with this same file, one of the two errors occurs every time.
Both begin the same way: I drop the file into a macOS folder that has virtual files enabled and is synced to my Nextcloud instance by the desktop client. The sync-status icon next to the file in Finder shows an empty circle that starts filling. During that time I can see corresponding network traffic on my Docker host, but not in Portainer's stats for the nextcloud-aio-nextcloud container. As soon as the circle is full, network traffic shows up in that container's stats and the file arrives on my SMB share: Nextcloud creates a .ocTransferId.part file there, which is copied (from Nextcloud's tmp dir, I guess?) to the share.
Then one of the following happens:
- The file is not copied completely
In that case I get this log entry:
[no app in context] Error: Expected filesize of 6800478208 bytes but read (from Nextcloud client) and written (to Nextcloud storage) 4511883264 bytes. Could either be a network problem on the sending side or a problem writing to the storage on the server side.
PUT /remote.php/dav/files/dylex/H%C3%B6rb%C3%BCcher/Win11_23H2_German_x64v5.iso
from 192.168.2.159 by dylex at 03.11.2024, 20:30:12
After that, the .part file is deleted and the sync restarts (the circle in Finder starts filling again).
- The file is copied completely but cannot be processed
In that case the .part file matches the size of the original file, but the upload fails with an out-of-memory error. In Portainer's stats for the nextcloud-aio-nextcloud container I can also see a spike in RAM usage at that time (see the second red box in the attached image). The log shows:
[PHP] Error: Allowed memory size of 4294967296 bytes exhausted (tried to allocate 2147221536 bytes) at /var/www/html/lib/private/AppFramework/Http/Request.php#440
PUT /remote.php/dav/files/dylex/H%C3%B6rb%C3%BCcher/Win11_23H2_German_x64v5.iso
from 192.168.2.159 by dylex at 03.11.2024, 20:33:01
After that, the .part file is NOT deleted and the sync restarts (the circle in Finder starts filling again).
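If I read that second error right, something on the server tries to hold a large part of the request body in memory instead of streaming it to the storage. Purely as an illustration of what I mean (this is not Nextcloud's actual code, and the real path through Request.php is certainly more involved), here is the difference in PHP:

```php
<?php
// Illustration only, not Nextcloud code: two ways to persist a PUT body.

// Buffered: reads the entire request body into one string. For a 6.8 GB
// upload this alone exceeds the 4 GiB memory_limit from my compose file.
$body = file_get_contents('php://input');
file_put_contents('/tmp/upload.part', $body);

// Streamed: copies the body in small chunks; memory use stays flat
// no matter how large the file is.
$in  = fopen('php://input', 'rb');
$out = fopen('/tmp/upload.part', 'wb');
stream_copy_to_stream($in, $out);
fclose($in);
fclose($out);
```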
I checked other issues regarding big file uploads but did not find anything that helped, and I am a bit lost as to where to start. Is what I am seeing intended behavior, i.e. does the maximum upload size depend on the available RAM? I thought that would only apply to uploads from the browser and not to the desktop client, because the client uses chunking (my understanding of the chunked flow is sketched below). Or is something going wrong here?
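For reference, this is my (possibly incorrect) understanding of the desktop client's chunked upload against the WebDAV API; the transfer id and chunk numbering are made up for illustration:

```
MKCOL /remote.php/dav/uploads/dylex/<transfer-id>
PUT   /remote.php/dav/uploads/dylex/<transfer-id>/00001    (first chunk, e.g. 10 MiB)
PUT   /remote.php/dav/uploads/dylex/<transfer-id>/00002    (second chunk)
...
MOVE  /remote.php/dav/uploads/dylex/<transfer-id>/.file
      Destination: /remote.php/dav/files/dylex/H%C3%B6rb%C3%BCcher/Win11_23H2_German_x64v5.iso
```

If that is roughly right, the final MOVE would be the step where the server assembles the chunks and writes them to the external storage, which matches the moment the .part file appears on my SMB share, so I do not see where a single multi-GiB allocation should come from.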
Any help will be greatly appreciated. If more information is necessary, I am happy to provide it. Thanks a lot in advance.