Error syncing large files

  • Nextcloud Server version (e.g., 29.x.x):
    • Nextcloud Hub 9 (30.0.6)
  • Operating system and version (e.g., Ubuntu 24.04):
    • Linux 6.6.42-060642-generic #202407250837 SMP PREEMPT_DYNAMIC Thu Jul 25 08:48:30 UTC 2024 x86_64
  • Web server and version (e.g., Apache 2.4.25):
    • Apache/2.4.63 (Unix) (fpm-fcgi)
  • PHP version (e.g., 8.3):
    • 8.3.17
  • Is this the first time you’ve seen this error? (Yes / No):
    • yes
  • When did this problem seem to first start?
    • as soon as I started syncing a large file
  • Installation method (e.g. AIO, NCP, Bare Metal/Archive, etc.)
    • AIO
  • Are you using Cloudflare, mod_security, or similar? (Yes / No)
    • no

Summary of the issue you are facing:

I am trying to sync a large file (ca. 3 GB) from my local Mac client to the remote NC.
It starts syncing, and within seconds the progress bar is at around 1.5 GB (I am on a very fast Internet connection, but that speed is unrealistic, so that is probably an error too).
Then, at some point, it stops and the Mac client tells me "(Filename) connection closed" with the error shown below:

Steps to replicate it (hint: details matter!):

I just put the file in my local client sync folder; this works for other (smaller) files.

Log entries

Nextcloud

{"reqId":"HFmj7IrHStAjmxJPlo6j","level":3,"time":"2025-03-23T17:03:16+00:00","remoteAddr":"127.0.0.1","user":"--","app":"no app in context","method":"GET","url":"/login","message":"Could not decrypt or decode encrypted session data","userAgent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36","version":"30.0.6.2","exception":{"Exception":"Exception","Message":"HMAC does not match.","Code":0,"Trace":[{"file":"/var/www/html/lib/private/Security/Crypto.php","line":98,"function":"decryptWithoutSecret","class":"OC\\Security\\Crypto","type":"->","args":["*** sensitive parameters replaced ***"]},{"file":"/var/www/html/lib/private/Session/CryptoSessionData.php","line":70,"function":"decrypt","class":"OC\\Security\\Crypto","type":"->","args":["*** sensitive parameters replaced ***"]},{"file":"/var/www/html/lib/private/Session/CryptoSessionData.php","line":47,"function":"initializeSession","class":"OC\\Session\\CryptoSessionData","type":"->","args":[]},{"file":"/var/www/html/lib/private/Session/CryptoWrapper.php","line":94,"function":"__construct","class":"OC\\Session\\CryptoSessionData","type":"->","args":[{"__class__":"OC\\Session\\Internal"},{"__class__":"OC\\Security\\Crypto"},"*** sensitive parameters replaced ***"]},{"file":"/var/www/html/lib/base.php","line":402,"function":"wrapSession","class":"OC\\Session\\CryptoWrapper","type":"->","args":[{"__class__":"OC\\Session\\Internal"}]},{"file":"/var/www/html/lib/base.php","line":664,"function":"initSession","class":"OC","type":"::","args":[]},{"file":"/var/www/html/lib/base.php","line":1134,"function":"init","class":"OC","type":"::","args":[]},{"file":"/var/www/html/index.php","line":22,"args":["/var/www/html/lib/base.php"],"function":"require_once"}],"File":"/var/www/html/lib/private/Security/Crypto.php","Line":162,"message":"Could not decrypt or decode encrypted session data","exception":[],"CustomMessage":"Could not decrypt or decode encrypted session data"},"id":"67e14956e39e5"}

This error is the only one in the remote logs and seems totally unrelated; on top of that, I do NOT have encryption active, so I guess it has nothing to do with this issue.


I am fairly sure this is “just” a setting on the server that I have to change. Since it says the connection was dropped, do I have to alter some setting to keep the server connection open for longer?
Can anyone please guide me in what and how to change?

Thank you!

Hi,
I’ve dealt with a very similar issue using Nextcloud AIO, so here are a few suggestions and questions that might help identify the root cause:

  1. What are the hardware specs of your server?

    • Specifically: CPU, total RAM, and disk type (HDD, SSD, NVMe)?
    • Syncing large files (2 GB and above) through the desktop client can sometimes cause RAM exhaustion and lead to OOM kills — especially if you’re running AIO inside a virtual machine with limited memory.
  2. Are you running AIO inside a virtual machine?

    • If so, what virtualization platform are you using (e.g., Proxmox, VirtualBox…)?
    • In my case (Proxmox), I had to adjust ballooning settings and assign more fixed RAM and SWAP to the VM to prevent crashes during large file transfers.
  3. You mentioned using the Mac desktop client — what version of macOS and Nextcloud client are you using?

    • The macOS client has known issues with large file uploads, especially if the system goes into sleep mode or if there’s a brief connectivity interruption.
    • You can check the client logs and look for repeated upload or connection errors during the time the issue occurred.
  4. Do you have access to any logs from the server side?

    • For example:
      docker logs nextcloud-aio-nextcloud  
      journalctl -xe  
      dmesg  
      
    • These logs might show memory-related errors or failed uploads due to disk or proxy limitations.
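
A minimal sketch of what I usually run right after a failed upload, to spot OOM kills or container-level errors (assuming the default AIO container name nextcloud-aio-nextcloud; adjust names and time windows to your setup):

  # Kernel messages: did the OOM killer terminate any process during the upload?
  dmesg | grep -i -E 'out of memory|oom-killer|killed process' | tail -n 20

  # System journal around the time of the failure
  journalctl --since "30 min ago" --no-pager | grep -i -E 'oom|error|fail' | tail -n 50

  # Last log lines of the Nextcloud container itself
  docker logs --tail 100 nextcloud-aio-nextcloud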

What helped in my case (Nextcloud AIO, 12-core CPU, 16 GB RAM):

  • Increasing RAM + enabling swap in the VM
  • Fine-tuning PHP upload limits inside AIO (via admin panel)
  • Ensuring the reverse proxy (e.g., NGINX) was not limiting upload size (client_max_body_size, etc.)
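
On the reverse proxy point: if there is a separate NGINX in front of AIO, this is roughly how I verified it was not capping uploads (paths are the usual Debian/Ubuntu defaults and may differ on your system):

  # Look for an upload size cap anywhere in the NGINX configuration
  grep -R -n 'client_max_body_size' /etc/nginx/

  # After raising it (client_max_body_size 0; disables the limit entirely),
  # validate the config and reload NGINX
  nginx -t && systemctl reload nginx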

Can you share more details about your server hardware and environment?
AIO is great, but it’s definitely sensitive to memory and disk performance under heavier workloads.

Hi, thanks for the reply.

What are the hardware specs of your server?

5 cores / 10 threads Intel Xeon, 10 GB RAM, 100 GB on-board SSD drive.
A VPS (so, yes, not bare metal) on LXD.

You mentioned using the Mac desktop client — what version of macOS and Nextcloud client are you using?

Up to date (the Mac updates these automatically whenever updates are available). However, it gives me an error when trying to fetch updates right now!


This is what I have installed.

Sleep is not the issue; I am actively working on the machine while that happens.

Do you have access to any logs from the server side?

Tons of these:
Mar 24 13:54:11 inubes.app udisksd[3580446]: Error statting none: No such file or directory

Some of these:
Mar 24 13:53:11 inubes.app sshd[4122064]: Connection reset by authenticating user admin 92.255.85.107 port 53068 [preauth]

When the upload starts, I additionally get these:

Mar 24 14:00:09 inubes.app systemd[1]: run-docker-runtime\x2drunc-moby-9b92a5cf568d95a36b4bd438c849cd2bc8389e11c2d701eb84b49df792b3eba3-runc.j6PC1B.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit run-docker-runtime\x2drunc-moby-9b92a5cf568d95a36b4bd438c849cd2bc8389e11c2d701eb84b49df792b3eba3-runc.j6PC1B.mount has successfully entered the 'dead' state.
  • Fine-tuning PHP upload limits inside AIO (via admin panel)

Where do I do that? I do not appear to have such a setting in the admin panel. I found this: I can't find where is the upload max file size on nextcloud - #3 by Horia_Costache. I just want to confirm that this is what I have to do? (So, as a constant, not an admin setting?)
Also, I already have "Upload max size: 16 GB"; that should be enough for a 3 GB file?
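
I guess I could also check the effective PHP limits inside the AIO Nextcloud container directly, with something like this (container name taken from your earlier reply; I am assuming these are the relevant php.ini values):

  # Show the PHP upload-related limits the AIO Nextcloud container actually uses
  docker exec nextcloud-aio-nextcloud php -i | grep -E 'upload_max_filesize|post_max_size|max_execution_time'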

As for the NGINX proxy: since this is AIO, I do not have anything but AIO handling everything. (I used the convenience script from the section "Run the command below in order to start the container on Linux and without a web server or reverse proxy (like Apache, Nginx, Caddy, Cloudflare Tunnel and else) already in place".)

1. LXD and Its Limitations

You mentioned that you’re running Nextcloud AIO in a VPS container managed by LXD. Just to clarify: LXD is a system container manager developed by Canonical, acting as a management layer on top of LXC.
While LXD offers convenient management and isolation features, it does not provide a full virtual machine environment; its containers run on the host’s shared kernel, which introduces some technical limitations that can affect applications like Nextcloud.

Common limitations with LXD containers include:

  • restricted or limited access to features like fuse (e.g., for WebDAV mounts),
  • potential disk I/O issues during large file transfers (latency, drops, or instability),
  • resource limitations through cgroups (RAM, CPU, number of processes, I/O bandwidth; see the sketch after this list for a quick way to check these from inside the container),
  • and generally lower reliability for heavy or long-running operations like syncing large files.
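
If you have a shell inside that LXD container, a quick sanity check of what is actually enforced could look like this (the memory.max path assumes cgroup v2; on cgroup v1 it is memory/memory.limit_in_bytes instead, and the value may read "max" even when the host applies a limit one level up):

  # Memory limit enforced on the container ("max" means no limit at this level)
  cat /sys/fs/cgroup/memory.max

  # CPUs and memory as seen from inside the container (lxcfs usually reflects the real limits here)
  nproc
  free -h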

2. My Setup – Proxmox VM with Ubuntu Server 24.04

I’m running Nextcloud AIO inside a virtual machine on Proxmox, using a full Ubuntu Server 24.04 installation (not a container, not LXD).
This is a proper VM with its own kernel, full hardware access, and fully configurable resources (RAM, CPU, swap, disk I/O).

In this setup, I’ve tested syncing large files (10+ GB) without any issues — no crashes, no connection drops, no client-side failures.


3. macOS Desktop Client – Outdated Version

You mentioned you’re using the Nextcloud desktop client version 3.14.0 on macOS.
As of today, the official Nextcloud website offers version 3.16.2 for macOS, so you’re running an older version that might still contain known bugs or limitations.

The macOS desktop client is known to be less stable when handling large files, especially when minor connection hiccups or timeouts occur.
The 499 Client Closed Request error you’re seeing typically means that the client terminated the connection prematurely, which is a common symptom of such issues.


4. Suggestions to Help Isolate the Problem

To narrow things down and identify the real cause, I recommend:

  • Try syncing the same file from a different OS, e.g., Linux or Windows, using the latest Nextcloud client.
  • If possible, run AIO in a full VM environment (KVM, VirtualBox, or Proxmox) with a proper Linux server (not a container with shared kernel).
  • Monitor system resources during the sync (e.g., with htop, iotop, and memory usage).
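
For the monitoring point in the last bullet, even something this simple, left running during the upload, usually tells a lot (docker stats is per container, vmstat is whole system; both are standard tools on a Docker host):

  # Live CPU / memory / network / I/O of the Nextcloud container while the upload runs
  docker stats nextcloud-aio-nextcloud

  # Whole-system memory, swap and I/O, sampled every 5 seconds
  vmstat 5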

From my perspective, the issue most likely lies in the combination of LXD container limitations and an outdated version of the macOS desktop client, which already has known problems with large file transfers.

Nextcloud AIO is a powerful solution, but to run reliably — especially for large file operations — it requires a stable and fully controlled environment, ideally a full Linux server or virtual machine, not a shared-host container setup.

You can install the newest version of the desktop client (3.16.2) and try again.

I updated the local app and things look different now:
It gives a much more realistic estimate of the upload time left (2.5 hours, instead of a few seconds for half the file and then a crash).

So, I will come back here in 2.5 hours to confirm or deny success :slight_smile:


About LXD: I think it was the only option available at that time on the VPS, although… weird, now that you mention it, I thought they already offered KVM at the time, yet clearly, it is not what I used :frowning:
Not sure Proxmox will change anything in that regard, as this is one of the few things I do not run locally - it is a hosted VPS that I pay for, because of the relatively hungry requirements, and I use it for video calls a lot, so I cannot afford the speed issues of local self-hosting (I use it for calling with clients etc.)

I will see if I can find the time to re-install it all using KVM virtualization instead, which, as far as I understand, would be less problematic(?)

I understand that stability and connection speed are crucial when working with clients. However, I’d like to offer my perspective – I host everything myself at home, on dedicated hardware, and I’ve had great experience with it. Here are a few reasons why I chose this route:

  1. Proxmox Server at Home
    I run a Proxmox server with 5 Ubuntu Server VMs. This setup gives me full flexibility and control. My services are separated and include:

    • Bitwarden (self-hosted password manager)
    • Plex Server (for multimedia streaming)
    • WordPress (personal and project websites)
    • Home Assistant (home automation)
    • Nextcloud AIO (file sync and sharing)
    • Discourse (community forum)
    • Stirling PDF (PDF management)
    • Reubah (self-hosted image processor)
    • Calibre Web (eBook library)
    • Audiobookshelf (audiobook management and playback)
    • other self-hosted apps
  2. Much More Cost-Effective
    If I had to pay for each of these as a separate VPS, the cost would add up very quickly. The return on investment for my own hardware shortens significantly with each additional self-hosted service.

  3. Lower Entry Cost Than It Seems
    Hardware today isn’t that expensive. A 12-core CPU with 16 GB of RAM and decent storage can cost less than an average MacBook, while offering far more value in terms of what it can do.

  4. Fast and Stable Internet Connection
    I have a 500 Mbit/s upload speed at home, which is more than enough even for syncing large files via Nextcloud or streaming media remotely. That was one of the key enablers for me.

  5. Full Control and Customization
    I’m in charge of everything – software versions, performance tuning, backups, monitoring… If something breaks, I can investigate and fix it directly, without relying on external support or provider limitations.

Of course, as I mentioned earlier, there are real-world constraints – like the availability of fast internet – which might make home hosting impractical for some. But in my case, with a stable home setup, I find self-hosting to be not only more flexible but also significantly cheaper in the long run compared to renting high-performance VPS infrastructure.

That is what is missing in many parts of the world :frowning:

Even with the best offer down here (and that is Starlink) I reach only about 30 Mbit/s. That is enough for anything that is not required to be snappy as hell (in fact, I run several websites from here, amongst other services).
But for video calls, that is just not enough (it is just enough to get an OK connection myself, but also serving the other side's traffic wouldn't work - I think).
What I mean is: if the video call is hosted on a VPS with adequate speeds, I only need my connection to send my voice/video to that VPS, not also to relay the other side's voice/video. If I were to self-host this, the bandwidth usage would double :expressionless:

Anyway - the data synced! It was the outdated local desktop app.
Problem solved!
