Testing Large File Synchronization with Nextcloud AIO and NGINX Proxy — June 2025 Update

Date of test: June 11, 2025
Reason for testing:
Ongoing forum reports about problems with syncing large files in Nextcloud prompted another test with the latest versions of the stack. The goal was again to verify stability and behavior when syncing a file larger than 20 GB via the official desktop client.


:test_tube: Test Setup

  • Nextcloud AIO: v11.0.0

  • Nextcloud version: 31.0.6 RC2

  • Desktop client: Linux CachyOS - version 3.16.4 from the official repo

  • Server environment:

    • Platform: Proxmox virtual machine

    • OS: Ubuntu 24.04 LTS

    • Filesystem: XFS
      (Note: based on my experience, EXT4 is not ideal for this use case; XFS handles large files significantly better.)

    • CPU: 12 cores

    • RAM: 12 GB

    • Disk: Kingston 3000 NVMe – 4 TB

    • Ballooning: enabled

      :brain: What is Ballooning?
      Ballooning is a memory management mechanism in virtualized environments that allows the hypervisor (e.g. Proxmox) to dynamically increase or decrease the amount of memory allocated to a VM.

      :white_check_mark: In this case, ballooning is an excellent safeguard against OOM (Out Of Memory) situations, which could otherwise cause service crashes when handling heavy tasks like uploading large files. (A minimal Proxmox example follows this setup list.)

  • NGINX Proxy Manager: v2.12.3

  • Network:

    • Server: 1000/500 Mbit
    • Client: 1000 Mbit Ethernet connection
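
As referenced in the ballooning note above, here is a minimal sketch of how ballooning could be enabled for a Proxmox VM from the host shell. The VM ID 101 and the 6144 MB minimum are assumptions for illustration only; adjust them to your environment. The same options are available in the Proxmox web UI under the VM's Hardware > Memory settings.

# on the Proxmox host: allow up to 12 GB for the VM and let the
# balloon driver reclaim memory down to a 6 GB minimum when needed
qm set 101 --memory 12288 --balloon 6144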

:gear: docker-compose.yml Configuration

volumes:
  nextcloud_aio_mastercontainer:
    external: true

services:
  nextcloud:
    #   image: nextcloud/all-in-one:latest
    image: ghcr.io/nextcloud-releases/all-in-one:latest
    restart: unless-stopped
    container_name: nextcloud-aio-mastercontainer
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /etc/cups/client.conf:/etc/cups/client.conf:ro
    ports:
      - 6789:8080  # Web UI of the mastercontainer
    environment:
      - APACHE_PORT=11000
      - NEXTCLOUD_MEMORY_LIMIT=4096M
      - NEXTCLOUD_ADDITIONAL_APKS=cups imagemagick

:memo: Notes:

  • The default memory_limit for Nextcloud AIO is 512 MB, which is insufficient when uploading large files.
    Therefore, the following environment variable was explicitly added:
NEXTCLOUD_MEMORY_LIMIT=4096M

This configuration greatly improves stability and reliability during large file transfers.
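
To confirm that the higher limit actually took effect inside the running Nextcloud container, a quick check along these lines should work (a sketch; nextcloud-aio-nextcloud is the usual AIO container name, but verify it with docker ps on your host):

# list the running AIO containers to confirm the exact name
sudo docker ps --format '{{.Names}}'

# print the PHP memory_limit effective inside the Nextcloud container
sudo docker exec nextcloud-aio-nextcloud php -r 'echo ini_get("memory_limit"), PHP_EOL;'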

  • The setting:
NEXTCLOUD_ADDITIONAL_APKS=cups imagemagick

is used to install additional packages inside the Nextcloud AIO container:

  • cups: for enabling printing support from within Nextcloud (relevant if using print-related apps).
  • imagemagick: for extended image processing capabilities, e.g. advanced thumbnail generation and image manipulation.

:backhand_index_pointing_right: This setting does not affect large file synchronization in any way — it is included here because this is the current production configuration of the instance used for testing.


:globe_with_meridians: NGINX Proxy Settings (via NGINX Proxy Manager)

In the Advanced tab for the domain/subdomain:

client_body_buffer_size 512k;
proxy_read_timeout 86400s;
client_max_body_size 0;

:pushpin: Important:
The setting client_max_body_size 0; removes any upload size limit imposed by the reverse proxy, which is essential for handling large file uploads.
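
For anyone not using NGINX Proxy Manager, a rough sketch of where the same three directives would live in a plain nginx server block is shown below. The server name, certificate paths and upstream address are assumptions (port 11000 matches APACHE_PORT from the compose file above), and this is not a complete AIO reverse proxy configuration:

server {
    listen 443 ssl;
    server_name cloud.example.com;                     # assumption: your Nextcloud domain

    ssl_certificate     /etc/ssl/certs/fullchain.pem;  # assumption: your certificate paths
    ssl_certificate_key /etc/ssl/private/privkey.pem;

    client_max_body_size 0;             # no upload size limit at the proxy
    client_body_buffer_size 512k;
    proxy_read_timeout 86400s;          # keep very long transfers alive

    location / {
        proxy_pass http://192.168.1.10:11000;          # assumption: AIO host IP + APACHE_PORT
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}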

Screenshots:

(Screenshot: NGINX Proxy Manager host settings.) The Forward Hostname / IP field is set to the IP address of the Nextcloud AIO server.


:receipt: Test File

  • Name: saving.private.ryan.1080p.mkv
  • Size: 20.98 GB

:stopwatch: Sync Process

  1. Preparation phase:

    • After triggering the sync, it takes a few minutes (3-4) before the upload begins.
    • This delay is due to hashing, chunk preparation, and the initial comparison with the server (see the chunking note after this list).
  2. Transfer phase:

    • Upload duration: ~7 minutes
  3. Finalization phase:

    • Once uploaded, server-side finalization (e.g. database write and file verification) took approximately 30 seconds.
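
Regarding the chunking mentioned in the preparation phase: the desktop client exposes its chunk sizes in nextcloud.cfg. The keys below come from the client documentation, but treat the exact names and defaults as something to verify against the docs for your client version; the numbers shown are illustrative, not recommendations.

[General]
; illustrative values only - check the desktop client docs for your version
chunkSize=10000000
maxChunkSize=1000000000
targetChunkUploadDuration=60000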

:white_check_mark: Result

The 20.98 GB file was successfully synchronized without errors or interruptions.
Again, the combination of an increased memory limit and the XFS filesystem proved to be a reliable solution for syncing very large files in a Nextcloud AIO setup behind an NGINX Proxy.


:warning: Important Notes

  • It is strongly advised to always review the official release notes and issue tracker on GitHub — especially when working with large files or complex self-hosted environments (NGINX, VPN, encrypted storage, ZFS, etc.).
  • Using XFS and enabling ballooning is strongly recommended for these scenarios based on my multi-year experience.

I don’t think ballooning is strictly necessary if a reasonable amount of RAM has already been allocated and the system is otherwise well-configured. That said, it can be useful in terms of raw performance optimization, particularly if you want to tune PHP and the database to make more aggressive use of memory. This approach can of course bring performance benefits, but imho it’s more of a brute-force tactic than a surgical fix. :wink:

When it comes to the file system, its impact on performance in the context of Nextcloud should be minimal. XFS can offer certain advantages, particularly when working with large files or very large storage volumes, but these benefits are usually marginal for typical Nextcloud workloads, especially in home or small-office environments, and also for non-typical workloads like hosting Linux ISOs. :wink:

Even with EXT4, you shouldn’t encounter issues like upload failures or out-of-memory situations, even when uploading very large files, unless there’s a deeper issue at play. In such cases, it’s probably better to address those root causes rather than switching file systems.

Thanks for your comment. I fully understand your point, and I actually agree in principle.

Of course, ballooning is not strictly required if the VM is configured properly and has enough physical RAM allocated. However, I want to add a bit of context on why I specifically mentioned it in my post:

For almost a full year I was fighting with OOM kills when trying to upload large files (> 2–4 GB), where the entire Nextcloud VM would crash hard during the transfer. This happened even though the VM had 12 GB of RAM assigned — but Proxmox ballooning was disabled at that time. I also tried various cron scripts to periodically free memory, but those were unreliable and didn’t resolve the issue.

It was a very frustrating situation because I couldn’t find any clear guidance on this back then — many forum posts were inconclusive or unrelated. Enabling ballooning in Proxmox was the one change that stabilized the situation immediately in combination with tuning NEXTCLOUD_MEMORY_LIMIT. Since then, I have had zero OOM kills and all large file sync tests have been stable.

I completely agree that this is not necessarily the only solution — there are definitely more “surgical” ways to tune memory management, PHP, and DB settings — but the goal of my article was to document a practical configuration that works well in production, in case someone is looking for a proven recipe and wants to avoid months of trial and error (as I had to go through).

Same goes for XFS — it is absolutely possible to use EXT4 successfully, but in my testing (and based on some others’ reports), XFS handles very large files and large storage volumes a bit more gracefully — again, this is simply based on personal experience.

In short — I’m not claiming that everyone must use this exact setup, but it is one that is now stable and works reliably for me, and if this helps even one person struggling with similar problems, I’ll be happy. :slightly_smiling_face:


In this context, it would be interesting to know how much memory is used when uploading large files, and more importantly, whether RAM usage increases the longer the upload takes. Ideally, the latter shouldn’t happen, or at least, memory usage shouldn’t increase significantly during the upload process.
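
A quick way to capture that data during a test upload would be something like the sketch below (the 10-second interval is arbitrary, and the container names are the usual AIO ones, so adjust as needed):

# inside the VM: log overall memory usage every 10 seconds during the upload
while true; do date '+%H:%M:%S'; free -m | grep Mem; sleep 10; done | tee upload-memory.log

# or, per container on an AIO host:
sudo docker stats --no-stream nextcloud-aio-nextcloud nextcloud-aio-apache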

That said, enabling ballooning can of course still make sense, also to keep more RAM available for other VMs when it’s not in use. However, OOM kills definitely shouldn’t happen with 12 GB of RAM, at least not unless many other users are doing memory-intensive things on the server at the same time, and ideally not even then. :wink:

That’s fair, and yeah, I guess there’s nothing wrong with using XFS. After all, it’s still the default filesystem in RHEL. :slight_smile:


Btw, I just tested this by uploading a 36 GB Linux ISO via the desktop client on my instance (these Linux ISOs are getting bigger and bigger :wink: ): Apache2, PHP-FPM, MariaDB, in a Proxmox VM with 8 GB of RAM (no ballooning), ext4 storage.

I more or less use the configs from here: Nextcloud Installationsanleitung (Apache Fast Track) - Carsten Rieger

What has made a big difference for me, by the way, is to change the PHP-FPM process manager setting from its default pm = dynamic to pm = ondemand. Before I changed that, I also had crashes during uploads and the memory was always fully utilized, even when uploading relatively small files like photos.
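
For reference, that setting lives in the PHP-FPM pool configuration. A minimal sketch, assuming PHP 8.3 on Ubuntu/Debian; the path and worker counts are illustrative, not tuned recommendations:

; /etc/php/8.3/fpm/pool.d/www.conf (path depends on your PHP version)
pm = ondemand
pm.max_children = 64            ; hard cap on simultaneous PHP workers
pm.process_idle_timeout = 10s   ; stop idle workers after 10 seconds
pm.max_requests = 500           ; recycle workers periodically

After changing it, reload the service, e.g. sudo systemctl reload php8.3-fpm (adjusted to your PHP version).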

This is how it looked while I was uploading the file mentioned above:

(htop screenshot)


Now that I’ve read your post, it might be worth considering writing a similar guide with your configuration, settings + recommendations for beginners.

You know very well that there are relatively few good complete guides.

And this is also the reason why I try to share my experiences in my guides.

I don’t think there’s much I could add to Carsten Rieger’s guides, as they’re already quite comprehensive. His main guide and installation script are based on Nginx, but the PHP-FPM and database configurations are more or less identical to those used in his Apache guide.

I personally prefer Apache because it’s the web server recommended by Nextcloud, and also because I’m more familiar with it.

For this reason, I maintain my own script, which is, let’s say, heavily inspired by Carsten Rieger’s. :wink: I’ve also added or modified a few things, borrowing elements from the Nextcloud VM and Nextcloud AIO, as well as some of my own Bash aliases and helper scripts.

That said, some of these additions, especially the extra helper scripts, aren’t necessarily universally applicable, which, along with the fact that 90% of the code is basically “borrowed” from other scripts, is also part of what’s holding me back from making my scripts public. :wink:

I think it would definitely be worth sharing — it doesn’t have to be some kind of “official package”, more like an inspiration for others.
Your own tweaks and helper scripts are exactly the kind of thing that’s often missing from the more generic guides.

And the fact that it’s based on bits and pieces from different sources?
That’s exactly how it should be — take the best parts and fine-tune them to fit your needs. I’m sure many people (including me :grinning_face_with_smiling_eyes:) would really appreciate seeing what you’ve come up with. :wink:

Yeah, but there are a few issues with my helper scripts. I use git, but I don’t actually know git, so I just commit changes, and they might break things. So if somebody installed my helper scripts right after I had committed a bad change, those helper scripts wouldn’t actually help them. :wink:

But yeah, I’ve thought a few times about publishing just the install script, or maybe just a wiki with code snippets or instructions from some of my helper scripts. But like I said, people like Carsten Rieger are way more knowledgeable than I am. And when it comes to helper scripts, @ernolf has already done a much better, and more importantly, a much more universally applicable job on that. And @DecaTec even published a book about Nextcloud. So honestly, I think pretty much everything people could possibly need is already out there.

Also, what works for me might not work for you. It was really just a coincidence that the upload worked as it did; I had never tried uploading such large files to my Nextcloud before. For that kind of thing, I use TrueNAS with ZFS and dedicated software that makes those Linux ISOs available to the devices that actually consume them. :wink:

By the way, manual or “bare metal” installations, or whatever you want to call them, seem to be on the decline. These days, people seem to focus on container technologies, as well as modern web servers and reverse proxies such as Traefik and Caddy. That’s also why I’d rather spend my time learning these tools than recycling guides that already exist. So maybe in the medium term, I might be able to get my infrastructure to a point where not every OS update feels like an adventure. Application containers offer a significant advantage in that regard. :wink:

That said, I still have a few things to learn before I migrate my instance over to Docker images or the AIO. And honestly, there’s still a chance that might never happen. :wink:

I mean, what if one or both of those projects suddenly stops being maintained or disappears altogether? Sure, I could then build and maintain my own containers, but that would almost certainly be more work than just running Nextcloud “bare metal” on a Debian or Ubuntu instance and upgrading the OS every two to four years, even if there’s a bit of friction during those upgrades.

Anyway, we’ll see what the future brings. Maybe I’ll even write a guide at some point! :wink:

I understand your point — I had similar doubts myself. But from my own experience, once I gradually moved to Docker-based setups, maintaining them turned out to be quite simple — especially if things are configured properly from the start.

As for Nextcloud AIO — it’s an official project maintained by Nextcloud itself, so there’s no real reason to worry about support disappearing anytime soon. On the contrary, the community around AIO is very active.

I’m glad I got to learn more about your approach — insights like that always help me refine my own thinking.

Anyway, I think we’re getting a bit off-topic now, so I’ll leave it here from my side.
