Increase max filesize & timeout in nextcloud docker container on Windows

Hello!
I’m currently running into issues uploading large folders / files to my Nextcloud instance. FYI, I’m running the latest Nextcloud image in a Docker container, with a few other containers (nginx reverse proxy, Redis, etc.) set up with Docker Compose. I apologize in advance if my question isn’t completely new, but I’m incredibly frustrated and can’t find a solution (I’m new to this whole thing, my network programming knowledge isn’t great, and I’ve never really delved into Linux).
The problem is that after a while the upload just fails, and I figured I have to increase my max file size and timeout options. Google suggests that I have to introduce PHP config values such as these:

php_value upload_max_filesize 16G
php_value post_max_size 16G
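and, since I also want to raise the timeouts, presumably values like these as well (max_input_time and max_execution_time are the PHP directives I found; the values are just my guess):

php_value max_input_time 3600
php_value max_execution_time 3600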

If I understand it correctly, I have to create a file named php.ini and write all my PHP config values in there. I’ve found the path /usr/local/etc/php/php.ini for that in other forum comments. Now here comes the annoying (and probably dumb on my part) problem: how do I create that file? In Docker I can follow that path (in the Nextcloud container → files), but I can’t create files there, only modify existing ones. Since it’s a Linux path, I couldn’t find the file on my Windows computer. What does that path map to on a Windows machine? I also opened a Linux shell (that’s using WSL, right?) and tried to navigate to that location, but the shell didn’t know the path. What am I missing here? I feel like I don’t understand at all how Docker and WSL work. Where do I put this file?

Hello,

You’ll need to edit the nginx configuration file to raise the default of 512M to your new limit; consider also increasing the buffer size so it can handle large files. Where that file lives depends on your nginx install, but it should be editable with an editor like nano. After the file is modified, restart nginx.

This default could be what is restricting your large file uploads.
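As a rough sketch, the relevant directives in the nginx http or server block usually look something like this (the values are only examples; client_max_body_size is the upload limit, the other two affect buffering and timeouts for large request bodies):

client_max_body_size 16G;
client_body_buffer_size 512k;
client_body_timeout 3600s;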

I’m relatively new to NC, and figuring out where things are is sometimes among the most confusing parts - and there are different installation types, components, etc.

Nextcloud has documented how to upload larger files, but IMHO it does not always provide the exact location of the files mentioned, which can cause some configuration delays.

Uploading big files > 512MB — Nextcloud Administration Manual (latest)

With Docker, did you install NC via the AIO installer or another method?

Generally the default NC installation (AIO) creates a number of required containers on a single bridged network and publishes specific ports to the host NIC. The public-facing one is generally port 443 on the Apache container (nextcloud-aio-apache in my environment), which is running a Caddy proxy if you installed Collabora as part of the install.

If you are running nginx, it’s going to be accepting port 443 for inbound traffic; likely your Apache instance is publishing port 11000, which nginx is forwarding traffic to.

The other ports from the NC mastercontainer, etc. are the administration side of the house, while Talk may have a port exposed for client access as well.

You should be publishing 443 (nginx) and 3478 (Talk) to the internet, with port 8080 facing your “LAN side” for management of the NC environment.
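You can check what your containers actually publish to the host with something like:

docker ps --format "table {{.Names}}\t{{.Ports}}"

which lists each container alongside its port mappings.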

How are you uploading? From the Web client? One of the official clients? A generic WebDAV client? Etc.

Also, which image are you using? Most of them already have these parameters reasonably set, and they also expose them via environment variables, so there is zero need to adjust the PHP config manually.
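For example, with the official nextcloud image, I believe the PHP limits can be raised directly in the Compose file via environment variables, roughly like this (the values are placeholders):

    environment:
      - PHP_UPLOAD_LIMIT=16G
      - PHP_MEMORY_LIMIT=1G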

Refer to the docs for the image you’re using. If stuck, post your Compose file. (Also, please fill out the support template).

The problem is that after a while the upload just fails, and I figured I have to increase my max file size and timeout options

What specifically happens?

And:

  • What appears in your browser inspector under the Console tab (and Network) when attempting an upload?

  • What is in your Nextcloud log? Your reverse proxy error log? Web server error log? Etc. (see the example commands below)
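An untested sketch for pulling those logs with Docker (container names are placeholders, and the Nextcloud log path assumes the default data directory):

docker logs <reverse-proxy-container> --tail 100
docker logs <nextcloud-container> --tail 100
docker exec <nextcloud-container> tail -n 100 /var/www/html/data/nextcloud.log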

I’m uploading from the browser, Firefox in my case, with my Nextcloud domain opened. I have now also downloaded the Desktop client, but I’m not sure if I’m missing something: can you also upload something from the client without syncing it with the server? I have a massive (500 GB) folder that I would like to upload, and I don’t have the space on my local SSD to copy it into the sync folder. I don’t really want to risk cutting and pasting it, though.

Anyway, I’m using the nextcloud:latest Docker image. Here is my Docker Compose file:

---
version: '3'
name: DockerServer 
services:
  nextcloud:
    image: nextcloud # I named the image "nextcloud" beforehand, but it's just nextcloud:latest
    container_name: nextcloud
    restart: unless-stopped
    networks: 
      - cloud
    depends_on:
      - nextclouddb
      - redis
    ports:
      - 8081:80
    volumes:
      - ./nextcloud/html:/var/www/html
      - ./nextcloud/custom_apps:/var/www/html/custom_apps
      - ./nextcloud/config:/var/www/html/config
      - ./nextcloud/data:/var/www/html/data
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=****
      - MYSQL_HOST=nextclouddb
      - REDIS_HOST=redis

  nextclouddb:
    image: mariadb
    container_name: nextcloud-db
    restart: unless-stopped
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    networks: 
      - cloud
    volumes:
      - ./nextclouddb:/var/lib/mysql
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - MYSQL_RANDOM_ROOT_PASSWORD=true
      - MYSQL_PASSWORD=****
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      
  collabora:
    image: collabora/code
    container_name: collabora
    restart: unless-stopped
    networks: 
      - cloud
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - password=****
      - username=nextcloud
      - domain=*****************
      - extra_params=--o:ssl.enable=true
    ports:
      - 9980:9980

  redis:
    image: redis:alpine
    container_name: redis
    volumes:
      - ./redis:/data/redis  
    networks: 
      - cloud
  
  nginx-proxy:
    image: 'jc21/nginx-proxy-manager:latest'
    container_name: nginx-proxy
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./nginx/data:/data
      - ./nginx/letsencrypt:/etc/letsencrypt

networks:
  cloud:
    name: cloud
    driver: bridge

The only config file I’ve modified is Nextcloud’s own: I just set it to read-only and added my trusted domains.

I also found the error that occurs when uploading, but I’m not sure why it happens. Contrary to what I thought, the file size limit isn’t actually the problem. The files are up to 700 MB each, and I had already uploaded 13 GB before the upload crashed. There might still be an upload timeout issue after a while; I haven’t gotten far enough to test that. Anyway, the upload crashes because the client’s RAM fills up. The machine has 16 GB, but after uploading a few GB, RAM usage reached 95%, with ~11 GB being used by Firefox. The browser console also threw an out-of-memory error. Does anybody know why this might be happening? If not, I will open a separate topic for that. I had also prepared screenshots of the browser console, but they got wiped, since my client PC completely freaks out and reboots when the upload crashes for good.

Still, I would like to come back to the original question. Is there any way to actually modify the config files outside of Docker on a Windows machine? Any way to find out how the Linux paths are mapped onto the drive? Regardless of what I want to modify, be it Nextcloud itself or Redis or nginx, I still run into that problem.
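From what I’ve read, I assume something like this should work (untested on my side), since docker exec and docker cp operate on the container regardless of the host OS:

docker exec -it nextcloud bash
docker cp ./php.ini nextcloud:/usr/local/etc/php/php.ini

The first opens a shell inside the running container; the second copies a file from the host into it. And since my Compose file bind-mounts ./nextcloud/config and the other directories, I guess those particular files should be directly editable from Windows right next to my docker-compose.yml?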