Sabre\DAV\Exception with datadir on external storage

Hi,

Nextcloud version: AIO v6.2.1 with Nextcloud 27.0.1 RC1
Server OS: Ubuntu 22.04
Browser: Chrome
Encryption: either gocryptfs without the server-side encryption module, or the server-side encryption module without gocryptfs; it also happens without encryption, but a little less often
Nextcloud data dir: on external CIFS storage
Nextcloud memory limit: 1024M
Nextcloud upload limit: 16G
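
For reference, the datadir and the limits above were set when starting the AIO mastercontainer, roughly like this (a sketch using the AIO env vars NEXTCLOUD_DATADIR, NEXTCLOUD_MEMORY_LIMIT and NEXTCLOUD_UPLOAD_LIMIT; adjust paths and ports to your setup):

    sudo docker run \
      --init --sig-proxy=false --name nextcloud-aio-mastercontainer \
      --restart always \
      --publish 80:80 --publish 8080:8080 --publish 8443:8443 \
      --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
      --volume /var/run/docker.sock:/var/run/docker.sock:ro \
      --env NEXTCLOUD_DATADIR="/mnt/nextcloud_ext" \
      --env NEXTCLOUD_MEMORY_LIMIT=1024M \
      --env NEXTCLOUD_UPLOAD_LIMIT=16G \
      nextcloud/all-in-one:latest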

Error:

Sabre\DAV\Exception\BadRequest: Expected filesize of 35358091 bytes but read (from Nextcloud client) and wrote (to Nextcloud storage) 14827520 bytes. Could either be a network problem on the sending side or a problem writing to the storage on the server side.

Steps to reproduce:

Enable encryption, have the data directory on an external CIFS share, and upload big files.


I get this error on Nextcloud AIO 27.0.1 and 27.0.0 while using the Nextcloud encryption module, or gocryptfs without the encryption module, with the Nextcloud data dir on an external CIFS share.

Files smaller than 100 MB are fine, but beyond a couple of GB I always get the error above.

I use the browser uploader (Chrome), and uploads work fine without any encryption.

Any idea why?

There is already a GitHub issue for this, but it doesn't give a solution.

Hi, are you running AIO behind a reverse proxy?

No, it's running on a clean server as delivered. I only modified the Nextcloud datadir and the memory limit.

Could network speed fluctuations between the server and the external storage be a cause? I never had any dropouts and the connection itself is perfectly stable, but there might be some variation in upload/download/read/write speed.

Possibly the CIFS share has a timeout that is not high enough? I would recommend a timeout of 3600s.

Also, do you maybe use Cloudflare Proxy?

No Cloudflare, and the timeout is already 3600.

Ah wait, you mean on the CIFS share? I never set this. How can I do this?

/etc/fstab

    //xxx.your-storagebox.de/xxx-sub1 /mnt/nextcloud_ext cifs credentials=/root/.smbcredentials,iocharset=utf8,rw,_netdev,uid=33,gid=0,file_mode=0660,dir_mode=0770 0 0

Yes, you would have to google this, as I don't know how to set the timeout either.

AFAIK there is no timeout=n option for cifs, but there is echo_interval=n, which defaults to 60 seconds and results in a timeout of 120 seconds:

       echo_interval=n
              sets the interval at which echo requests are sent to the
              server on an idling connection. This setting also affects
              the time required for a connection to an unresponsive
              server to timeout. Here n is the echo interval in seconds.
              The reconnection happens at twice the value of the
              echo_interval set for an unresponsive server. If this
              option is not given then the default value of 60 seconds
              is used. The minimum tunable value is 1 second and maximum
              can go up to 600 seconds.
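
So with the maximum echo_interval of 600 seconds you would get a reconnection timeout of 1200 seconds. Applied to your fstab line from above, that would look roughly like this (untested sketch, adjust to your setup):

    //xxx.your-storagebox.de/xxx-sub1 /mnt/nextcloud_ext cifs credentials=/root/.smbcredentials,iocharset=utf8,rw,_netdev,uid=33,gid=0,file_mode=0660,dir_mode=0770,echo_interval=600 0 0

    # remount so the new option takes effect
    sudo umount /mnt/nextcloud_ext && sudo mount /mnt/nextcloud_ext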

The first thing I did was set cache=none, as recommended by Hetzner for large files, though I don't really understand why this should be a solution, since the tmp files are only a few MB. That said, I had no problems during the last 3 attempts; I will let you know if that changes.
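
For reference, my current fstab line with cache=none added looks roughly like this (sketch of my setup):

    //xxx.your-storagebox.de/xxx-sub1 /mnt/nextcloud_ext cifs credentials=/root/.smbcredentials,iocharset=utf8,rw,_netdev,uid=33,gid=0,file_mode=0660,dir_mode=0770,cache=none 0 0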


It did not solve the problem, but it made it less frequent. I got the same error again, and unfortunately I now don't even get any log results anymore, only “Unknown Error” in the web panel, with no information in the logging.

Edit: it's still logging events, just not this specific unknown error.

Meanwhile I also got a few more of these: Sabre\DAV\Exception\BadRequest: Expected filesize of 10485760 bytes but read (from Nextcloud client) and wrote (to Nextcloud storage) 3153920 bytes. Could either be a network problem on the sending side or a problem writing to the storage on the server side.

I also increased the timeout.
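
To double-check which mount options are actually in effect after remounting, the live options can be inspected like this:

    grep /mnt/nextcloud_ext /proc/mounts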