Cannot upload files of 40MB or more: fails at 100%

My usual use of Nextcloud is:

  • auto upload photos/videos from camera on Android
  • at the end of the month, use Solid Explorer to find all the missed files (usually about 10%), select them all, and upload them

Server version is 15.0.5. Android app version is 3.6.0 RC3, but I’m not using it for this.

I can’t use Nextcloud’s app to do this, because it duplicates existing files instead of skipping them, and doesn’t really have good enough search/select functionality. So I need a proper file manager.

The trouble is, when I upload most videos (which are usually 40MB or more), they get to exactly 100% and then fail. Solid Explorer doesn’t give me a lot of information about this. The only thing I can find in the logs is this:

[webdav] Debug: Sabre\DAV\Exception\NotFound: File with name Shared Photos/2019/02/VID_20190217_150748.mp4 could not be located at <<closure>>

 0. /path/to/nc/3rdparty/sabre/dav/lib/DAV/CorePlugin.php line 81
    getNodeForPath("Shared Photos/2 ... 4")
 1. <<closure>>
    httpGet(Sabre\HTTP\Reque ... "}, Sabre\HTTP\Response {})
 2. /path/to/nc/3rdparty/sabre/event/lib/EventEmitterTrait.php line 105
    call_user_func_array([Sabre\DAV\CorePlugin {},"httpGet"], [Sabre\HTTP\Requ ... }])
 3. /path/to/nc/3rdparty/sabre/dav/lib/DAV/Server.php line 479
    emit("method:GET", [Sabre\HTTP\Requ ... }])
 4. /path/to/nc/3rdparty/sabre/dav/lib/DAV/CorePlugin.php line 253
    invokeMethod(Sabre\HTTP\Reque ... "}, Sabre\HTTP\Response {}, false)
 5. <<closure>>
    httpHead(Sabre\HTTP\Reque ... "}, Sabre\HTTP\Response {})
 6. /path/to/nc/3rdparty/sabre/event/lib/EventEmitterTrait.php line 105
    call_user_func_array([Sabre\DAV\CorePlugin {},"httpHead"], [Sabre\HTTP\Requ ... }])
 7. /path/to/nc/3rdparty/sabre/dav/lib/DAV/Server.php line 479
    emit("method:HEAD", [Sabre\HTTP\Requ ... }])
 8. /path/to/nc/3rdparty/sabre/dav/lib/DAV/Server.php line 254
    invokeMethod(Sabre\HTTP\Reque ... "}, Sabre\HTTP\Response {})
 9. /path/to/nc/apps/dav/appinfo/v1/webdav.php line 80
    exec()
10. /path/to/nc/remote.php line 163
    require_once("/path/to/ ... p")

HEAD /remote.php/webdav/Shared%20Photos/2019/02/VID_20190217_150748.mp4
from 27.33.9.230 by theuser at 2019-04-06T04:34:31+00:00

So my question is, how do I actually debug this? NC’s logs show nothing other than the above, and Solid Explorer reports nothing beyond “SSL error”. It’s completely reproducible: it fails for the same set of files every time I try to upload them. It doesn’t happen if I upload them manually via the Nextcloud app, but that’s not practical, because I don’t know which ones I need to upload until they fail in Solid Explorer. And SE uploads smaller files reliably. How do I get more info?
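
For anyone who wants to suggest something: one thing I can do is take Solid Explorer out of the picture and reproduce the transfer as a raw WebDAV PUT with wire-level logging switched on. A minimal sketch of what I mean, in Python with requests, where the server URL, credentials and local filename are all placeholders:

    # Reproduce the upload as a plain WebDAV PUT with full HTTP debug
    # output, to see exactly where the transfer dies.
    # Server URL, username and password below are placeholders.
    import http.client
    import logging

    import requests

    # Wire-level logging for the underlying HTTP connection.
    http.client.HTTPConnection.debuglevel = 1
    logging.basicConfig(level=logging.DEBUG)

    url = ("https://cloud.example.com/remote.php/webdav/"
           "Shared%20Photos/2019/02/VID_20190217_150748.mp4")

    with open("VID_20190217_150748.mp4", "rb") as f:
        resp = requests.put(url, data=f, auth=("theuser", "app-password"))

    print(resp.status_code, resp.reason)

If the connection is being torn down mid-transfer (which is often what a vague “SSL error” boils down to), this should show exactly how far the request got before dying, instead of the client swallowing the details.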

(I mean, ultimately what I want is any app or workflow that can take a folder of files or search results and get them onto my NC server: no questions, no flakiness, just retry until they’re on the server. But I don’t think that exists yet.)
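
To make that concrete, here’s a hypothetical sketch of the behaviour I mean, again in Python against the WebDAV endpoint, with all URLs, credentials and paths as placeholders: HEAD each file first so anything already on the server is skipped instead of duplicated, PUT the rest, and keep retrying failures until everything is up.

    # Hypothetical sketch: push a folder of files to Nextcloud's WebDAV
    # endpoint, skip files the server already has, and retry failures
    # until everything is uploaded. All names below are placeholders.
    import time
    from pathlib import Path

    import requests

    BASE = "https://cloud.example.com/remote.php/webdav/Shared%20Photos/2019/02"
    AUTH = ("theuser", "app-password")

    def upload_all(folder: Path, max_rounds: int = 10) -> None:
        pending = sorted(p for p in folder.iterdir() if p.is_file())
        for _ in range(max_rounds):
            still_pending = []
            for path in pending:
                url = f"{BASE}/{path.name}"
                try:
                    # Skip files the server already has (no duplicates).
                    if requests.head(url, auth=AUTH, timeout=30).status_code == 200:
                        continue
                    with path.open("rb") as f:
                        resp = requests.put(url, data=f, auth=AUTH, timeout=300)
                    resp.raise_for_status()
                except requests.RequestException as exc:
                    print(f"will retry {path.name}: {exc}")
                    still_pending.append(path)
            if not still_pending:
                return
            pending = still_pending
            time.sleep(5)  # back off briefly before the next round
        raise RuntimeError(f"gave up with {len(pending)} files unsent")

    upload_all(Path("/sdcard/DCIM/Camera"))

The HEAD check is the part the Nextcloud app is missing for me: it’s what makes re-running the whole thing after a failure safe.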

Hi detly,
what about the max upload size?
What if you raise it to the maximum of 2 GB?
I have no problem here syncing 40-70 GB via Total Commander.
(That’s no solution for avoiding duplicates, though!)
Greetings Soyo

Check /etc/php/7.X/apache2/php.ini
upload_max_filesize = 50M (set your filesize here)
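
The neighbouring directives in the same file matter too. Roughly, with illustrative values only:

    ; Illustrative values. Form uploads are capped by the smaller of these
    ; two, so post_max_size should be at least as large as upload_max_filesize.
    upload_max_filesize = 2G
    post_max_size = 2G
    ; Long uploads can also run into the script execution limit.
    max_execution_time = 3600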

What if you raise it to the maximum of 2 GB?

It’s already at that, way above the 40-50MB size it consistently fails at. Plus I think that only affects uploads via the web interface (PHP).

Did you try the upload via Total Commander?
Uploading 737 MB (over WLAN) via the Android apps Nextcloud and Total Commander works without problems here.
In my php.ini, upload_max_filesize is only 2M.
Try 10 GB under Basic settings -> File handling in the admin settings.

BTW: My server is a Raspberry Pi, so settings may be different on your side.

Good luck … soyo

I just thought I’d post the “answer” to this riddle, even though two and a half years have passed, because I hate stumbling across a Google result for a problem I’m having and finding no resolution.

It turns out that my ISP (TPG in Australia) had a kind of… odd… way of dealing with residential account holders who were using their maximum allocation of bandwidth. Rather than, e.g., shaping, slowing, or applying QoS, they would simply drop some or all currently open connections. Presumably they expected us to resume whatever was important, and anything unimportant would simply not be reconnected. Maybe? What actually happened, though, was what literally every computer scientist and network engineer would predict: a kind of thundering herd problem, as everyone’s syncing apps tried to reconnect and sync at the same time, triggering the same mechanism, over and over again.

I don’t know why this manifested as my file manager getting to “100%” and then flaking out, but (a) when I did this on better networks, e.g. at work, it was fine, and (b) when TPG changed their approach, this stopped happening. So if this is happening to you, maybe this information will help.
