Files uploaded via WebDAV are 0-sized in the DB

Nextcloud version: 19.0.0
Operating system and version: Ubuntu 18.04.4 LTS (Bionic Beaver)
Apache: 2.4.29
PHP version: 7.2.24

The issue you are facing:

We use a WebDAV client (RaiDrive on Windows) to access files on Nextcloud through a drive letter (e.g. Z:). After updating from 18.0.5 to 19.0.0, problems appeared: every file copied via the mounted drive shows a size of 0. The 0 also appears in the web interface.
I opened the oc_filecache table and the size is 0 there as well for files copied via WebDAV; it is never updated. On the physical storage of the Nextcloud server the file sizes and contents are correct. The problem is probably in how the oc_filecache record is updated when the WebDAV interface is used.
The web interface does not have this bug; uploads through it work fine.
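For anyone who wants to check this on their own instance, a query along these lines confirms it (a sketch, assuming a MySQL/MariaDB backend, a database named `nextcloud` and the default `oc_` table prefix; adjust to your setup):

```bash
# Sketch: list recently changed entries that the file cache recorded with size 0.
# Assumes MySQL/MariaDB, a database named "nextcloud" and the default "oc_" prefix.
mysql -u nextcloud -p nextcloud -e "
  SELECT fileid, path, size, mtime
  FROM oc_filecache
  WHERE size = 0 AND path LIKE 'files/%'
  ORDER BY mtime DESC
  LIMIT 20;"
```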

Is there any way to fix the size update when WebDAV is used to copy files to Nextcloud?
Where in the source code would such a change need to be made?

Is this the first time you’ve seen this error?: Yes

Steps to replicate it:

  1. Run a WebDAV client and connect it to the NC instance
  2. Copy some files to NC (a command-line sketch follows below this list)
  3. The files have zero size; the 0 is also in oc_filecache
  4. The physical storage shows the proper file sizes and the content is OK
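An upload from the command line should reproduce the same behaviour (untested here; the hostname, user and password are placeholders):

```bash
# Hypothetical reproduction with curl instead of a desktop WebDAV client.
# cloud.example.com, USER and PASSWORD are placeholders for your own instance.
dd if=/dev/urandom of=testfile.bin bs=1M count=5      # create a 5 MB test file
curl -u USER:PASSWORD -T testfile.bin \
  "https://cloud.example.com/remote.php/dav/files/USER/testfile.bin"
# Afterwards the file shows 0 bytes in the web interface and in oc_filecache,
# while the copy in the data directory on disk has the correct 5 MB.
```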

The output of your Nextcloud log in Admin > Logging:
Nothing relevant. No errors appear even if the log is recreated.

The output of your config.php file in /path/to/nextcloud (make sure you remove any identifiable information!):
Nothing relevant; the configuration was not changed before or after the update.

The output of your Apache/nginx/system log in /var/log/____:
Nothing relevant. WebDAV copies the files to the server correctly, but oc_filecache is not updated properly afterwards.

Any solution?

I have exactly the same problem :confused: Is there any progress?

have you guys searched the forum for possible solutions?

like maybe this one

It is not the same; it is a different problem. The files are copied properly, but their size is not updated in the DB. After running occ files:scan the file sizes are corrected.
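For reference, the rescan is roughly this (a sketch; adjust the web server user and the Nextcloud path to your setup):

```bash
# Workaround sketch: rescan so the cached sizes are recalculated.
# Adjust the web server user (www-data) and the Nextcloud path to your setup.
cd /var/www/nextcloud
sudo -u www-data php occ files:scan USERNAME    # a single user
sudo -u www-data php occ files:scan --all       # or all users (can take hours)
```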

this is the solution. just add a cronjob running every x minutes and scanning for new files.
usually that is only necessary if you have stored your uploaded files on some external storage.
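as a rough sketch, something like this in the web server user's crontab (interval and paths are just examples):

```bash
# Rough sketch of a crontab entry (crontab -u www-data -e) that rescans
# all users every 15 minutes; interval and path are examples only.
*/15 * * * * php -f /var/www/nextcloud/occ files:scan --all >/dev/null 2>&1
```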

It is not a solution. Try scanning 4.5 TB of used storage out of 10 TB: it takes 5 hours. The cloud is too busy to run the scanner. Every couple of minutes, sometimes every few seconds, 5 to 100 files are added, and other people then need to work with those files. It is crazy to start the scanner every x minutes.
It is also impossible to monitor for new files, which are added to 300 different folders at random.


This is not a solution. Every scan causes high I/O load and can take hours depending on your content.


well, why not try the inotify app first, and then a small routine that scans only the affected folders?
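as a rough sketch of that idea, independent of any particular app, something like inotifywait (from inotify-tools) could watch the data directory and rescan only the folder where a file appeared (paths are examples; watching a huge data directory recursively has its own cost):

```bash
# Sketch: rescan only folders where a new file was written, using inotifywait.
# DATA and the Nextcloud install path are examples; adjust to your setup.
DATA=/var/www/nextcloud/data
inotifywait -m -r -e close_write --format '%w%f' "$DATA" | while read -r file; do
    rel="${file#$DATA/}"     # e.g. "alice/files/Projects/report.odt"
    sudo -u www-data php /var/www/nextcloud/occ files:scan --path="${rel%/*}"
done
```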

Only affected folders? Anything uploaded via WebDAV has this problem, and if you have a lot of end users it’s going to be completely random.

You would have to set a cronjob to run a file scan every couple of minutes and let all of your end users know what’s going on.

I personally have over 40 TB of content, so that’s not going to end well; it will take a very long time to scan.

There is no winning here. It needs to be fixed :confused:


i dunno if “it” can be fixed. But I know devs would be happy if you’d chime in to help them… You’re most welcome there.

Where do I create a bounty for nextcloud bugs? Unreal that this bug made it into production on NC19 but mistakes do happen.

Don't use bountysource anymore!

Haha the one time I was going to use it. I’ll wait for a fix patiently but yeah the whole file scan thing isn’t a solution for this IMO.

Thank you for taking your time replying though! Appreciated.


Yesterday on the test server I updated the system to the daily version, and it looks like the problem has been solved.
Today I will test it as soon as I open my eyes a bit wider :slight_smile:

maybe you wanna take a look into the manual about the scan function?

meaning: you could just scan for unscanned files. maybe this would speed the routine up a bit?
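i.e. something along these lines (the install path is just an example):

```bash
# Scan only entries that the file cache has marked as not fully scanned,
# instead of walking everything. The path to the Nextcloud install is an example.
sudo -u www-data php /var/www/nextcloud/occ files:scan --all --unscanned
```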

(@Paradox551 maybe worth a look for you as well)

You should not use this on production systems.

It looks like a regression in NC 19. This is something you should report to the bugtracker on GitHub. Check whether someone has already submitted it; perhaps it was already fixed in the daily version and the patch will be in the next NC 19 release. And it must be fixed in the code; rescanning folders and the like is just a workaround!


@JimmyKater scanning unscanned files is not a solution. The files that are 0-sized are not unscanned: they have their own records in the oc_filecache table, just with 0 recorded as the size. In that state a file does not count as unscanned.
You should read the first post.
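To illustrate the difference (a sketch, assuming MySQL/MariaDB and the default oc_ prefix): as far as I can tell, entries the scanner has not processed yet carry a negative size (-1), while the broken files simply carry 0, so --unscanned skips them:

```bash
# Sketch: unscanned entries carry a negative size (-1), the broken ones carry 0,
# so "files:scan --unscanned" never touches them.
# Assumes MySQL/MariaDB, a database named "nextcloud" and the default "oc_" prefix.
mysql -u nextcloud -p nextcloud -e "
  SELECT path, size
  FROM oc_filecache
  WHERE path LIKE 'files/%' AND size <= 0
  ORDER BY size, path
  LIMIT 50;"
```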

But the decision has already been made: we have stopped using Nextcloud as our cloud storage. Too many bugs, and this is the latest serious bug not fixed quickly with a patch. So it is crap. Damaging office files was the final straw, and downgrading is impossible without damage.
We migrated all our data to a competitor.

RIP Nextcloud.
Thanks for all the answers.

If you need guaranteed response and fix times, you should have looked at an enterprise subscription. You were not alone with this problem, but it wasn’t a problem for everybody either, so it could be related to some configuration detail that is difficult to track down.


The configuration has not been changed for months. The bug was introduced by the 19.0.0 update, which caused the mentioned size problems immediately.

This is an interesting point, because the issue on GitHub is older. It’s important, but it is difficult to handle all this information. I hope a developer will help you out soon; perhaps they can add some additional logging in certain parts of the code…