Nextcloud version: 19.0.0
Operating system and version: Ubuntu 18.04.4 LTS (Bionic Beaver)
Apache: 2.4.29
PHP version: 7.2.24
The issue you are facing:
We use a WebDAV client (RaiDrive on Windows) to access files on Nextcloud through a drive letter (e.g. Z:). After updating from 18.0.5 to 19.0.0, problems appeared: all files copied via the connected drive show a size of 0. The 0 size also shows up in the web interface.
I opened the oc_filecache table, and the size is 0 there as well for files copied via WebDAV; the size was never updated. On the Nextcloud physical storage the file sizes and contents are correct. The problem is probably in how the oc_filecache record is updated when the WebDAV interface is used.
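For anyone who wants to confirm the same symptom, the bad cache entries can be checked directly in the database. This is just a sketch: it assumes MySQL/MariaDB, the default `oc_` table prefix, and a database/user both named `nextcloud` (adjust to your setup).

```shell
# List the most recently modified zero-size entries in the file cache.
# Assumes MySQL/MariaDB, default "oc_" table prefix, db and user "nextcloud".
mysql -u nextcloud -p nextcloud <<'SQL'
SELECT fileid, path, size, mtime
FROM oc_filecache
WHERE size = 0
ORDER BY mtime DESC
LIMIT 20;
SQL
```

Regular files listed here with size 0 while the file on disk is non-empty match the behaviour described above (note that directories can legitimately appear with size 0 while their size is being recalculated).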
The web interface does not have this bug; uploads through it work fine.
Is there any way to fix the size update when WebDAV is used to copy files to Nextcloud?
Where in the source code would such a change need to be made?
Is this the first time you’ve seen this error?: Yes
Steps to replicate it:
Run a WebDAV client and create a connection to the NC instance
Copy some files to NC
The files have zero size; the 0 is also recorded in oc_filecache
The physical storage shows the proper file sizes, and the content is OK
The output of your Nextcloud log in Admin > Logging:
Nothing relevant; no errors appear even if the log is recreated.
The output of your config.php file in /path/to/nextcloud (make sure you remove any identifiable information!):
Nothing relevant; the configuration was not changed before or after the update.
The output of your Apache/nginx/system log in /var/log/____:
Nothing relevant. WebDAV copies the files to the server properly, but afterwards oc_filecache is not updated correctly.
It is not the same issue; it is a different problem. The files are copied properly, but the size is not updated in the DB. After running occ files:scan the file size is updated to the proper value.
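As a stopgap, occ files:scan does not have to walk the whole storage; it can be limited to one user or one path. A sketch (the user name and folder here are placeholders, and the web server user and Nextcloud root path depend on your install):

```shell
# Run from the Nextcloud installation directory as the web server user.
# Rescan a single user's files:
sudo -u www-data php occ files:scan alice

# Or rescan only one folder ("<user>/files/<folder>" is the expected form):
sudo -u www-data php occ files:scan --path="alice/files/Documents"
```

Limiting the scan to the folders that actually receive WebDAV uploads keeps the runtime far below a full-storage scan, though it remains a workaround rather than a fix.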
that is the solution. just add a cron job that runs every x minutes and scans for new files.
usually that is only necessary if your uploaded files are stored on some external storage.
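If someone does want to go the cron route, a crontab entry along these lines would do it. This is a sketch only: the occ path, the web server user (www-data) and the 15-minute interval are assumptions for a typical Ubuntu/Apache install.

```shell
# Edit the web server user's crontab with: crontab -u www-data -e
# Rescan all users every 15 minutes; adjust the path to your occ binary.
*/15 * * * * php /var/www/nextcloud/occ files:scan --all >/dev/null 2>&1
```

Be aware that on a large instance overlapping scans can pile up, which is exactly the objection raised in the next reply.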
It is not the solution. Try scanning 4.5 TB of used storage out of 10 TB: it takes 5 hours. The cloud is far too busy to keep running the scanner; every couple of minutes, sometimes every few seconds, 5 to 100 files are added, and other people then need to work with those files. It is crazy to start the scanner every x minutes.
It is also impossible to monitor for new files, since they are added to 300 different folders at random.
Yesterday on the test server I updated the system to the daily version, and it looks like the problem has been solved.
Today I will test it as soon as I open my eyes wider
It looks like a regression in NC 19. This is something you should report to the bug tracker on GitHub. Check whether someone has already submitted it; perhaps it was already fixed in the daily version and the patch will be in the next NC 19 release. It must be fixed in the code; rescanning the folder and the like is just a workaround!
@JimmyKater scanning unscanned files is not a solution. The zero-sized files are not unscanned: they have their own record in the oc_filecache table, but 0 is recorded in the size column. In that state the files do not count as unscanned.
You should read the first post.
But the decision has already been made: we have stopped using Nextcloud as our cloud storage. Too many bugs, and this was the last serious bug not fixed quickly with a patch. So it is crap. Damaged office files were the worst of it, and a downgrade is impossible without data loss.
We migrated all our data to a competitor.
If you need guaranteed response and fix times, you should look into an enterprise subscription. You were not alone with this problem, but it wasn’t a problem for everybody either, so it could be related to some configuration detail that is difficult to track down.
The configuration has not been changed for months. The bug was introduced by the 19.0.0 update, which caused the described size problems immediately.
This is an interesting point, because the issue on GitHub is older. It’s important, but it is difficult to handle all this information. I hope a developer will help you out soon; perhaps they can add some additional logging at some parts of the code…