Oh no, they are still there, but now I at least understand why.
files:scan --all runs with every cron.php execution (15-minute interval).
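For reference, the background job schedule described above is usually just a crontab entry for the web-server user. A minimal sketch (the www-data user and /var/www/owncloud path are assumptions; adjust for your install):

```
# Crontab for the web-server user (e.g. crontab -u www-data -e)
# Run ownCloud background jobs every 15 minutes
*/15 * * * * php -f /var/www/owncloud/cron.php
```

Whether a cron.php run triggers a file scan depends on which background jobs are queued; the crontab line itself only fires the job runner.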
From what I can gather so far, it has not completed a full scan yet. I think my main issue is that it is also scanning a snapshot folder (located in the external storage), which is drastically increasing the scan time. I am currently trying to figure out how to exclude that folder from being scanned.
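If it helps anyone else: the ownCloud documentation quoted further down describes an "excluded directories" mechanism for exactly this. A sketch of the config/config.php fragment, assuming the excluded_directories option (verify the exact key against your version's config.sample.php before relying on it):

```php
<?php
// Fragment of config/config.php -- tells the scanner to skip
// snapshot directories. Option name taken from the ownCloud docs
// on excluded directories; confirm it exists in your version.
$CONFIG = array(
  // ... existing settings ...
  'excluded_directories' => array(
    '.snapshot',
    '~snapshot',
  ),
);
```

After a change like this you can kick off a scan manually with the documented occ command, e.g. sudo -u www-data php occ files:scan --all (or --path to target a single folder), rather than waiting for cron.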
I have a 2 TB drive set up as an external HD, hooked up to a Raspberry Pi. It took maybe 10 minutes max to scan 500 GB of data, just so you know… it's not that long normally. Once they are scanned…
So my total storage to scan is around 300 TB, which in and of itself is a big task.
I think the issue I am currently facing is that it is also scanning the .snapshot folder (there are two of them, one for each volume being scanned).
Here is a quote from some ownCloud documentation I found that explains things a little better:
“If you have a filesystem mounted with 200,000 files and directories and 15 snapshots in rotation, you would now scan and process 200,000 elements plus 200,000 x 15 = 3,000,000 elements additionally. These additional 3,000,000 elements, 15 times more than the original quantity, would also be available for viewing and synchronisation. Because this is a big and unnecessary overhead, most times confusing to clients, further processing can be eliminated by using excluded directories.”
So rather than running a files:scan --all command, I just let it run off the normal cron.php jobs, and it appears the scan has finally finished.
Since this is an active storage array that our office uses, the mtime and size of the folders change rather frequently throughout the day. I still get some mtime and size errors in the log, but far fewer than what was originally flooding it. Once the scan fully catches up tonight, I will try browsing after hours and see if the error still shows up.