SFTP external storage errors

I have these two consistent errors popping up in the logging section that I haven't been able to find any information about online.

[PHP] Error: Undefined index: size at /var/www/nextcloud/apps/files_external/lib/Lib/Storage/SFTP.php#456

[PHP] Error: Undefined index: mtime at /var/www/nextcloud/apps/files_external/lib/Lib/Storage/SFTP.php#455

Happens for pretty much every folder I browse on the mounted external storage.

My suspicion is that the folder metadata (size and modified time) has not been polled yet by files:scan.

Thoughts?

I would personally check lines 455 and 456 of SFTP.php to get a better understanding. Have you done this already?

cd /var/www/nextcloud/
sudo -u www-data php console.php files:scan --all

I did, I checked the file on GitHub (my server has the exact same file with no modifications).

Here is the section that contains lines 455 and 456

/**
 * {@inheritdoc}
 */
public function stat($path) {
	try {
		$stat = $this->getConnection()->stat($this->absPath($path));
		$mtime = $stat ? $stat['mtime'] : -1;
		$size = $stat ? $stat['size'] : 0;
		return array('mtime' => $mtime, 'size' => $size, 'ctime' => -1);
	} catch (\Exception $e) {
		return false;
	}
}

The whole file can be found here
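
For what it's worth, a more defensive version of this method would avoid the notices when the SFTP server doesn't report those attributes for an entry. This is only a sketch of the idea, not an upstream patch; the fallback values just mirror the defaults the method already uses:

public function stat($path) {
	try {
		$stat = $this->getConnection()->stat($this->absPath($path));
		// Only use the reported values if the server actually returned them;
		// otherwise fall back to the same defaults as above (-1 / 0).
		$mtime = (is_array($stat) && isset($stat['mtime'])) ? $stat['mtime'] : -1;
		$size = (is_array($stat) && isset($stat['size'])) ? $stat['size'] : 0;
		return array('mtime' => $mtime, 'size' => $size, 'ctime' => -1);
	} catch (\Exception $e) {
		return false;
	}
}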

I believe I have figured out what is causing the error.

The external storage folders that are mounted have not been fully updated with files:scan (a few hundred TB), so there are certain folders that don't have an updated mtime or size in the DB.

Great! You don't see the errors anymore?
Didn't you tell me you ran this?:

cd /var/www/nextcloud/
sudo -u www-data php console.php files:scan --all

Oh no, they are still there but now I just understand why. :smiley:

files:scan --all runs with every cron.php (15 min interval)
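
For context, that interval comes from the crontab entry for the web server user. Assuming a standard cron-based background jobs setup, it looks something like this (interval and path will vary per install):

# crontab -u www-data -e
*/15 * * * * php -f /var/www/nextcloud/cron.php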

From what I can gather so far it has not completed a full scan yet. I think my main issue is that it is also scanning a snapshot folder (located in the external storage), which is drastically increasing the scan time. I am currently trying to figure out how to exclude that folder from being scanned.
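
In the meantime, one workaround I am considering (just a sketch; the path below is a placeholder for the actual user and mount point) is pointing the scanner at specific paths with --path instead of using --all, so the snapshot tree never gets walked:

cd /var/www/nextcloud/
sudo -u www-data php console.php files:scan --path="username/files/external_mount"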

I have a 2 TB external HD set up, hooked up to a Raspberry Pi. It took maybe 10 mins max to scan about 500 GB of data. Just so you know… it's not that long normally, once they are scanned…

So my total storage scanning should be around 300TB which in and of itself is a big task.

I think the issue I am currently facing is that it is also scanning the .snapshot folder (there are two, one for each volume being scanned).

Here is a quote from some ownCloud documentation I found that explains things a little better:

“If you have a filesystem mounted with 200,000 files and directories and 15 snapshots in rotation, you would now scan and process 200,000 elements plus 200,000 x 15 = 3,000,000 elements additionally. These additional 3,000,000 elements, 15 times more than the original quantity, would also be available for viewing and synchronisation. Because this is a big and unnecessary overhead, most times confusing to clients, further processing can be eliminated by using excluded directories.”

Got it. And the backup of those files has been done already? What if you get rid of the snapshots?

Definitely would not be able to delete the snapshots off the server, they are an integral part of our workflow.

I am exploring ways, though, to prevent the Nextcloud instance from being able to view the snapshot folder (via the external storage device settings).

I will report back on the files:scan time once I have resolved the snapshot folder issue.

All good. Curious to see how you will be able to do it. I guess modifying console.php to fit your needs would be one way. Let us know.

Well, I may have found a solution. Someone made an app a while back that performs the directory exclusion.

Then another user made a branch of the app to work with NC15

I modified it again to function with NC16 and was able to install it successfully. I can no longer browse the #snapshot folder!

I am letting it run its scan now and will report back on how it fares.

So rather than running a files:scan --all command, I just let it run based on the normal cron.php jobs, and it appears that it has finally finished scanning.

Since this is an active storage array that our office uses, the mtime and size of the folders change rather frequently throughout the day. I still get some mtime and size errors in the log, but far fewer than what was originally flooding it. Once the scan fully catches up tonight I will try browsing after hours and see if the errors still appear.