S3 primary storage: fopen() on each file during directory listing!

Hi,

I use S3 as primary storage, and I wonder why Nextcloud does an fopen() on each S3 object when I list a directory.
I get 403 Forbidden errors from my S3 provider because the files are in the GLACIER storage class. But I don't understand why Nextcloud needs to fopen() each file in a directory: it should already have all the file metadata in the database. The listing doesn't even break despite the 403 errors.

{"reqId":"8UzPLtZdElD9syEp1D3f","level":3,"time":"2020-04-14T12:21:16+00:00","remoteAddr":"51.15.xx.xx","user":"florent","app":"PHP","method":"PROPFIND","url":"/remote.php/dav/files/florent/Archives","message":"fopen(https://s3.fr-par.scw.cloud/cloud/urn%3Aoid%3A15060): failed to open stream: HTTP request failed! HTTP/1.1 403 Forbidden\r\n at /var/www/cloud.io/lib/private/Files/ObjectStore/S3ObjectTrait.php#72","userAgent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.92 Safari/537.36","version":"18.0.3.0"}
{"reqId":"8UzPLtZdElD9syEp1D3f","level":3,"time":"2020-04-14T12:21:16+00:00","remoteAddr":"51.15.xx.xx","user":"florent","app":"PHP","method":"PROPFIND","url":"/remote.php/dav/files/florent/Archives","message":"fopen(https://s3.fr-par.scw.cloud/cloud/urn%3Aoid%3A17905): failed to open stream: HTTP request failed! HTTP/1.1 403 Forbidden\r\n at /var/www/cloud.io/lib/private/Files/ObjectStore/S3ObjectTrait.php#72","userAgent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.92 Safari/537.36","version":"18.0.3.0"}

This is a waste of resources! It tries 8 times to fopen() each file in the directory!

Can someone explain why it does this ?

The same thing happens with “filesystem_check_changes” => 0,

Okay, I need more context here. Where/how are you seeing fopen on primary S3?

Let me explain:

I use S3 primary storage.
I have some files in a directory called “Archives”.
When I open that directory (in the browser), each file appears to be opened with fopen().
I just did a directory listing, nothing more.
The listing works, but all those fopen() calls are pointless for a mere listing!
I can see these fopen() calls because my S3 provider returns “403 Forbidden” for some files (I moved them to the GLACIER storage class).

Is that clear?

No? I’ve never seen this on Nextcloud.

Perhaps someone else can pitch in.

How can I debug this further?

It seems to be related to the “Recalculation of unencrypted size”…

I think I found the origin of the problem.

Here : https://github.com/nextcloud/server/blob/master/lib/private/Files/Storage/Wrapper/Encryption.php#L487

In the default encryption module, we can see that if $unencryptedSize === $size for a file, the unencrypted size is recalculated, which requires an fopen() of that file.
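To make the trigger concrete, here is a minimal standalone sketch of that check (my own simplification, not the actual Nextcloud source; the function name is mine):

```php
<?php
// Minimal sketch of the check that triggers the recalculation in
// Encryption::verifyUnencryptedSize(). $size is the size of the
// stored object; $unencryptedSize is the cached unencrypted size
// from the filecache table.
function needsRecalculation(int $size, int $unencryptedSize): bool
{
    // When both values are equal, the encryption wrapper assumes the
    // cached unencrypted size is unreliable and recalculates it,
    // which requires an fopen() of the object.
    return $unencryptedSize === $size;
}

// With unencrypted S3 primary storage, both sizes are identical,
// so every file in a directory listing takes the expensive path.
var_dump(needsRecalculation(49672570, 49672570)); // bool(true)
```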

And when S3 is used as primary storage, this condition is always true!
I debugged it by adding this line to the verifyUnencryptedSize() function:

$this->logger->error('Recalc: '.$path.' - '.$size.' - '.$unencryptedSize);

And in the logs I can indeed see that $unencryptedSize === $size:

{"reqId":"R0DWmexc7s7Khd3jiHPv","level":3,"time":"2020-04-14T15:01:46+00:00","remoteAddr":"51.15.xx.xx","user":"florent","app":"no app in context","method":"PROPFIND","url":"/remote.php/dav/files/florent/Archives/Leak%20firmware%20Livebox","message":"Recalc: files/Archives/Leak firmware Livebox/lb4-firmware.zip - 49672570 - 49672570","userAgent":"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0","version":"18.0.3.0"}

Nothing should be recalculated when S3 is used as primary storage! The unencrypted size should be stored in the database (it is currently zero).
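As a rough sketch of the fix I have in mind (hypothetical code, the function and parameter names are mine, not Nextcloud's): skip the recalculation entirely when the object is not actually encrypted, so plain S3 primary storage never fopen()s objects during a listing.

```php
<?php
// Hypothetical guard, just a sketch of the idea: only encrypted
// files can have an unencrypted size that differs from the stored
// size, so unencrypted storage should bypass the expensive check.
function verifiedSize(bool $isEncrypted, int $size, int $unencryptedSize): int
{
    if (!$isEncrypted) {
        // Unencrypted object: the stored size is already the real
        // size, no fopen() needed.
        return $size;
    }
    if ($unencryptedSize === $size) {
        // Expensive path: open the object and recompute the size.
        // (Elided here; this is where S3ObjectTrait ends up doing
        // its fopen() against the S3 endpoint.)
    }
    return $unencryptedSize;
}

var_dump(verifiedSize(false, 49672570, 49672570)); // int(49672570)
```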