I’m working on a new storage backend that, like the existing Local backend, uses local files already on disk, but respects their existing UNIX permissions and ownership. I’ve got it working pretty well now; I’m just facing a performance problem.
Every time I load a directory in the web interface, some part of the code checks access to every file in the current directory and in all direct child directories, opening each one (possibly just to call stat() on it). Calling stat() on every file each time would be no big deal (standard ls does that too), but doing it across two directory levels unfortunately makes the web interface annoyingly slow.
Is this supposed to happen and can anyone point me to the class that does this?
As an aside, my logs show that every file is checked about three times, which I imagine slows things down even further. Should I implement some caching on the storage backend itself, or is that a bug?
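In case it helps frame the caching question, here is a minimal sketch of the kind of per-request stat cache I have in mind. It is in Python only for brevity (a real Nextcloud backend would be PHP), and the class and method names are purely illustrative, assuming the backend can funnel all its permission checks through one helper:

```python
import os

class StatCache:
    """Illustrative per-request cache so repeated checks on the same
    path hit the filesystem only once. Not Nextcloud API code."""

    def __init__(self):
        self._cache = {}
        self.misses = 0  # counts actual filesystem calls, for inspection

    def stat(self, path):
        # First lookup for a path calls os.stat(); later lookups in the
        # same request reuse the cached os.stat_result.
        if path not in self._cache:
            self.misses += 1
            self._cache[path] = os.stat(path)
        return self._cache[path]
```

If each web request built one such cache and dropped it afterwards, the roughly threefold repeated checks I see in the logs would collapse to a single stat() per file per request, without risking stale results across requests.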