High I/O when the complete file is always read from hard disk

I have used Nextcloud for several years now and generally everything works fine. I started around version 12 on Ubuntu 14.04, and am now running the latest stable 18.0.3 on an Ubuntu 18.04 system.

But I have had one problem the whole time and kept hoping it would disappear with an update. Unfortunately it has not, so now I would like to hear if someone can give me a hint where the problem lies, or whether everyone has it.

The problem is that if you have big files on the cloud, the server always reads the whole file from the physical disk, even if you have cancelled the operation. It does this as many times as you touch the file, so with big files it very easily saturates the hard disk.

For example, I have a 30 GB movie on the cloud. If I go to the file and press download, a menu opens where I can select what I want to do with the file. As soon as the menu opens, the file starts being read from disk at network speed, in my case about 40 MiB/s, which is the correct behaviour. But if I press cancel, the download over the network stops, yet the reading from my disk jumps to 200 MiB/s and continues until the whole file has been read. Even worse, if I repeat the operation, a new read starts from the disk, and the two reads then share the disk bandwidth.

The same thing happens when you start watching a movie with the embedded viewer. Everything works fine until you cancel the movie or jump to the next one. If you are searching for a certain movie, you end up with many Apache threads all trying to read from the disk as fast as they can.
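The "start a download, then cancel it client-side" step can also be reproduced without the browser. Here is a minimal sketch using curl against a throwaway local HTTP server (in a real test, the curl URL would instead be the Nextcloud WebDAV address for the file; server, port, file name, and sizes below are just placeholders):

```shell
# Throwaway local setup standing in for the Nextcloud server.
WORKDIR="$(mktemp -d)"
cd "$WORKDIR"
truncate -s 100M payload.bin                    # stand-in for the 30 GB movie
python3 -m http.server 8099 --directory "$WORKDIR" >/dev/null 2>&1 &
SRV=$!
sleep 1

# Start the download in the background, throttled so it cannot finish quickly,
# then kill the client after a couple of seconds -- the "press cancel" step.
curl -s -o /dev/null --limit-rate 1M http://127.0.0.1:8099/payload.bin &
DL=$!
sleep 2
kill "$DL" 2>/dev/null                          # client-side cancel
kill "$SRV" 2>/dev/null
```

Against a real Nextcloud instance, this is the moment to watch the server's disk reads (e.g. with iotop) and see whether they stop when the client disconnects.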

Hopefully it’s a simple configuration problem, so could someone please check whether they see the same behaviour?

I have the same issue, can anyone help? In the worst cases the website even gets stuck, because Linux cannot load the PHP scripts or configuration files from the disk due to the high I/O utilisation.

I have now updated my system to 20.04 and moved the data to an SSD drive, but the problem is still the same. Maybe we should file a bug report instead, since no one has given any kind of hint where the problem could be. Could someone at least test whether it works correctly on their system? The test is really easy:

1. Add a file to the cloud that is bigger than your RAM, for example 30 GB.
2. Press download and then cancel.
3. Watch the disk I/O, for example with iotop.

If the disk reads stop when you press cancel, you are lucky and your system works. If the disk keeps being read for tens of seconds (depending on the type of disk), it behaves like my system.
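A minimal sketch of the test setup described above, assuming a Linux server with coreutils and iotop installed (path and sizes are just examples):

```shell
# Create a large test file. 'truncate' makes it sparse, so it costs no real
# disk space locally; once uploaded to the cloud, the stored copy is a full,
# non-sparse file and will produce real disk reads on download.
truncate -s 1G /tmp/nc_io_test.bin     # use e.g. 30G for a RAM-exceeding test
stat -c '%s' /tmp/nc_io_test.bin       # prints 1073741824

# After uploading the file: press download in the web UI, then cancel,
# and watch whether the server's reads stop (healthy) or continue (the bug):
#   sudo iotop -o -b -d 1 -n 30 | grep -E 'apache2|php'
```

The `iotop -o -b` form prints only processes actually doing I/O, once per second, which makes a lingering full-speed read easy to spot.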

I have the same problem. I am using Nextcloud as a snap on Ubuntu 20.04.
I did some more tests.
It doesn’t matter whether you touch a file via WebDAV or in the web interface.
This problem especially arises with Kodi: when you start it, it touches all files in the folder and its subfolders, no matter what settings you have set. The server then becomes unusable, as Apache tries to read your full media repository, which could easily be several TB, from the disk all at the same time. The load goes up to around 60–80. If you are patient and wait, it recovers once all files have been read, but it stays unusable for hours.

It was already reported on GitHub:

for files_external: