Fragmentation with the Nextcloud Windows client

Hi to all the busy developers,

I am using Nextcloud with 5 TB of space at Hetzner in Germany and currently have 3.5 TB in use. I installed the client on a Windows 10 PC and added a 5 TB hard drive to make a "backup" of all my files from the Nextcloud. So I am using the Windows client to copy everything to the disk. At the beginning everything worked at full speed. By now more than 1 TB has been written to the drive, and the Task Manager shows the drive at 100% utilization. The Resource Monitor shows it writing slowly while still at 100%. So I stopped the sync process and ran a check with the defrag tool. It now shows hundreds of fragments, or more, for every file on the drive.
I think the fragments are not really good for performance. When I check which processes write to the drive, there are many writing in parallel. Would it be possible to fetch a file from the cloud, hold it in memory first, and only write it to the drive once it is complete? This would make it more performant.
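The idea proposed above, assembling the whole file in memory and then writing it to disk in one sequential pass, can be sketched like this. This is a hypothetical illustration, not how the Nextcloud client actually works, and `write_assembled` is an invented name; for multi-GB files you would need a disk-based fallback (e.g. preallocating the destination file) instead of buffering everything in RAM:

```python
import io
import os
import tempfile

def write_assembled(chunks, dest_path):
    """Assemble downloaded chunks in memory first, then write the
    complete file to disk in one sequential pass, so the filesystem
    can allocate it contiguously instead of interleaving parallel
    chunk writes."""
    buf = io.BytesIO()
    for chunk in chunks:              # chunks would arrive from the network
        buf.write(chunk)
    with open(dest_path, "wb") as f:
        f.write(buf.getvalue())       # single sequential write

# Simulate four 5 MiB chunks of a 20 MiB file
chunks = [bytes([i]) * (5 * 1024 * 1024) for i in range(4)]
dest = os.path.join(tempfile.mkdtemp(), "assembled.bin")
write_assembled(chunks, dest)
print(os.path.getsize(dest))  # 20971520
```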

Thanks for reading my bad english.


You can try to change the number of parallel uploads and downloads and also the chunk size:
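For reference, the desktop client reads these settings from its `nextcloud.cfg` file (on Windows typically under `%APPDATA%\Nextcloud\`). As a sketch, the chunking-related keys look roughly like this; exact key names, units, and defaults vary by client version, and they mainly govern uploads, so verify against your client's documentation before relying on them:

```ini
[General]
; Initial chunk size in bytes (the client may adapt it dynamically)
chunkSize=10000000
; Upper and lower bounds for the dynamically adjusted chunk size
maxChunkSize=50000000
minChunkSize=1000000
```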

So if they are fragmented, you have rather large files?

Thanks for your idea. I will try it. I have about 500,000 files… most files are 20 MB or larger, up to 5 GB. I did it on


Now the download rate is much faster. I will check the number of chunks later.

It goes back to the old situation. Every file that is newly written to the drive is fragmented, often into more than 300 fragments for a 100 MB file.

Hope somebody can find a solution.

Not sure how the merging of the chunks works; they could be responsible for the fragmentation if the chunks are placed in different locations. However, chunks are 5 MB by default, so a 100 MB file should produce only about 20 of them.

I did write to the devs, but there is not really anybody who will take care of it. I have now been in defrag mode on this drive for 5 days. It is really bad. I hope someone will find a solution.

Does this happen outside Nextcloud as well if you create a number of large files?

Just thinking: if your drive has problems and can't use a larger number of sectors, would that also result in, and show up as, fragmentation?

Ah, I found the related GitHub issue:

If you want a backup of the data and you just transfer file by file (rather than keeping everything synced), perhaps transfer the files directly via WebDAV (WinSCP or similar). Those tools download each file whole, not in chunks (which might be what creates the fragmentation).
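As a sketch of that approach: Nextcloud serves files over WebDAV under the `remote.php/dav/files/<user>/` path, and a plain sequential GET pulls a whole file in one stream. The function names below are made up for illustration, the server URL and user are placeholders, and real use needs authentication (e.g. an app password passed via an auth handler):

```python
import shutil
import urllib.parse
import urllib.request

def dav_url(base, user, remote_path):
    """Build the WebDAV URL for a file on a Nextcloud server.
    'base' and 'user' are placeholders, not real credentials."""
    return "{}/remote.php/dav/files/{}/{}".format(
        base.rstrip("/"), user,
        urllib.parse.quote(remote_path.lstrip("/")))

def download_whole_file(url, dest_path, auth_handler=None):
    """Download one file as a single sequential stream, so the OS can
    write it contiguously instead of assembling parallel chunks."""
    opener = (urllib.request.build_opener(auth_handler)
              if auth_handler else urllib.request.build_opener())
    with opener.open(url) as resp, open(dest_path, "wb") as out:
        shutil.copyfileobj(resp, out)   # streamed sequentially in small buffers

print(dav_url("https://cloud.example.com", "alice", "Photos/trip 1.jpg"))
# https://cloud.example.com/remote.php/dav/files/alice/Photos/trip%201.jpg
```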

I use the same kind of drive to back up my other files. If I write them to the drive there is almost no fragmentation. I think it happens because the files are not assembled after syncing from the server before being transferred to the destination folder. You can see that a lot of files are being worked on at the same time, writing to the drive in parallel like crazy. After a while the USB port gets busy and hangs at 100%. I did the same job on an SSD inside the computer; there are also tons of fragmented files on that drive. I think this is not good and not normal, and it needs a different solution. It takes days to fix all the fragmented files so the speed comes back up.

These are the results from the SSD inside the computer. For this investigation I use the software WinContig. It can show the number of fragments per file; you can also check on a per-folder basis.

That is not a backup. In case of data loss or corruption on your Nextcloud, there is a good chance that the sync client will sync the defective data from your Nextcloud.

I have a backup of the Nextcloud with the provider. But I want a local copy of my files because the Internet is not fast enough and sometimes does not work.

I already had an issue with corrupted files… I uploaded a bunch of files and later on tons of them were bad. So I keep a master backup of all files before I upload them.

Did you try to change the chunk size? By default, files are split into chunks of 5 MB. If you put a very high value there, files should not be chunked at all, and then perhaps they are not split into fragments on the disk.
Regarding the file view you posted: if a file were split into 5 MB pieces, there would be fewer fragments than that. So maybe it is not even the chunks…

We probably have to wait for the developers to work on this issue.

I did try the chunk size, but there is no difference. We can wait. I defrag all the time, so the drive gets back to normal.

I investigated a little more and figured out that the WD drive WD50NDZW is an SMR drive. This kind of drive is good for writing files once. If you use an SMR drive with Nextcloud, it causes problems after a while because it does not cope well with all the chunks. If you run chkdsk d: /f in cmd as administrator under Windows, it gets better. It reorganizes the drive and it becomes fast again.

Still, it does not solve the problem with these millions of fragments, but it does speed the drive up again.

SMR drives are also built into NAS devices, where the same problem exists. But there you have to check separately how to solve it.

I checked a desktop system with more than 250 GB synced; there is no fragmentation. But it grew over a long period of time and may have cleaned itself up along the way.

You probably want to use the sync client so that the next time you connect the drive, only the files that changed are transferred. I am not sure how well an rsync-based backup (there are countless flavors) would work on a davfs-mounted drive on a Linux machine.