I give up. Sorry Nextcloud, nice in theory, very poorly implemented

Nextcloud just seems like alpha software. It sort of works, most of the time.

I’ve been using it for 6 months now, but I give up. I want something that just works; Nextcloud does not.

I have some gripes with the way the whole thing works:

  • Syncing takes forever because it compares each file individually to the server.

Here’s a suggestion: run find -ls on the client, send the output to the server, run the same command on the server, and diff the results. The current approach of an HTTP request per file flat out sucks.
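The listing-and-diff idea can be sketched in a few lines. This is a rough illustration of the concept only (the function names are made up, and it is not how Nextcloud actually works): snapshot each side once, then compare the snapshots locally instead of making one request per file.

```python
import os

def listing(root):
    """Build a find -ls style snapshot: relative path -> (size, mtime)."""
    snapshot = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            snapshot[os.path.relpath(path, root)] = (st.st_size, int(st.st_mtime))
    return snapshot

def changed_paths(client, server):
    """Paths that are new, deleted, or different between the two snapshots."""
    paths = set(client) | set(server)
    return sorted(p for p in paths if client.get(p) != server.get(p))
```

With this, the whole tree comparison is two directory walks and one in-memory diff, rather than a round trip per file.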

Then the same thing happens for transfers. Why is it so hard to zip the files that have changed before uploading/downloading them?
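Batching the changed files into one archive before transfer is straightforward with a standard library. A minimal sketch, with illustrative function names (this is not an existing Nextcloud feature):

```python
import io
import zipfile

def bundle(changed_files):
    """Pack a batch of changed files (path -> bytes) into one in-memory
    zip, so they can be sent in a single request instead of one per file."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, data in changed_files.items():
            zf.writestr(path, data)
    return buf.getvalue()

def unbundle(blob):
    """Receiving side: unpack the batch back into path -> bytes."""
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        return {name: zf.read(name) for name in zf.namelist()}
```

A hundred small files then cost one connection and one transfer, with deflate compression thrown in for free.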

I could put up with this incredibly inefficient implementation if it actually worked. It often doesn’t. And when a problem does occur, it doesn’t give meaningful error messages:

“Operation canceled”
“connection closed”

Certain files just refuse to sync, and this happens nearly daily. I end up having to delete files from the server, then from local, and recreate them to force a sync. Git repositories seem to cause the most issues, which makes it impossible to use.

I give up. Sorry guys, it’s a potentially great tool but it’s so poorly implemented it makes it a frustrating experience.

My previous rsync cron job, while crude, actually worked without needing constant micromanagement.


Which Nextcloud version have you tried? Nextcloud 13 is really stable; Nextcloud 14 not so much.


so sad that you are facing these kinds of problems.

and of course it’s easy to blame the software rather than trying to find the problems in your own setup (which you didn’t describe - so no one knows what, where and how you’re running your nc-instance).

so all i can tell you is no secret: even if YOU are experiencing problems, hundreds of thousands of other users don’t. which - for me - shows clearly that nextcloud can’t be as bad as you just made it sound.


I’m using 14 but had upgraded from 13, where I faced the same issues.

It seems to be files that are changed quickly and frequently on the client that cause issues.

I believe the following process causes files to consistently stop syncing:

  • File is modified
  • File starts to sync
  • File is modified before sync finishes

For example, if I create a folder and quickly rename it, it often fails to sync. It starts syncing “new folder” but then it’s renamed before the sync is complete and problems occur.

Git repositories seem the most affected because of the number of files and the frequency of change. Quickly switching between branches of a git repo almost always causes several files in the .git directory to stop syncing.

There’s not even a way to force it to override. I’d like an option to “Force sync from client” or “Force sync from server”. The existing force sync does nothing. Once you start getting “operation canceled” on a specific file, it’s stuck until the file is deleted from both client and server then recreated.

My setup is the following:

Client:

  • Arch Linux
  • Two folders synced from home directory
  • About 4 GB of synced data
  • Official Nextcloud client

Server:

  • Nextcloud 14
  • CentOS 7 VPS
  • Dual core, 2 GB RAM
  • nginx, PHP 7.2, Redis for memory cache

if you have suggestions to improve nc, why don’t you file them on github as an issue? i see no sense in just ranting here… be clever and file your thoughts there and you’ll see what’s gonna happen. that would be more effective, in general.

one personal thought of mine: maybe the problem is your hardware? faster hardware would maybe handle such requests better…

as for this one… it might suit your demands. for mine it would be bad. the de/compression needs to be done by the involved hardware on both server and client. so at least for small files it would take MY setup longer to compress - transfer - and decompress them again than to send them as-is. maybe it would make sense for files above a certain size. which means: to get a benefit from it you’d first need to calculate the size from which a compressed transfer beats an uncompressed one. and that threshold would have to be calculated for every involved combination of hardware and connection. which is possible, of course, but would take time as well.
but feel free to file this suggestion as well to github.

Faster hardware just to handle a few file transfers? Really?

Probably not. The biggest bottleneck with the nextcloud client is that it has to upload each file to the server individually.

  • For small files, this is slow because it has to connect, transfer data, then disconnect for each file. Uploading a single larger file will always be faster because the connection stays open. I’ve seen Nextcloud take minutes to sync under 1 MB because it’s 1 MB of hundreds of small files (a git repository).

  • For larger files, closing and reopening the connection is less of an issue, but large files are limited by your internet connection. With a modest compression of 10% (so a 10 MB file becomes 9 MB), you’ll see a performance improvement as long as your PC can compress the 10 MB faster than your network can transfer the 1 MB saved.

For large files it won’t make much difference; for everything else, sending dozens of files at once in a zip will not only make the transfer faster but also reduce the overhead on the server, as it won’t have to handle so many HTTP requests per second.
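The break-even point the earlier reply worried about comes down to simple arithmetic: compression pays off when the CPU time spent compressing and decompressing is less than the transfer time it saves. All the throughput numbers below are illustrative assumptions, not measurements of any real setup:

```python
def compression_wins(size_mb, saved_fraction, net_mb_s, comp_mb_s, decomp_mb_s):
    """True when compress + decompress time is smaller than the
    transfer time saved on the wire. All rates in MB/s."""
    time_saved = (size_mb * saved_fraction) / net_mb_s   # seconds saved on transfer
    overhead = size_mb / comp_mb_s + size_mb / decomp_mb_s  # seconds of CPU work
    return overhead < time_saved

# 10 MB file, 10% saved, 1 MB/s network, 50 MB/s compress, 100 MB/s decompress:
# saves 1.0 s of transfer for 0.3 s of CPU work, so compression wins.
```

On a fast LAN the same file loses (0.1 s saved for 0.3 s of work), which is exactly why the threshold would differ per connection.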

from what you said, it’s not really just “a few”.

so you want the nc-client (and the server as well) to decide which files to group and zip? or how would the zipper know which files to zip and which not?

There are dozens of ways to do this. Here’s a couple:

  1. A database of md5sums/timestamps. The server keeps a three-column database: file path/md5sum/mtime. The client keeps its own database. On sync the client downloads the server’s database and compares the two. It can then see which files have changed and know what to zip up and send across (or the inverse: send its database to the server for downloads).

  2. Tracking commits like git does. Every time a batch of files is uploaded it is given an ID and date, stored alongside the list of files uploaded to the server. The client can send its last sync ID to the server, and the server can work out exactly which files have changed since the client last synced.
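The second approach can be sketched as a small server-side log. The class and method names here are hypothetical, just to show the bookkeeping involved:

```python
import itertools

class SyncLog:
    """Git-style sync log: each uploaded batch gets an ID; a client
    reporting its last known ID gets back every path changed since then."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._batches = []          # list of (sync_id, [paths])

    def record_batch(self, paths):
        """Register an uploaded batch and return its sync ID."""
        sync_id = next(self._ids)
        self._batches.append((sync_id, list(paths)))
        return sync_id

    def changed_since(self, last_sync_id):
        """Every path touched by a batch newer than last_sync_id."""
        changed = set()
        for sync_id, paths in self._batches:
            if sync_id > last_sync_id:
                changed.update(paths)
        return sorted(changed)
```

The client stores only one integer between syncs, and the server never has to stat or hash anything to answer “what changed?”.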


sounds like a plan, so why not just file your suggestions on github?


Honestly, your issues sound like PEBCAK errors. I first tried ownCloud when it was version 3 - that was the last “alpha software” version I used (with 4/4.5/5 feeling like beta software). Claiming NC 14 is “alpha” just sounds bitchy and pathetic.

Seriously, instead of reporting legitimate issues on GitHub or doing anything useful, you come here to proclaim to everybody “PAY ATTENTION TO ME! I DIDN’T LIKE IT!”

slow clap

For your usage, perhaps yes. Seriously, your server sounds barely better than an RPI. My phone from 2013 has the same amount of RAM as your server. My telephone.

You’re not doing “a few file transfers” if you have an ongoing issue with files changing multiple times before the first sync finishes.

For someone with so many bash-style coding suggestions, you couldn’t actually post somewhere the developers could critique your ideas.


During sync the server has over 900 MB of free memory. Most mid-range VPSes come with 2 GB of RAM; low-to-medium traffic websites don’t require any more than that (see https://www.vps.net/products/ssd-vps/ or look at VPSes elsewhere). Congratulations, your phone has more RAM than most web servers.

I wouldn’t mind if the software gave meaningful error messages that I could do something about.

How the hell is this helpful in any way?

On the server there’s nothing in /var/log/nginx/error.log or /var/log/php-fpm/error.log, nothing in the system journal and nextcloud/data/nextcloud.log contains nothing dated after the date I originally installed nextcloud.

I have no idea if the developers read the forums or not. Though I question the point of having forums if they don’t.

Anything in https://nextcloud.yourdomain.com/index.php/settings/admin/logging?

Log settings are detailed here https://docs.nextcloud.com/server/14/admin_manual/configuration_server/logging_configuration.html

I have used Nextcloud on an RPi and it worked fine. The software might not be perfect, but it is not “alpha”. I doubt the German Federal government would use it if it was. https://nextcloud.com/blog/german-federal-administration-relies-on-nextcloud-as-a-secure-file-exchange-solution/

Now my installation runs on an x86 single board computer on my desk. I have never had a connection error.

I would suggest looking at the quality of the internet connection to your VPS.

Hi Tom_Butler,

I feel sorry for your bad experience with Nextcloud so far. And I’m sorry as well for the harsh responses here in the forum. This is not how we should treat ideas for improvement.
Nonetheless the statement that NC is in alpha state is a little bit over the top :wink:

To make the best out of it:
May I ask you to post your ideas for improvement and the issues you discovered on Github?
The developers usually don’t read the forum; it’s only the community trying to help others here. So for changes in the code, GitHub is the right place to go. Your feedback and ideas are very valuable for improving this software for all of us, and I very much appreciate any good idea to make NC work even better. Syncing git projects with NC is probably something few people have done with that intensity so far, which means this issue has probably not been discovered before.

If you have the knowledge to improve the NC code yourself, the community and even more the developers will appreciate that as well.
NC is great for many users already, but not for all yet. However, we all can help to make it great for everybody.

And for the closed connections when syncing files, maybe you can run a network trace and find out what’s causing the aborted connections. Maybe there is something on your server (OS side or webserver side) which is not working properly and can be improved/corrected.

I don’t believe it’s an issue with the connection. With ApacheBench I can run 1000 requests at a concurrency of 100 without any failed requests. Obviously each request is not doing as much work as syncing one of the 16 KB files that failed to sync, but it does suggest the issue is with Nextcloud and not the connection between me and the server.

Apachebench output:

Concurrency Level:      100
Time taken for tests:   66.194 seconds
Complete requests:      1000
Failed requests:        0
Non-2xx responses:      1000
Total transferred:      11933124 bytes
HTML transferred:       10727000 bytes
Requests per second:    15.11 [#/sec] (mean)
Time per request:       6619.424 [ms] (mean)
Time per request:       66.194 [ms] (mean, across all concurrent requests)
Transfer rate:          176.05 [Kbytes/sec] received

Connection Times (ms)

It doesn’t look like a connection issue to me.

I agree. If it’s not the PHP handler either, then it should be the NC web application or maybe the NC desktop client. I’m curious whether there are other WebDAV sync clients and whether they perform better.
However, as you already pointed out, a sync process which syncs file after file could be an issue, especially since WebDAV is not known to be the fastest file transfer mechanism.

I’m wondering if there are options to delay the sync and then sync a larger batch of files at once - or just hold the connection open for some time, to avoid establishing a new connection for each file.

Anyway, if you post your ideas on Github, could you post a link to your issues here as well, please?

Git repositories caused some severe problems in the past, but those were mostly fixed. Do you see anything in the F12 log window on the client? The verbose errors usually show up there.

Do you run server-side encryption? It is a major troublemaker; disabling it solved a ton of problems and made Nextcloud quite reliable. You only learn the hard way…

We’ve given up as well. Too many new features with too many bugs. At some point you have to step back and make things work well before you add more stuff that doesn’t work on top of other stuff that doesn’t. There are major, critical features that have been broken since version 12. The idea here is great, but the code just has to be better.


We have hundreds of active users on our Nextcloud install and haven’t experienced the problems reported in this thread. However, it sounds like @Tom_Butler has a very different use case (I doubt any of our users are hosting git repos, for example). No software works for all use cases, and lots of small, regularly changing files might not be its strong suit.


Just a note here: I also had the problem of never-ending, constantly restarting syncs. But I had upgraded from ownCloud, which stored a checksum in the database for each file.
Nextcloud doesn’t use that column, so clearing the column in the database by hand solved the problem. Maybe some data had been migrated from ownCloud in your case too? If that’s the case, check the table that stores files in the database.
Sadly I don’t remember the table name, nor the statement to clean it, but I found it via web search…

I have had the same problem as you since Nextcloud 13.
For me the first sync goes bad like this, then a few seconds later the sync goes fine.
It’s quite a big issue because I provide Nextcloud to several clients and I have to explain this bug by saying, « don’t be afraid, wait a few more minutes if the sync is still bad ».

There are two issues on GitHub:
