Should I re-set up a very long-lived (always updated) installation?

I have a Nextcloud installation that has been running for many years now (about 5-10). It has always been kept updated; currently I’m on 29.0.7.

However, it has grown over the years, with periods where e.g. plugins messed things up; every time I managed to get it running again.

It has just a handful of users (my family), but about 2.5 TB of file storage. The database is huge, and for some time the Configuration page has been suggesting a bigint conversion, which apparently does nothing when I start it (at least the occ command doesn’t finish even after days, and I don’t see any CPU, I/O or database activity).
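For reference, this is roughly what I run to start the conversion (the container name "nextcloud-app" just stands in for my actual setup):

```bash
# Enable maintenance mode, run the bigint conversion, then disable maintenance mode again
docker exec -u www-data nextcloud-app php occ maintenance:mode --on
docker exec -u www-data nextcloud-app php occ db:convert-filecache-bigint --no-interaction
docker exec -u www-data nextcloud-app php occ maintenance:mode --off
```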

Now I’m even seeing strange behaviour with the Windows client. I tried removing the sync directories and setting them up again.

During sync folder setup it now even tells me, for two different folders, that one is INSIDE another folder that’s already being synced (which it is NOT).

I feel it’s time for a complete makeover. Would you suggest this?

If yes: I’m currently using the plain nextcloud image from Docker Hub; would it be beneficial to change to the AIO image? When googling, that one comes up everywhere.

Maybe, if there is a way to give the whole database a makeover, that would be a good choice as well.

By the way, my database is about 4 GB in size (about 3 GB in oc_filecache, the rest spread across the other tables). Does that sound realistic for such an instance?
I’m using Postgres 13.
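For what it’s worth, the per-table sizes can be checked directly in Postgres, roughly like this ("nextcloud-db" and the user/database names are just placeholders for my setup):

```bash
# Show the ten largest tables in the Nextcloud database by total size
docker exec -it nextcloud-db psql -U nextcloud -d nextcloud -c \
  "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total_size
   FROM pg_statio_user_tables
   ORDER BY pg_total_relation_size(relid) DESC
   LIMIT 10;"
```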

Definitely. I always install the latest Debian and run the ISPConfig auto-installer over it because it installs all the tools needed for Nextcloud.

In general I don’t feel a “complete makeover” is a good choice. It might be a good solution if you somehow screwed up your installation, but you should know what you’re doing: starting with an empty DB will likely shrink your oc_filecache, but is it worth losing all the versions, trashed files, shares, etc.?

I would look at the table first. Often such extreme sizes result from external storage, maybe “cross-shared” mounts, etc. You can start by looking at the table with a simple select * from oc_storages and checking whether there are obsolete storages. You can add COUNT and SUM clauses to understand where most of the entries come from; maybe Oc_filecache.ibd very large helps.
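A minimal sketch of what that inspection could look like against the database container (container, user and database names are assumptions, adjust them to your environment):

```bash
# List all storages Nextcloud knows about; obsolete external storages often show up here
docker exec -it nextcloud-db psql -U nextcloud -d nextcloud -c \
  "SELECT * FROM oc_storages;"

# Count filecache rows and sum the reported sizes per storage to see where the bulk comes from
# (directory rows and unknown sizes are included, so the sums are only a rough indicator)
docker exec -it nextcloud-db psql -U nextcloud -d nextcloud -c \
  "SELECT storage, COUNT(*) AS entries, pg_size_pretty(SUM(size)) AS reported_size
   FROM oc_filecache
   GROUP BY storage
   ORDER BY entries DESC;"
```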

Once you have finished the clean-up (and hopefully reported the successful steps here), you can look at Nextcloud docker-compose setup with notify_push (2024) and Nextcloud docker compose setup with Caddy (2024) to improve your installation with modern Docker techniques.

TrueNAS. Then you can add disk redundancy via ZFS, plus Nextcloud is officially supported.

It is worth its weight in gold for that 2.5 TB to be protected.

All internal structures such as shares are lost during a new installation. I don’t know whether that is desirable. Conversely, if you simply restore the database from a backup, e.g. to retain the shares, then the benefit isn’t really great.

Perhaps it would make more sense to carry out a few maintenance measures on the current installation.
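For example, a few of the non-destructive standard occ maintenance commands (just a sketch; "nextcloud-app" is a placeholder for your app container name):

```bash
# Verify that the core files match the shipped release
docker exec -u www-data nextcloud-app php occ integrity:check-core

# Add any database indices the admin overview page complains about
docker exec -u www-data nextcloud-app php occ db:add-missing-indices

# Remove filecache entries whose storage no longer exists
docker exec -u www-data nextcloud-app php occ files:cleanup

# Re-scan all user files so the filecache matches what is actually on disk (can take a while)
docker exec -u www-data nextcloud-app php occ files:scan --all
```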

Incidentally, I wouldn’t worry about the software. The integrity check should recognise outdated software components.

The problem is usually outdated hardware, an architecture that is no longer wanted (e.g. not using AIO), and the database.
