How to reclaim space from a misconfigured backup?

Hello, everyone. I am running NextcloudPi 1.51.0, and a few weeks ago I tried to set up a backup to a NAS. Instead of accomplishing that, the system seems to have begun backing up to the same drive the data is stored on. Every time I spin up the system, it goes into maintenance mode, and the space used grows rapidly.

I plugged the HDD into another machine where I could browse the folders in a GUI, since I’m not very conversant with the command line. I checked the size of each folder and found one named “appdata_(random characters)” that was over 2.5 TB. Inside it was a folder named “backup” taking up all that space, and inside that was a large main folder followed by daily “differential” backup folders.

First, how do I cancel that daily backup to the data drive itself? I never set up automatic backups in NextcloudPi, so I don’t know why this is happening every day, or every time I spin up my machine. Second, is it then safe to simply delete the contents of that backup folder through a GUI?

Once I accomplish this, I will then move on to asking the proper way to back up to a NAS, because clearly my first attempt failed badly.

If you answer, please consider that I’m very much a Linux noob. If you make a recommendation that can’t be accomplished in the NCP GUI, I will need detailed instructions. Thank you!

I once rescued an NC 12 server, with all the user account data, from a machine whose power supply stopped working. The replacement power supply was obsolete, even on eBay, and to make it even harder, the data was on an IDE HDD.

You may be able to boot a Live Ubuntu session, start its file manager in sudo mode, and copy all your data to a new HDD as a backup before a fresh install.
That’s what I would do; a rough command-line equivalent is sketched below in case the file manager gives you trouble.
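The device names and mount points below are only placeholders; check what lsblk actually reports on your machine before copying anything.

```bash
# See which partition is the old data drive and which is the new HDD
lsblk -f

# Mount both drives (replace /dev/sdb1 and /dev/sdc1 with your real devices)
sudo mkdir -p /mnt/olddrive /mnt/newdrive
sudo mount /dev/sdb1 /mnt/olddrive
sudo mount /dev/sdc1 /mnt/newdrive

# Copy everything, preserving ownership, permissions and hard links
sudo rsync -aHv /mnt/olddrive/ /mnt/newdrive/
```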

You then need to change the ownership and permissions of all the folders and files, because you will have copied them as root through the GUI. In my experience the GUI can also be unreliable in a Live image environment or when working against another system; an example of fixing the ownership is below.
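On a typical Debian/Ubuntu Nextcloud install the data directory belongs to the web server user, www-data. Something along these lines should restore sane ownership and permissions; the path is only an example, so adjust it to wherever your copied data actually lives.

```bash
# Hand ownership back to the web server user (path is an example)
sudo chown -R www-data:www-data /path/to/nextcloud/data

# Commonly recommended Nextcloud permissions: directories 750, files 640
sudo find /path/to/nextcloud/data -type d -exec chmod 750 {} \;
sudo find /path/to/nextcloud/data -type f -exec chmod 640 {} \;
```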

I basically have the same thing to do on my current NC 25 server, which is at 96% disk usage and is shut down at the moment.

How I rescued the old NC 12 server:
I ended up booting my rescue PC (one that still has IDE connectors) from an Ubuntu Live DVD, with the old IDE HDD attached.

Do the update.
Do the upgrade.
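On an Ubuntu Live image those two steps are just the usual apt commands:

```bash
sudo apt update      # refresh the package lists
sudo apt upgrade -y  # apply the available upgrades
```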

I then installed just SSH, without installing the server separately; the SSH server comes in automatically as part of the SSH install.
Then you edit the SSH config to allow root login and restart the service (a rough example is below).
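On Ubuntu that would be roughly the following. Allowing root login over SSH is only sensible for a throwaway rescue session like this, and since the Live image’s root account normally has no password, you need to set one first.

```bash
# Install the OpenSSH server
sudo apt install -y openssh-server

# Give root a password so it can log in over SSH (Live images ship without one)
sudo passwd root

# Permit root login for this temporary session, then restart the service
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo systemctl restart ssh
```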

I then transferred the files to another PC over SSH (a rough rsync sketch is below).
Note: don’t leave a browser open, because it can cause the Live image to crash; it did that to me halfway into the backup.
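A resumable way to do that transfer is rsync over SSH, run from the receiving PC, so a crash only costs you the file that was in flight. The IP address, mount point and destination folder below are placeholders.

```bash
# Run on the receiving PC: pull the data from the Live session.
# Re-running the same command resumes an interrupted transfer.
rsync -aHv --partial --progress \
    root@192.168.1.42:/mnt/olddrive/ /home/me/nc12-backup/
```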

I set up a new install of an NC 24 server on another PC and then synced all the user account data to it.
The whole process took about two weeks to complete.

First of all, you should have a backup of your database and your data at all times; see Backup. Then it is easy to move the data to another server; see Restore.
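The Backup steps in the admin manual boil down to: switch on maintenance mode, dump the database, copy the config and data directories, switch maintenance mode back off. A minimal sketch for a MariaDB/MySQL setup, with all paths, the database name and the database user as placeholders, might be:

```bash
# Block changes while the backup runs
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on

# Dump the database (name and user are placeholders; you will be prompted for the password)
mysqldump --single-transaction -u nextcloud -p nextcloud > nextcloud-db.sql

# Copy config and data to the backup target (paths are examples)
rsync -aHv /var/www/nextcloud/config /var/www/nextcloud/data /mnt/backup/nextcloud/

sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off
```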

If the system is too full, I would clean up first. A very nice application for finding the folders that take up the space is ncdu.
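ncdu is in the standard Debian/Ubuntu repositories. Pointed at the data directory (the path here is just an example) it gives an interactive, sortable view of which folders are eating the space:

```bash
sudo apt install ncdu
sudo ncdu /var/www/nextcloud/data   # adjust to your actual data directory
```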

Two weeks? Can you post a breakdown of where the time went? How much data do you have? Is your network or USB really that slow?


About 120 GB on a Pentium 4 with 1 GB of RAM and a 160 GB IDE HDD, with 4 user accounts; I did each one in turn.
The Live image crashed halfway into the backup, maybe because I left Firefox open on a machine with only 1 GB of RAM.

I had to change the permissions on the drive to 777 to get access to the data. That drive can never be used again in an NC server, but that’s OK.

I have a 1 Gbit backbone (sort of): 4 × 8-port switches at various places in my house. I don’t wire back to one point, because it’s a rental and I am not allowed to drill holes in the walls.

In the Pentium 4 era, consumer devices had 100 Mbit network speeds over the PCI bus.
I started back in the coax/UTP era, installing cabling for 10 Mbit networks.

Installing the new NC server on a 12-year-old i5 with 8 GB of RAM and a 2 TB HDD did not take very long; I used the snap install method.

I synced the data from my backup one user at a time.
I also synced and added 386 GB with 150k images; I think this drives the journaling process in Linux into heavy use on the HDD. I have the same problem with my single-drive NC 25 i5 server.

Hope that helps.

I re-read your first post, but I do not know if this is useful. In the quote above there is a directory path; maybe it belongs to the Backup app, so perhaps you can check that app. I do not use NextcloudPi, so maybe it is included in NextcloudPi. But you also wrote about another Nextcloud instance.

Thanks for the replies, and sorry it has taken a while to get back to you. So far, though, no one has answered my first question: how do I disable the backup that keeps running every time I spin up the server? It is not the Nextcloud Backup app, because that is disabled; it seems to be happening through NextcloudPi.

I had initially tried setting up a backup to a NAS on my local network, but that gave me an error I did not understand (remember, I am very new to this). To remove that backup job, I went to the backup page in the NextcloudPi GUI, cleared the “Destination directory” field, unchecked the “Include data” and “Compress” boxes, set “Number of backups to keep” to zero, and hit Apply.

I don’t know how, but it seems this started a backup task to the same drive where all the data is stored. My drive is rapidly running out of space as the backup runs again every time I start the server.

I think the problem is with the NextcloudPi GUI: there is no visible “On”/“Off” switch to show whether backup is enabled or disabled (important for those of us who are unfamiliar or uncomfortable with the command line). There is also no place in the GUI to see backup tasks that are scheduled or running.

So, please, someone help me disable that backup so I can at least attempt other things, such as setting up a proper backup to my NAS. Thank you.