Backup App Out of Memory Errors - workaround?

I am experiencing the out-of-memory errors described in Request for occ documentation and fix for OOM on upload to external storage · Issue #308 · nextcloud/backup · GitHub and have added information to that issue. There seem to be several open OOM issues like that one involving the Backup app and sftp, and none of them appear to be going anywhere. That leaves me with a site rolling forward, no success in transferring backups off the server, let alone getting them offline onto fixed media, and a feeling of being 'deeply unsettled'.

My question here is this: while waiting to see if anything happens with those open issues, is there a decent temporary workaround? What would be the result, say, of using sftp/rsync to just SEND the restore point directory trees to the server backing the “External Storage”? Could I then use occ backup:point:scan to get NextCloud to recognize the presence of those remote Restore Points? I could then script the local server to write restore points to DVDs/BDs that just need to be rotated into/out of the safe (my original plan in any case). Would the NextCloud Backup App then expire ‘extra’ Restore Points on the External Storage as it ought?
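Concretely, something like the sketch below is what I have in mind (untested; every path and hostname is a placeholder, and I would check occ backup:point:scan --help before relying on it):

```bash
#!/bin/bash
# Rough sketch of the proposed workaround -- all paths and hostnames are
# placeholders, not values from the Backup app documentation.
set -euo pipefail

# Where the Backup app keeps restore points locally (adjust to your
# instance's appdata layout).
LOCAL_POINTS="/var/www/nextcloud/data/appdata_xxxxxxxx/backup"

# Host and path backing the "External Storage" mount.
REMOTE="backupuser@backup-host:/srv/nextcloud-backup"

# 1. Push the restore point trees off the server, rate limited so the
#    transfer doesn't starve the instance (bwlimit is in KiB/s).
rsync -a --partial --bwlimit=5000 "$LOCAL_POINTS/" "$REMOTE/"

# 2. Ask the Backup app to rescan so it recognizes the restore points now
#    present on the external storage side.
sudo -u www-data php /var/www/nextcloud/occ backup:point:scan
```

If that proves out, dropping it into a nightly cron entry should cover the "no constant intervention" requirement, with the DVD/BD burning handled by a separate job on the receiving side.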

Has anyone tried this approach? I can rate limit and schedule the transfers, lock and unlock the restore points around the calls, and it ought to be relatively scriptable, yes? Is there anything I need to watch out for in moving these files manually? I’m just leery of putting the effort into this if someone else has already tried it and run into the iceberg :wink: or just has a much better way of doing it.

This is to coordinate a local volunteer effort. I NEED scheduled full and incremental backups to happen and get taken offline without constant intervention. Otherwise, I can see where things will fall behind, we’ll lose data, and we’ll only then find out it didn’t get backed up. Periodic validation, rotation of DVDs, and restore testing are fine, but it can’t depend on nightly… fiddling… to get it to happen, so I have high motivation to find a scriptable, schedulable workaround here as quickly as I can and then work on the long-term fix, well, in the long term.

I also have no problem contributing back in the process if this is more widely useful, but I need to have a viable direction :slight_smile:

I am not using the backup app at all. I rely on snapshotting and rsync for full file backup. I wholeheartedly believe that it is best to tailor your backup strategy to your own needs. As an alternative, I can recommend rsnapshot for easy backup and rollback.
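To give an idea of how it runs in practice, here is a rough example of the cron entries that drive it (interval names have to match the retain lines in your rsnapshot.conf; everything here is an example, not a recommendation of specific settings):

```bash
# Example crontab entries for rsnapshot's rotating snapshots.
# m  h  dom mon dow   command
0  */4 *   *   *   /usr/bin/rsnapshot alpha    # frequent snapshots
30 3   *   *   *   /usr/bin/rsnapshot beta     # daily rotation
0  4   *   *   1   /usr/bin/rsnapshot gamma    # weekly rotation
```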

Add rsync for occasionally offloading the full user files, config files, and a database dump, so that you have the full data, config, and database needed for complete disaster recovery, and I think you will be good.
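Something along these lines for the occasional offload, assuming a MariaDB/MySQL backend and typical install paths (a sketch only; adjust for PostgreSQL or a different layout, and supply database credentials however you normally do):

```bash
#!/bin/bash
# Sketch of the occasional full offload: data directory, config, and a
# database dump. Paths, database name, and destination are assumptions.
set -euo pipefail

NC_ROOT="/var/www/nextcloud"
DEST="backupuser@backup-host:/srv/nextcloud-dr"
STAMP="$(date +%F)"

# (Optional) maintenance mode keeps files and database consistent:
# sudo -u www-data php "$NC_ROOT/occ" maintenance:mode --on

# Database dump (MariaDB/MySQL shown; use pg_dump for PostgreSQL).
mysqldump --single-transaction nextcloud | gzip > "/tmp/nextcloud-db-$STAMP.sql.gz"

# Ship the data directory, the config, and the dump to the offsite box.
rsync -a --delete "$NC_ROOT/data/"   "$DEST/data/"
rsync -a          "$NC_ROOT/config/" "$DEST/config/"
rsync -a          "/tmp/nextcloud-db-$STAMP.sql.gz" "$DEST/db/"

# sudo -u www-data php "$NC_ROOT/occ" maintenance:mode --off
```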


I am starting to come around to your way of thinking, given that the Backup app seems to be flatlining. Backup is also one of the key apps keeping me from upgrading NC past 25.x. I really wish NC would make a clear statement about whether they intend to fix the problems, so that folks could make an informed choice about where to invest effort and when to upgrade.


I may be about to pitch Backup in any case: after I ‘successfully’ used scp to make a remote copy of the Restore Point tree, I am getting SignatureExceptions in the logs for both the local and the remote copies. The local files were not modified by the copy and the remote files appear to be good copies, so this looks like yet another layer of erratic behavior. I don’t like erratic behavior in backups… sigh.
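A quick way to double-check that the transfer itself did not alter anything is a checksum comparison of the two trees (placeholder paths below); it does not explain the SignatureExceptions, but it does rule out transfer corruption:

```bash
#!/bin/bash
# Compare checksums of the local restore point tree against the remote copy.
# Paths and hostnames are examples.
set -euo pipefail

LOCAL_POINTS="/var/www/nextcloud/data/appdata_xxxxxxxx/backup"
REMOTE_HOST="backupuser@backup-host"
REMOTE_POINTS="/srv/nextcloud-backup"

# Hash every file on both sides, then diff the two sorted listings.
(cd "$LOCAL_POINTS" && find . -type f -exec sha256sum {} + | sort -k2) > /tmp/local.sums
ssh "$REMOTE_HOST" "cd '$REMOTE_POINTS' && find . -type f -exec sha256sum {} + | sort -k2" > /tmp/remote.sums

diff /tmp/local.sums /tmp/remote.sums && echo "Copies match." || echo "Copies differ!"
```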

Tbh I am not using rsnapshot myself, as I run my data storage pool under OpenZFS and hence have native snapshotting baked in. However, I have tested rsnapshot and it works well enough that I would trust it.
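For comparison, the native route on OpenZFS is a one-liner per dataset, and snapshots can then be shipped off-box with zfs send/receive (pool and dataset names below are just examples):

```bash
# Snapshot the dataset holding the Nextcloud data (names are examples).
SNAP="tank/nextcloud-data@nightly-$(date +%F)"
zfs snapshot "$SNAP"

# Optionally replicate the snapshot to another host for offsite copies.
zfs send "$SNAP" | ssh backup-host zfs receive backuppool/nextcloud-data
```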