I’m currently running nextcloudpi in a Docker container, and I’m trying to restore a compressed backup of an old instance, including data. The backup is located on a USB drive, which I have mounted and added as a volume. The WebUI sees the file (“path exists”) and I’m able to click Apply, but it gets stuck when it first tries to extract the compressed files.
I checked htop to see whether the CPU was being used; it barely reached 2%. I saw some events relating to the extraction (the initial commands), but they didn’t do much. The backup is around 50 GB total, but it’s on a USB 3.0 drive, and after an hour of waiting it hadn’t made any further progress. Am I doing something wrong here?
P.S. I’m new to Linux, so this might be a stupid thing I’m doing wrong.
Why don’t you post some screenshots?
Thanks for replying,
nc-restore is stuck at this specific point:
When I SSH into the server and look at htop, it isn’t using any system resources:
However, it repeats the underlined line every so often. I assume this is the extraction command, but again, it’s not using any resources.
If you need more screenshots, let me know.
Update: After leaving it on for a few hours, it now says this message:
That is weird. Maybe try with ncp-config and see if we get some useful output.
Doing so right now. It looks like it might be the same result, however. Little to no CPU usage and no other outputs other than this so far:
extracting backup file /ncp-backups/ncp-data-backups/nextcloud-bkp_20190308_1552076710.tar.gz...
EDIT: Here’s the full line of the command being executed:
EDIT 2: Still not past the extraction. Something is definitely blocking it.
Very interesting. Can you try running the command manually?
tar -I pigz -xvf xxx.tar.gz -C somefolder
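Before running it against the real 50 GB archive, you can verify the syntax end-to-end with a throwaway archive. A minimal sketch, using placeholder paths under /tmp, and gzip in place of pigz in case pigz isn’t installed (with pigz present, tar -I pigz decompresses on multiple cores):

```shell
# Same extraction pattern as above, tested on a tiny throwaway archive.
# Paths are placeholders; gzip stands in for pigz here.
mkdir -p /tmp/bkp-src /tmp/bkp-dst
echo "hello" > /tmp/bkp-src/file.txt
tar -I gzip -cf /tmp/bkp.tar.gz -C /tmp/bkp-src .
tar -I gzip -xvf /tmp/bkp.tar.gz -C /tmp/bkp-dst
```

If that works instantly, the command itself is fine and the problem is throughput on the real archive.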
Alright, doing this now.
I forgot to mention this before, but there are old ncp-restore.XXXXX folders inside the backups folder from when I tried numerous times before.
They all have varying amounts of progress, but it isn’t much in each one. Some have what I assume to be only the database backup (nextcloud-sqlbkp.XXXXXXX) while others actually have a few of my files in them.
So I’m doing it manually using the command. It’s working, but insanely slow. That’s my issue here, from the looks of it.
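For reference, low CPU usage combined with slow progress usually means the process is blocked on I/O rather than computing. One quick way to check, assuming a Linux host, is to watch the cumulative iowait counter in /proc/stat:

```shell
# Field 6 of the "cpu" line in /proc/stat is cumulative iowait ticks.
# If this number climbs while user/system time stays flat, tar is
# waiting on the disk, not the CPU.
awk '/^cpu /{print "iowait ticks:", $6}' /proc/stat
```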
I just checked ncp-restore.6UEFRI9, the one I left running for about four to six hours through ncp-config. It has the most files of any of them, totalling 8 GB out of around 46 GB. I might actually just be a massive idiot here, cancelling it before it can finish. But at the same time, that doesn’t explain why there is little to no CPU usage, or why it is going so slowly. I even tried moving the backup off the USB drive to a local folder inside the container, but that produced an entirely new error:
Can only restore from ext/btrfs filesystems
Done. Press any key...
I am well confused here.
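I don’t know nc-restore’s internals, but that error suggests it checks which filesystem the backup path lives on. You can check that yourself; a sketch using the root mount as a stand-in for the backup path:

```shell
# Show the filesystem type backing a path; nc-restore apparently
# accepts only ext*/btrfs. Replace / with the backup's mount point.
df -T / | awk 'NR==2 {print $2}'
stat -f -c %T /   # same idea via GNU coreutils stat
```

A USB stick formatted as FAT/exFAT, or a path on Docker’s overlay filesystem inside the container, would both fail that check.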
UPDATE: I got it working! I’m just an idiot.
I transferred the backup file to a folder on the Docker host’s hard drive, rather than inside the container or on the USB drive. That did it. It extracted, albeit still a little slowly, and completed the restore from there.
Still confused as to why the restore over usb was slow in the first place, however. Any ideas?
Seems like a hardware issue then; I’m glad it works.
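If you want to confirm the hardware theory, a rough read-speed check with dd is enough to show whether the drive was the bottleneck. A sketch, demonstrated against /tmp here; point if= at a large file on the USB mount to measure the drive itself:

```shell
# Rough read-throughput check: dd reports the rate on its last line.
# /tmp stands in for the USB mount; a healthy USB 3.0 drive should
# read far faster than the restore was progressing.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 status=none
dd if=/tmp/ddtest of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f /tmp/ddtest
```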
Could you please give me a little more info about where you moved the file, and how you pointed nc-restore to a file outside the container volume?