I now have a working NCP system on my Intel NUC running Debian Buster. Almost everything is working fine, except that the NextCloudPi status page still says my internet check is “no”, even though access from outside the home LAN works well.
My question: for some reason I have an xz process running on my system, and when I kill it, it restarts under the root user. A few days ago I had tried to perform a full compressed backup via ncp-config (so logged in with sudo), but that crashed after a couple of TB and I haven’t tried again since. Is it possible that for some reason it thinks it needs to keep retrying that backup?
I have also turned off all auto-updating of apps, NC, and NCP, and disabled the Extract app. Nothing in the logging tab gives any particular hints.
Any ideas or help on diagnosing this?
Please install the program “lsof” with apt-get:
apt-get install lsof
Look at the output of
lsof | grep xz
(run as root or with sudo).
xz is normally not a server process; it is a file compression tool.
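As an aside, a quick way to see xz’s normal role as a stream compressor (plain shell, nothing NCP-specific assumed):

```shell
# Compress a string with xz and decompress it again; the data
# survives the round trip unchanged.
printf 'hello from ncp\n' | xz | xz -d
# When xz shows up attached to pipes (FIFOs) in lsof output, it is
# usually being used exactly like this: as a filter in some other
# program's pipeline.
```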
Thanks for the reply. Yeah, I know xz is not part of the server directly, but I thought it might have been called from the backup activities… Unfortunately my Linux skills aren’t sufficient to fully diagnose the root cause.
I’ve tried your suggestion using lsof.
jonathan@nas:~$ sudo lsof | grep xz
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
lsof: WARNING: can't stat() fuse file system /run/user/1000/doc
Output information may be incomplete.
xz 4013 root cwd DIR 8,2 4096 1572865 /root
xz 4013 root rtd DIR 8,2 4096 2 /
xz 4013 root txt REG 8,2 81192 40896093 /usr/bin/xz
xz 4013 root mem REG 8,2 3036112 40906500 /usr/lib/locale/locale-archive
xz 4013 root mem REG 8,2 1824496 40905254 /usr/lib/x86_64-linux-gnu/libc-2.28.so
xz 4013 root mem REG 8,2 146968 40905874 /usr/lib/x86_64-linux-gnu/libpthread-2.28.so
xz 4013 root mem REG 8,2 158400 40905675 /usr/lib/x86_64-linux-gnu/liblzma.so.5.2.4
xz 4013 root mem REG 8,2 165632 40905016 /usr/lib/x86_64-linux-gnu/ld-2.28.so
xz 4013 root 0r FIFO 0,12 0t0 1954444 pipe
xz 4013 root 1w FIFO 0,12 0t0 1954445 pipe
xz 4013 root 2u REG 0,45 73 1949383 /tmp/#1949383 (deleted)
xz 4013 root 3r FIFO 0,12 0t0 1955324 pipe
xz 4013 root 4w FIFO 0,12 0t0 1955324 pipe
xz 4017 root cwd DIR 8,2 4096 1572865 /root
xz 4017 root rtd DIR 8,2 4096 2 /
xz 4017 root txt REG 8,2 81192 40896093 /usr/bin/xz
xz 4017 root mem REG 8,2 3036112 40906500 /usr/lib/locale/locale-archive
xz 4017 root mem REG 8,2 1824496 40905254 /usr/lib/x86_64-linux-gnu/libc-2.28.so
xz 4017 root mem REG 8,2 146968 40905874 /usr/lib/x86_64-linux-gnu/libpthread-2.28.so
xz 4017 root mem REG 8,2 158400 40905675 /usr/lib/x86_64-linux-gnu/liblzma.so.5.2.4
xz 4017 root mem REG 8,2 165632 40905016 /usr/lib/x86_64-linux-gnu/ld-2.28.so
xz 4017 root 0r FIFO 0,12 0t0 1954446 pipe
xz 4017 root 1w FIFO 0,12 0t0 1956072 pipe
xz 4017 root 2u REG 0,45 73 1949383 /tmp/#1949383 (deleted)
xz 4017 root 3r FIFO 0,12 0t0 1954447 pipe
xz 4017 root 4w FIFO 0,12 0t0 1954447 pipe
Thank you. I was hoping to see the parent process. Can you also post the output of pstree (only the parts relevant to xz)?
You can also use “lsof” on the parent processes.
More details are available in /proc/process-id .
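For reference, a minimal sketch of walking from a PID to its parent using only /proc (the same information pstree reads; the PID argument is whatever xz process you are inspecting):

```shell
#!/bin/sh
# Given a PID, print its parent PID and the parent's command line,
# using only the /proc filesystem.
pid=${1:-$$}                        # default to this shell's own PID
ppid=$(awk '/^PPid:/ {print $2}' "/proc/$pid/status")
echo "parent of $pid is $ppid"
# /proc/<pid>/cmdline is NUL-separated; turn NULs into spaces.
tr '\0' ' ' < "/proc/$ppid/cmdline"; echo
```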
Oh wow, I didn’t know about pstree
│ │ ├─sudo(4018)───btrfs(4022)
│ │ └─xz(4017)
So it looks like it’s related to btrfs-sync, called from a cron job (https://github.com/nachoparker/btrfs-sync)
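To confirm where it is triggered from, one can grep the usual Debian cron locations (these paths are the standard ones; the actual match depends on how btrfs-sync was installed):

```shell
# Search the system-wide cron locations and root's crontab for the
# entry that launches btrfs-sync.
sudo grep -r "btrfs-sync" /etc/crontab /etc/cron.d /etc/cron.daily 2>/dev/null
sudo crontab -l 2>/dev/null | grep "btrfs-sync"
```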
OK. Is there really a problem with “xz”, and thus with “btrfs-sync” (performance, CPU, …)? Post more details, e.g. from “top” or “htop” (install it), and sort by memory in htop.
Have you set the cron entry yourself, or is it the default?
There isn’t a problem as far as a performance hit goes; it’s just that I didn’t understand why it should be running almost constantly. If it is as benign as being part of the automatic NCP snapshots to a second HDD (which I have set up to run daily), and I have added dozens of gigabytes to the server while I’m still loading data in, then perhaps that explains everything. It’s all running on a five-year-old i3.
I think we’re done here. Thanks for the help!
Perhaps it takes too much time and you need to change something.
Sorry, I don’t know auto-ncp-snapshots. Perhaps, because of the different file systems, there is no incremental backup, and a full backup then takes a long time.
If you have daily versions on your second HDD, then stat must list more than 1 (hard) link for a file that has existed unchanged over the last days:
Device: 802h/2044d Inode: 326712 Links: 1
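A small illustration of the hard-link idea (generic shell, not specific to any backup tool): when a daily-version scheme keeps an unchanged file by linking it rather than copying it, the link count that stat reports in its Links field rises above 1.

```shell
# Create a file, add a second hard link to it (as a tool using
# "cp -al" or "rsync --link-dest" would for an unchanged file),
# and check the link count with GNU stat.
tmp=$(mktemp -d)
echo "data" > "$tmp/day1"
ln "$tmp/day1" "$tmp/day2"          # second name for the same inode
stat -c 'Links: %h' "$tmp/day1"     # now reports Links: 2
rm -rf "$tmp"
```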
I’ll have to dig a little deeper when I get home; there’s only so much I can do while SSH’ing in right at the moment 😏…
I was going for something like https://docs.nextcloudpi.com/en/how-to-backup-and-restore-using-nc-snapshot/
Will post again later
So the functionality is called nc-snapshot-sync and is part of NextCloudPi. I’ve decided to simply turn off the automated sync for now and experiment with it further later.