Postgres database failing with errors

Hi all,

Last week my Postgres container stopped working, and I managed to get it up and running again today.
After a few hours it stopped working again. Does anyone know what is causing this problem and how I can fix it?

Thanks!

I’m running the latest version of Nextcloud 21 on Docker in unRAID.

```
PostgreSQL Database directory appears to contain a database; Skipping initialization

2021-08-30 11:16:51.006 CEST [1] LOG: starting PostgreSQL 13.4 (Debian 13.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2021-08-30 11:16:51.006 CEST [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2021-08-30 11:16:51.006 CEST [1] LOG: listening on IPv6 address "::", port 5432
2021-08-30 11:16:51.182 CEST [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-08-30 11:16:51.250 CEST [27] LOG: database system was shut down at 2021-08-28 21:25:27 CEST
2021-08-30 11:16:51.330 CEST [1] LOG: database system is ready to accept connections
2021-08-30 11:22:18.831 CEST [28] LOG: could not link file "pg_wal/000000010000000600000038" to "pg_wal/00000001000000060000003A": Function not implemented
2021-08-30 11:25:01.312 CEST [243] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 11:25:01.312 CEST [243] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:02:03.616 CEST [28] LOG: could not link file "pg_wal/xlogtemp.28" to "pg_wal/00000001000000060000003A": Function not implemented
2021-08-30 18:02:03.618 CEST [28] ERROR: could not open file "pg_wal/00000001000000060000003A": No such file or directory

2021-08-30 18:07:07.948 CEST [28] LOG: could not link file "pg_wal/xlogtemp.28" to "pg_wal/00000001000000060000003A": Function not implemented
2021-08-30 18:07:07.949 CEST [28] ERROR: could not open file "pg_wal/00000001000000060000003A": No such file or directory

2021-08-30 18:12:11.356 CEST [28] LOG: could not link file "pg_wal/xlogtemp.28" to "pg_wal/00000001000000060000003A": Function not implemented
2021-08-30 18:12:11.358 CEST [28] ERROR: could not open file "pg_wal/00000001000000060000003A": No such file or directory

2021-08-30 18:17:15.475 CEST [28] LOG: could not link file "pg_wal/xlogtemp.28" to "pg_wal/00000001000000060000003A": Function not implemented
2021-08-30 18:17:15.477 CEST [28] ERROR: could not open file "pg_wal/00000001000000060000003A": No such file or directory

2021-08-30 18:22:18.871 CEST [28] LOG: could not link file "pg_wal/xlogtemp.28" to "pg_wal/00000001000000060000003A": Function not implemented
2021-08-30 18:22:18.873 CEST [28] ERROR: could not open file "pg_wal/00000001000000060000003A": No such file or directory

2021-08-30 18:26:00.259 CEST [6353] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.259 CEST [6353] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:00.293 CEST [6355] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.293 CEST [6355] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:00.328 CEST [6356] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.328 CEST [6356] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:00.488 CEST [6357] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.488 CEST [6357] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:00.521 CEST [6358] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.521 CEST [6358] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:00.558 CEST [6359] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.558 CEST [6359] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:00.665 CEST [6361] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.665 CEST [6361] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:00.696 CEST [6362] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.696 CEST [6362] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:00.732 CEST [6363] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.732 CEST [6363] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:00.783 CEST [6364] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.783 CEST [6364] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:00.816 CEST [6365] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.816 CEST [6365] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:00.888 CEST [6366] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.888 CEST [6366] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:00.923 CEST [6367] ERROR: relation "oc_maps_photos" does not exist at character 13

2021-08-30 18:26:00.923 CEST [6367] STATEMENT: DELETE FROM "oc_maps_photos" where "file_id" = $1
2021-08-30 18:26:43.364 CEST [6465] LOG: could not link file "pg_wal/xlogtemp.6465" to "pg_wal/00000001000000060000003A": Function not implemented
2021-08-30 18:26:43.365 CEST [6465] PANIC: could not open file "pg_wal/00000001000000060000003A": No such file or directory
2021-08-30 18:26:43.369 CEST [1] LOG: server process (PID 6465) was terminated by signal 6: Aborted
2021-08-30 18:26:43.369 CEST [1] DETAIL: Failed process was running: UPDATE "oc_file_locks" SET "lock" = "lock" + 1, "ttl" = $1 WHERE "key" = $2 AND "lock" >= 0

2021-08-30 18:26:43.369 CEST [1] LOG: terminating any other active server processes
2021-08-30 18:26:43.369 CEST [6466] WARNING: terminating connection because of crash of another server process

2021-08-30 18:26:43.369 CEST [6466] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-08-30 18:26:43.369 CEST [6466] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2021-08-30 18:26:43.369 CEST [31] WARNING: terminating connection because of crash of another server process

2021-08-30 18:26:43.369 CEST [31] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-08-30 18:26:43.369 CEST [31] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2021-08-30 18:26:43.372 CEST [1] LOG: all server processes terminated; reinitializing
2021-08-30 18:26:43.477 CEST [6467] LOG: database system was interrupted; last known up at 2021-08-30 18:22:18 CEST
2021-08-30 18:26:43.477 CEST [6468] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.477 CEST [6469] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.477 CEST [6470] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.479 CEST [6471] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.479 CEST [6472] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.479 CEST [6473] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.526 CEST [6474] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.526 CEST [6475] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.527 CEST [6476] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.527 CEST [6477] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.528 CEST [6478] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.528 CEST [6479] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.574 CEST [6480] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.574 CEST [6481] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.575 CEST [6482] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.575 CEST [6483] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.576 CEST [6484] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.576 CEST [6485] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.590 CEST [6486] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.591 CEST [6487] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.672 CEST [6488] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.672 CEST [6489] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.674 CEST [6491] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.674 CEST [6490] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.674 CEST [6492] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.675 CEST [6493] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.741 CEST [6494] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.741 CEST [6495] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.741 CEST [6496] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.742 CEST [6497] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.742 CEST [6498] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.743 CEST [6499] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.764 CEST [6467] LOG: database system was not properly shut down; automatic recovery in progress
2021-08-30 18:26:43.770 CEST [6467] LOG: redo starts at 6/39CB9E50
2021-08-30 18:26:43.783 CEST [6467] LOG: redo done at 6/39FFFF70
2021-08-30 18:26:43.813 CEST [6500] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.813 CEST [6501] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.813 CEST [6502] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.814 CEST [6503] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.814 CEST [6504] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.815 CEST [6505] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.868 CEST [6506] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.869 CEST [6507] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.880 CEST [6508] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.881 CEST [6509] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.929 CEST [6510] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.929 CEST [6511] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.930 CEST [6512] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.930 CEST [6513] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.932 CEST [6514] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.933 CEST [6515] FATAL: the database system is in recovery mode
2021-08-30 18:26:43.989 CEST [6467] LOG: could not link file "pg_wal/xlogtemp.6467" to "pg_wal/00000001000000060000003A": Function not implemented
2021-08-30 18:26:43.990 CEST [6467] PANIC: could not open file "pg_wal/00000001000000060000003A": No such file or directory
2021-08-30 18:26:43.994 CEST [1] LOG: startup process (PID 6467) was terminated by signal 6: Aborted
2021-08-30 18:26:43.994 CEST [1] LOG: aborting startup due to startup process failure
2021-08-30 18:26:44.015 CEST [1] LOG: database system is shut down
```

It looks like your DB can’t start for some reason… initially it starts, detects inconsistent DB files, and begins the recovery process… but then it terminates.

I have no proof so far, but it looks like there are issues when the DB process accesses files during/after writing.

I recommend you double-check that the disk is healthy and the file system has no issues; if the problem persists, maybe ask the Postgres community for support and recovery options.
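The repeated "could not link file … Function not implemented" lines point at the filesystem itself: PostgreSQL recycles WAL segments with the `link()` syscall, and "Function not implemented" (ENOSYS) means the filesystem doesn’t support it. A quick way to check any suspect mount (the temp directory below is just a stand-in; point it at e.g. your appdata share instead) is:

```shell
# Test whether a directory's filesystem supports hard links.
# PostgreSQL's WAL recycling uses link(); on a filesystem without
# hard-link support it fails with "Function not implemented".
dir=$(mktemp -d)          # substitute the mount you want to test
touch "$dir/a"
if ln "$dir/a" "$dir/b" 2>/dev/null; then
  echo "hard links supported"
else
  echo "hard links NOT supported - expect pg_wal link errors"
fi
rm -rf "$dir"
```

On a normal local filesystem this prints "hard links supported"; on a mount that can’t create hard links you’d see the failure branch, matching the pg_wal errors in the log above.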

I think I found the problem. After reading here: [Support] jj9987 - PostgreSQL - Docker Containers - Unraid

I changed my data path for postgres from /mnt/user/appdata/postgres to /mnt/cache/appdata/.postgres

I renamed the folder to start with a leading dot, which prevents the mover in unRAID from touching it. Accessing the folder directly on the cache drive bypasses the FUSE filesystem unRAID uses for /mnt/user shares. The database has been up for a few hours now with no problems so far. 🙂
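For anyone doing the same in plain Docker rather than the unRAID template, the change amounts to pointing the bind mount straight at the cache path (container name, image tag, and password below are illustrative, not from the original setup):

```shell
# Example only: bind-mount the Postgres data dir directly from the
# cache drive, bypassing unRAID's fuse-based /mnt/user path.
docker run -d --name nextcloud-postgres \
  -v /mnt/cache/appdata/.postgres:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=changeme \
  postgres:13
```

The key part is the `-v` source path: `/mnt/cache/...` instead of `/mnt/user/...`.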


Great that you made progress. From what you say, I’m surprised your DB was working before… but maybe there was an update which changed the underlying system…

Umm… how would we all know here how you set up your instance? Have you written about your setup, especially unRAID, etc.?

Well, next time please come with more info about your setup.
