File Sync Fails, cites Data Corrupted, SQLSTATE[XX001]

Recently, my Nextcloud instance has been having issues syncing new files. For background, a little over two weeks ago I expanded the primary disk drive in preparation for upgrading the Ubuntu Server version, with no issues since then. Late last week, a Windows desktop client was attempting to sync several image files when the server hung up and became unresponsive, and I had to reboot it. Since then, I sporadically get the error log below and file sync fails; a few seconds later it picks up and syncs the file. The size of the file doesn’t seem to matter. I’ve tested it with an 80 MB file that failed and then synced, but then a small file, under 1 MB, failed to sync with the same error.

I have a snapshot of my VM, but it’s from two weeks ago, before I expanded the drive. I tried running occ maintenance:repair, but it didn’t seem to do anything.
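(For reference, I ran it along these lines from the Nextcloud root, assuming the usual www-data web server user:)

    # run occ as the web server user from the Nextcloud install directory
    sudo -u www-data php occ maintenance:repair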

I’m not yet able to upgrade to the next version of Nextcloud because my Ubuntu Server release is too old, and I have some PHP dependencies to upgrade before I can update the OS.

Is there anything I can do about this error, or a way to repair the database somehow?

[webdav] Error: An exception occurred while executing a query: SQLSTATE[XX001]: Data corrupted: 7 ERROR: could not read block 18432 in file "base/16385/16815": read only 0 of 8192 bytes
PUT /remote.php/dav/uploads/xxxxxxx

Nextcloud version: 28.0.5
Operating system and version: Ubuntu 20.04.1
Apache or nginx version: Apache2
PHP version: 8.1.28

The issue you are facing: Files fail to sync; the desktop client and Admin > Logging show the SQLSTATE error, then the file eventually syncs.

Is this the first time you’ve seen this error? Y

Steps to replicate it:

  1. Use the Windows Nextcloud desktop client to try to sync some files
  2. The issue does not occur every time; it seems to be random

The output of your Nextcloud log in Admin > Logging:

    /var/www/nextcloud/3rdparty/doctrine/dbal/src/Connection.php line 1938

    Doctrine\DBAL\Driver\API\PostgreSQL\ExceptionConverter->convert()

    /var/www/nextcloud/3rdparty/doctrine/dbal/src/Connection.php line 1880

    Doctrine\DBAL\Connection->handleDriverException()

    /var/www/nextcloud/3rdparty/doctrine/dbal/src/Connection.php line 1208

    Doctrine\DBAL\Connection->convertExceptionDuringQuery()

    /var/www/nextcloud/lib/private/DB/Connection.php line 294

    Doctrine\DBAL\Connection->executeStatement()

    /var/www/nextcloud/3rdparty/doctrine/dbal/src/Query/QueryBuilder.php line 386

    OC\DB\Connection->executeStatement()

    /var/www/nextcloud/lib/private/DB/QueryBuilder/QueryBuilder.php line 280

    Doctrine\DBAL\Query\QueryBuilder->execute()

    /var/www/nextcloud/lib/private/Files/Cache/Cache.php line 407

    OC\DB\QueryBuilder\QueryBuilder->execute()

    /var/www/nextcloud/lib/private/Files/Cache/Cache.php line 272

    OC\Files\Cache\Cache->update(
      "*** sensitive parameters replaced ***"
    )

    /var/www/nextcloud/lib/private/Files/View.php line 1589

    OC\Files\Cache\Cache->put()

    /var/www/nextcloud/apps/dav/lib/Connector/Sabre/File.php line 401

    OC\Files\View->putFileInfo()

    /var/www/nextcloud/apps/dav/lib/Connector/Sabre/Directory.php line 148

    OCA\DAV\Connector\Sabre\File->put()

    /var/www/nextcloud/apps/dav/lib/Upload/UploadFolder.php line 51

    OCA\DAV\Connector\Sabre\Directory->createFile(
      "*** sensitive parameters replaced ***"
    )

    /var/www/nextcloud/apps/epubviewer/vendor/sabre/dav/lib/DAV/Server.php line 1098

    OCA\DAV\Upload\UploadFolder->createFile(
      "*** sensitive parameters replaced ***"
    )

    /var/www/nextcloud/apps/epubviewer/vendor/sabre/dav/lib/DAV/CorePlugin.php line 504

    Sabre\DAV\Server->createFile(
      "*** sensitive parameters replaced ***"
    )

    /var/www/nextcloud/apps/epubviewer/vendor/sabre/event/lib/WildcardEmitterTrait.php line 89

    Sabre\DAV\CorePlugin->httpPut()

    /var/www/nextcloud/apps/epubviewer/vendor/sabre/dav/lib/DAV/Server.php line 472

    Sabre\DAV\Server->emit()

    /var/www/nextcloud/apps/epubviewer/vendor/sabre/dav/lib/DAV/Server.php line 253

    Sabre\DAV\Server->invokeMethod()

    /var/www/nextcloud/apps/epubviewer/vendor/sabre/dav/lib/DAV/Server.php line 321

    Sabre\DAV\Server->start()

    /var/www/nextcloud/apps/dav/lib/Server.php line 373

    Sabre\DAV\Server->exec()

    /var/www/nextcloud/apps/dav/appinfo/v2/remote.php line 35

    OCA\DAV\Server->exec()

    /var/www/nextcloud/remote.php line 172

    require_once(
      "/var/www/nextcloud/apps/dav/appinfo/v2/remote.php"
    )

Caused by Exception SQLSTATE[XX001]: Data corrupted: 7 ERROR: could not read block 18432 in file "base/16385/16815": read only 0 of 8192 bytes

You didn’t mention what database you’re using, but I assume PostgreSQL.

Something led to corruption in your database. There’s a chance it’s a bug in the particular PostgreSQL version you’re using, but it’s more likely the improper shutdown or some other hardware issue.

You won’t be able to repair it using anything built into Nextcloud itself. This is a matter that must be addressed within your chosen database implementation.

You can try searching for "XX001: Data corrupted: 7 ERROR: could not read block" or similar in your favorite search engine for further resources.

Hopefully you have a backup from before the corruption appeared.

I apologize; yes, I’m using PostgreSQL.

The backup/snapshot I have is from two weeks ago, and the issue first appeared a few days ago. I could roll back to that, but I’d lose over two weeks of data in Nextcloud by rolling back that far. :frowning:

Thank you for the links, I will have to review those and see if I can glean anything from that.

I’m not entirely sure what caused it, other than whatever caused the server lockup during that one sync the desktop client was doing.

> I’m not entirely sure what caused it, other than whatever caused the server lockup during that one sync the desktop client was doing.

There may be some evidence in your server’s logs (e.g. journalctl and/or /var/log), particularly the kernel/console logs, that will help you isolate the root cause.
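A couple of starting points, assuming systemd and the stock Ubuntu log layout:

    # kernel messages from the previous boot (the one that locked up)
    journalctl -k -b -1
    # scan the syslog for I/O or hardware errors around the hang
    grep -iE 'error|fail|i/o' /var/log/syslog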

> The backup/snapshot I have is from two weeks ago, and the issue first appeared a few days ago. I could roll back to that, but I’d lose over two weeks of data in Nextcloud by rolling back that far.

Yeah, that’s never fun. Sorry to hear it. I know this won’t help you go back in time, but for the future: a once-a-day (or more!) automated dump (backup) of your database might be a good time investment to protect against this sort of thing: Backup — Nextcloud Administration Manual

I have a script set to back up my whole VM nightly, which I’ve had to use in the past due to failed NC upgrades and such, but it overwrites the backup file, so at this point the only backup outside of the snapshot will likely have the corruption in it too.

Gotcha. I’d suggest dumping the DB separately from data/OS/etc. (so, something outside of a VM snapshot).

DB dumps tend to compress well, so it’s easier to keep many days’/weeks’/months’ worth around.
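A nightly cron job along these lines would cover it. A minimal sketch, assuming the database is named nextcloud and a /backups directory exists:

    # custom-format (-Fc) dumps are compressed; one file per weekday
    # gives a self-rotating seven-day history
    sudo -u postgres pg_dump -Fc nextcloud -f /backups/nextcloud-$(date +%a).dump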

Given the Ubuntu, PostgreSQL, and PHP versions on my existing server, I’m considering standing up a brand-new/current install. Obviously the issue then is backing up my current instance and restoring it.

If I do a dump of my current PostgreSQL database, would that dump also include any corrupted table data? I see there is documentation on migrating servers, backup and restore, etc., but I assume the standard procedures are meant for healthy databases?

I got some assistance on a PostgreSQL forum. I was able to query the DB and determine that the regclass of the referenced OID is the following:

    fce_ctime_idx
    (1 row)
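(The lookup itself was roughly along these lines, where 16815 is the filenode from the error message and 0 means the default tablespace. The database name nextcloud is just my setup:)

    # map the filenode from the error back to a relation name
    # (assumes the Nextcloud database is named "nextcloud")
    sudo -u postgres psql nextcloud -c "SELECT pg_filenode_relation(0, 16815);"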

I’m assuming this has something to do with time or time settings? The PostgreSQL suggestion was that if the item is an index, I’m in luck and would just need to drop and re-create it. Since the name ends in idx, I’m assuming it is indeed an index? If that’s the case, how would I drop the index and rebuild it?

I took a snapshot of my VM, used the PostgreSQL console, and dropped the index. Rebuilt it, and so far in testing I’ve not seen the error return. Will keep testing, but hoping this is all it was.
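For anyone who hits the same thing later: fce_ctime_idx appears to be the ctime index on Nextcloud’s oc_filecache_extended table, so it holds no primary data and is safe to rebuild. PostgreSQL’s REINDEX does the drop and re-create in one step. A minimal sketch, assuming the database is named nextcloud:

    # rebuild the corrupted index in place; this blocks writes to the
    # underlying table while it runs, so pick a quiet moment
    sudo -u postgres psql nextcloud -c "REINDEX INDEX fce_ctime_idx;"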