Shares broken after server migration

Nextcloud version (eg, 20.0.5): 25.0.4
Operating system and version (eg, Ubuntu 20.04): Docker image
Apache or nginx version (eg, Apache 2.4.25): from Docker image
PHP version (eg, 7.4): from Docker image

The issue you are facing:

We heavily use Group Folders.

  1. I migrated a larger Nextcloud instance from a server to a Docker environment.
  2. I copied the database and rsynced the files.
  3. I tested it; everything appeared to be fine.
  4. I copied the database again and rsynced the files again.
     I forgot to set the rsync option that deletes files on the target which had been deleted on the source system.

Now we have some duplicated and re-appearing files.
But the really bad thing is we have many broken shares.

When I add the same share again, I see that the item_source and file_source values in oc_share differ from those of the old entry.
When I patch the new values into the old share, the share works again.

An example:

id,  item_source,file_source,file_target

I see duplicate entries in oc_filecache:

fileid,path,                                  parent,name,                 size,    mtime,     storage_mtime

I believe the file store was re-indexed, and these are the old and new IDs.
When I swap the old value for the new one in item_source and file_source in oc_share, the link works again.
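The per-share fix described above can be sketched in Python; this only builds the UPDATE statement, it does not touch the database, and the share id and fileids below are made up for illustration:

```python
def fix_share_sql(share_id, old_fileid, new_fileid):
    """Build the UPDATE that repoints one broken share to the new fileid."""
    return (
        f"UPDATE oc_share SET item_source={new_fileid}, "
        f"file_source={new_fileid} WHERE id={share_id}; -- was {old_fileid}"
    )

# invented example ids
print(fix_share_sql(42, 632822, 764547))
# -> UPDATE oc_share SET item_source=764547, file_source=764547 WHERE id=42; -- was 632822
```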

Is there a way to fix this systematically?

Is this the first time you’ve seen this error? (Y):

The output of your Nextcloud log in Admin > Logging:

[no app in context] Warnung: OCP\Files\StorageNotAvailableException: File by id 479574 not found at <<closure>>

 0. /var/www/html/lib/private/Files/Storage/Wrapper/Jail.php line 344
 1. /var/www/html/lib/private/Files/Storage/Wrapper/Wrapper.php line 334
 2. /var/www/html/lib/private/legacy/OC_Helper.php line 521
 3. /var/www/html/apps/files/lib/Helper.php line 50
 4. /var/www/html/apps/files/lib/Controller/AjaxController.php line 46
 5. /var/www/html/lib/private/AppFramework/Http/Dispatcher.php line 225
 6. /var/www/html/lib/private/AppFramework/Http/Dispatcher.php line 133
    OC\AppFramework\Http\Dispatcher->executeController(["OCA\\Files\\Co ... "], "getStorageStats")
 7. /var/www/html/lib/private/AppFramework/App.php line 172
    OC\AppFramework\Http\Dispatcher->dispatch(["OCA\\Files\\Co ... "], "getStorageStats")
 8. /var/www/html/lib/private/Route/Router.php line 298
    OC\AppFramework\App::main("OCA\\Files\\Controller\\AjaxController", "getStorageStats", ["OC\\AppFramewo ... "], ["files.ajax.getStorageStats"])
 9. /var/www/html/lib/base.php line 1047
10. /var/www/html/index.php line 36

GET /apps/files/ajax/getstoragestats?dir=%2FLandesgesch%C3%A4ftsstelle
from by XXXXXXXXXXXXX at 2023-03-15T21:41:22+00:00

The output of your config.php file in /path/to/nextcloud (make sure you remove any identifiable information!):

  'htaccess.RewriteBase' => '/',
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'apps_paths' => 
  array (
    0 => 
    array (
      'path' => '/var/www/html/apps',
      'url' => '/apps',
      'writable' => false,
    ),
    1 => 
    array (
      'path' => '/var/www/html/custom_apps',
      'url' => '/custom_apps',
      'writable' => true,
    ),
  ),
  'memcache.distributed' => '\\OC\\Memcache\\Redis',
  'memcache.locking' => '\\OC\\Memcache\\Redis',
  'redis' => 
  array (
    'host' => 'nextcloud-redis',
    'password' => 'XXXXXX',
    'port' => 6379,
  ),
  'overwritehost' => 'XXXXXX',
  'overwriteprotocol' => 'https',
  'overwrite.cli.url' => 'https://XXXXXX',
  'trusted_proxies' => 
  array (
    0 => '',
  ),
  'instanceid' => 'XXXXXX',
  'passwordsalt' => 'XXXXXX+XXXXXX',
  'secret' => 'XXXXXX+XXXXXX',
  'trusted_domains' => 
  array (
    0 => 'XXXXXX',
  ),
  'datadirectory' => '/var/www/html/data',
  'dbtype' => 'mysql',
  'version' => '',
  'dbname' => 'nextcloud',
  'dbhost' => 'nextcloud-db',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'nextcloud',
  'dbpassword' => 'XXXXXX',
  'installed' => true,
  'default_language' => 'de',
  'default_phone_region' => 'de',
  'mail_smtpmode' => 'smtp',
  'mail_sendmailmode' => 'smtp',
  'mail_smtphost' => 'XXXXXX',
  'mail_from_address' => 'cloud',
  'mail_domain' => 'XXXXXX',
  'mail_smtpauthtype' => 'LOGIN',
  'mail_smtpauth' => 1,
  'mail_smtpport' => '587',
  'mail_smtpname' => 'XXXXXX',
  'mail_smtppassword' => 'XXXXXX',
  'mail_smtpsecure' => 'tls',
  'simpleSignUpLink.shown' => false,
  'trashbin_retention_obligation' => 'auto, 10',
  'maintenance' => false,

Thanks for helping!

hello @mklemme welcome to the forum :handshake:

it’s hard to say how the issue came about. There must have been more moving parts, as your description doesn’t fully explain the symptoms.

  • If you have extra files in the file system (rsync without deleting), these files don’t appear in the application at all… they sit on the storage and consume space, but Nextcloud ignores them until you re-index the storage, e.g. with the occ files:scan command.
  • But that alone would not create duplicate records in the oc_filecache table… most likely the file location differs somehow, which results in a new file ID.
  • “Duplicate” files in the oc_filecache table must differ from the application’s point of view, e.g. a different mount ID or something else…
  • That is why the share doesn’t work: when the system looks for the “initial” file, it doesn’t find it… when you replace the file ID in the share, it points to the right location again…

It’s hard to say how you can recover from this. If you can, I would recommend returning to a working backup and starting over from there. If you can’t, try to understand the issue and repair the DB (definitely not the simplest task). You might find the scripts and SQL queries in this topic useful (review the whole topic; there are tons of different approaches and references): Desktop client 3.4.0 destroys local time stamp and keeps uploading data to server… The focus there was different, but it might help you at least on the files side.

Hello @wwe
Thanks for responding.

I found out something more:

As mentioned, I often find two entries in oc_filecache, an old one and a new one.
Each pair has two different values in the storage field.
The storage field in oc_filecache refers to entries in oc_storages.
And there I find, for the two values,
local::[location on old server] and local::[location on new server]
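That lookup can be automated; a minimal sketch, assuming the oc_storages rows are available as dicts. The paths below are invented placeholders — only the numeric ids 157 and 321 come from this thread:

```python
def find_storage_ids(storages, old_prefix, new_prefix):
    """Pick the numeric_id of the old and new local storage from oc_storages
    rows, matching on the local::<datadir> id string."""
    old_id = new_id = None
    for s in storages:
        if s["id"].startswith("local::" + old_prefix):
            old_id = s["numeric_id"]
        elif s["id"].startswith("local::" + new_prefix):
            new_id = s["numeric_id"]
    return old_id, new_id

# invented example rows; real oc_storages ids contain your data directories
rows = [
    {"numeric_id": 157, "id": "local::/srv/nextcloud/data/"},
    {"numeric_id": 321, "id": "local::/var/www/html/data/"},
]
print(find_storage_ids(rows, "/srv/nextcloud/data", "/var/www/html/data"))
# -> (157, 321)
```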


 {'fileid': 632822, 'storage': 157,  'path_hash': '13482d5aced737348db8a2763864ddda', 'parent': 418674, 'name': '18', 'mimetype': 2, 'mimepart': 1, 'size': 4227163, 'mtime': 1676457686, 'storage_mtime': 1667570529, 'encrypted': 0, 'unencrypted_size': 0, 'etag': '63ecb6d64d90f', 'permissions': 31, 'checksum': ''}
 {'fileid': 764547, 'storage': 321, 'path_hash': '13482d5aced737348db8a2763864ddda', 'parent': 764534, 'name': '18', 'mimetype': 2, 'mimepart': 1, 'size': 11749951, 'mtime': 1678784170, 'storage_mtime': 1676457686, 'encrypted': 0, 'unencrypted_size': 0, 'etag': '641036aa4e368', 'permissions': 31, 'checksum': ''}

That was surprising to me!
On the old server I have routinely copied the Nextcloud instance to a test website to test new releases.
When we were still using Owncloud I always had to run an SQL script for the test site.

Seeing this I feel confident that I can fix the shares by some SQL updates.
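The core of such a fix — pairing the old and new oc_filecache rows — can be sketched as pure Python: group rows by path_hash, then map the fileid on the old storage to the fileid on the new one. The rows below reuse the two sample entries from this thread, trimmed to the relevant columns:

```python
def pair_by_path_hash(rows, old_storage, new_storage):
    """Map old fileid -> new fileid for rows that share a path_hash."""
    by_hash = {}
    for r in rows:
        by_hash.setdefault(r["path_hash"], {})[r["storage"]] = r["fileid"]
    return {
        v[old_storage]: v[new_storage]
        for v in by_hash.values()
        if old_storage in v and new_storage in v
    }

rows = [
    {"fileid": 632822, "storage": 157, "path_hash": "13482d5aced737348db8a2763864ddda"},
    {"fileid": 764547, "storage": 321, "path_hash": "13482d5aced737348db8a2763864ddda"},
]
print(pair_by_path_hash(rows, 157, 321))  # -> {632822: 764547}
```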



I wrote a program following my idea from yesterday that fixes the entries in oc_share to point to the new entries in oc_filecache.
It worked well, it fixed 460 shares.

I’ll attach it here in case it helps somebody else.
Use at your own risk!

import mysql.connector

## connection parameters redacted -- adjust to your environment
connection = mysql.connector.connect(
    host="nextcloud-db", user="nextcloud", password="XXXXXX", database="nextcloud"
)

oldStorageID = 157  ## old location in oc_storages
newStorageID = 321  ## new location in oc_storages

cursor = connection.cursor(dictionary=True, buffered=True)
cursor2 = connection.cursor(dictionary=True, buffered=True)

## iterate over all shares together with their cached file entry
cursor.execute("""
    SELECT s.id, s.item_source, s.file_source, s.file_target,
           fc.path, fc.path_hash, fc.storage
    FROM oc_share s
    JOIN oc_filecache fc ON (s.item_source = fc.fileid)
""")
for row in cursor.fetchall():
    if row["storage"] == oldStorageID:
        ## this share still points to the old storage and can be fixed in principle;
        ## find out whether the same path exists on the new storage
        cursor2.execute(
            """
            SELECT fc.fileid, fc.path, fc.parent, fc.name,
                   fc.size, fc.mtime, fc.storage_mtime
            FROM oc_filecache fc
            WHERE fc.path_hash = %s AND fc.storage = %s
            """,
            (row["path_hash"], newStorageID),
        )
        result2 = cursor2.fetchall()
        if len(result2) == 0:
            print("  ERROR: no new filecache entry found")
            print("oc_share", row)
        elif len(result2) > 1:
            print("  ERROR: too many filecache entries found")
        else:
            row2 = result2[0]
            print("  new filecache entry", row2)
            ## print the fix as SQL so it can be reviewed before applying
            print(
                f"UPDATE oc_share SET item_source={row2['fileid']}, "
                f"file_source={row2['fileid']} WHERE id={row['id']}; "
                f"-- was {row['item_source']}"
            )


hi @mklemme, very glad you solved the problem.

Compliments on your SQL script. I know the basics of SQL and can follow the code, but I would never have crafted such a nice solution. The script looks very clean and shows a high level of SQL proficiency.

This topic was automatically closed after 15 days. New replies are no longer allowed.