ocs/v2.php/apps/serverinfo/api/v1/info takes 30-50 seconds to execute

Nextcloud version: 18.0.4
Operating system and version: Debian 10 (buster)
Apache or nginx version: nginx 1.14.2-2+deb10u1
PHP version: 7.3

The issue you are facing:

As per the subject, the serverinfo API endpoint takes 30-50 seconds to execute. It took ~7 seconds when I first started using it; here it took 30:

$ time curl --user netdata http://localhost/nc/ocs/v2.php/apps/serverinfo/api/v1/info?format=json
Enter host password for user 'netdata':
{"ocs":{"meta":{"status":"ok","statuscode":200,"message":"OK"},"data":{"nextcloud":{"system":{"version":"18.0.4.2","theme":"","enable_avatars":"yes","enable_previews":"yes","memcache.local":"\\OC\\Memcache\\Redis","memcache.distributed":"\\OC\\Memcache\\Redis","filelocking.enabled":"yes","memcache.locking":"\\OC\\Memcache\\Redis","debug":"no","freespace":286568873984,"cpuload":[0.67,0.59,0.5],"mem_total":8169572,"mem_free":5268740,"swap_total":0,"swap_free":0,"apps":{"num_installed":50,"num_updates_available":0,"app_updates":[]}},"storage":{"num_users":75,"num_files":21194502,"num_storages":699,"num_storages_local":1,"num_storages_home":631,"num_storages_other":67},"shares":{"num_shares":156,"num_shares_user":39,"num_shares_groups":0,"num_shares_link":87,"num_shares_mail":2,"num_shares_room":13,"num_shares_link_no_password":87,"num_fed_shares_sent":0,"num_fed_shares_received":0,"permissions_10_19":12,"permissions_10_31":1,"permissions_3_4":2,"permissions_0_15":3,"permissions_0_31":28,"permissions_3_31":1,"permissions_0_19":5,"permissions_0_1":3,"permissions_4_1":2,"permissions_3_1":64,"permissions_11_0":15,"permissions_3_15":6,"permissions_3_3":14}},"server":{"webserver":"nginx\/1.14.2","php":{"version":"7.3.14","memory_limit":536870912,"max_execution_time":3600,"upload_max_filesize":1048576000},"database":{"type":"pgsql","version":"PostgreSQL 11.7 (Debian 11.7-0+deb10u1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit","size":18460082847}},"activeUsers":{"last5minutes":6,"last1hour":28,"last24hours":45}}}}
real    0m30,890s
user    0m0,018s
sys     0m0,000s

The server seems to be otherwise responsive.
Note that the huge number of files is due to 3 external SMB storages; I think that's what caused the problem, but I have now removed those storages and the number of files didn't change.

If I look at the netdata metrics, I see that PostgreSQL is returning more than 10,000,000 (ten million) tuples at the time of that query (by the way, I have seen spikes of 100 million tuples at other times).

Note that those external storages hold around 3 million files, so I don't know where those 21 million come from.
Maybe Nextcloud is indexing the same file once per user?
I also don't understand why it reports num_storages: 699, num_storages_home: 631 and num_storages_other: 67; I only ever had 3 external storages (all of them removed at the time of the query).
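
To see where those 21 million entries come from, the filecache can be counted per storage directly in the database; a minimal sketch, assuming the default oc_ table prefix:

-- Count filecache rows per storage, largest first, so orphaned SMB
-- storages stand out (pgsql syntax, as used on this server).
SELECT s.numeric_id, s.id AS storage_backend, COUNT(f.fileid) AS num_files
FROM oc_storages s
LEFT JOIN oc_filecache f ON f.storage = s.numeric_id
GROUP BY s.numeric_id, s.id
ORDER BY num_files DESC
LIMIT 20;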

The serverinfo API is back to its normal ~5-second response time, but it still reports 21 million files and 699 storages.

I think you can refer to the thread "Files amount after moving of data directory is wrong (much bigger)"; I had a similar issue and needed to delete the unused external and internal storages.


Thank you, it's definitely that, but it's not clear to me how to recover. In my case I have just one local storage, a bunch of "home" storages, and the same "smb" storages repeated for each user. I'm tempted to file a bug report, since I did nothing outside Nextcloud to cause this issue.
Edit: a funny (or not so funny) detail: all the SMB storages have a "1" in the "available" field, in spite of having been deleted from the Nextcloud interface.

Recovery takes a few steps.

Step 1. Find out what is wrong in the DB by listing all storages, shares, data folders etc. Execute a query in a shell connected to your DB, e.g. mysql/MariaDB (psql in your case):
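
A minimal sketch of such a query, assuming the default oc_ table prefix:

-- List every storage Nextcloud knows about; stale SMB mounts will still
-- show up here. oc_share and oc_mounts can be inspected the same way.
SELECT * FROM oc_storages;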

You will see a table and should identify which storages there are wrong (stale).

Step 2.

Step 3.

The numeric storage ID is in the second column of the table from Step 1.
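
For illustration only, assuming the default oc_ table prefix and using 1234 as a placeholder for that numeric ID, you can check how many filecache rows still reference the stale storage:

-- 1234 is a placeholder; replace it with the numeric_id of the stale
-- storage identified in Step 1.
SELECT COUNT(*) FROM oc_filecache WHERE storage = 1234;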

Step 4. Do some DB cleanup:
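
What such a cleanup can look like, as a sketch only (default oc_ table prefix, 1234 again a placeholder numeric_id; take a DB backup first):

-- Remove the cached file entries of the stale storage, then the
-- storage row itself. Repeat for every stale numeric_id.
DELETE FROM oc_filecache WHERE storage = 1234;
DELETE FROM oc_storages WHERE numeric_id = 1234;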

Step 5. Run the filecache cleanup; for this use the command:
sudo -u www-data php occ files:cleanup

You can see on my graphs in the older thread that this operation will take a while…

I'm running occ files:cleanup now, but I had to exit maintenance mode first; otherwise occ would tell me that there is no "files" namespace.

That's right, sorry, I forgot to mention that.