Very bad performance, dropping rapidly, on an Odroid N2+ box


Nextcloud version (eg, 20.0.5): 29.0.0.19
Operating system and version (eg, Ubuntu 20.04): Debian GNU/Linux 12 (bookworm)
Apache or nginx version (eg, Apache 2.4.25): 2.4.57
PHP version (eg, 7.4): 8.2.7
MariaDB Version: 10.11.6

The issue you are facing:
The performance of my Nextcloud installation constantly degrades over time.
Even after restarting the Nextcloud instance, requests still take several minutes.

Is this the first time you’ve seen this error? (Y/N): No

Steps to replicate it:

Just log in and try to access the personal files.
This takes minutes. Every change into a new directory takes minutes.

The output of your Nextcloud log in Admin > Logging:

n.a.
The output of your config.php file in `/path/to/nextcloud`:

{
    "passwordsalt": "REMOVED SENSITIVE VALUE",
    "secret": "REMOVED SENSITIVE VALUE",
    "trusted_domains": [
        "localhost",
        "xxx",
        "xxx.duckdns.org",
        "www.xxx.duckdns.org"
    ],
    "datadirectory": "REMOVED SENSITIVE VALUE",
    "dbtype": "mysql",
    "version": "29.0.0.19",
    "overwrite.cli.url": "http://localhost",
    "dbname": "REMOVED SENSITIVE VALUE",
    "dbhost": "REMOVED SENSITIVE VALUE",
    "dbport": "",
    "dbtableprefix": "oc_",
    "mysql.utf8mb4": true,
    "dbuser": "REMOVED SENSITIVE VALUE",
    "dbpassword": "REMOVED SENSITIVE VALUE",
    "default_phone_region": "DE",
    "installed": true,
    "instanceid": "REMOVED SENSITIVE VALUE",
    "mail_from_address": "REMOVED SENSITIVE VALUE",
    "mail_smtpmode": "smtp",
    "mail_domain": "REMOVED SENSITIVE VALUE",
    "mail_smtpsecure": "tls",
    "mail_smtpauth": "1",
    "mail_smtphost": "REMOVED SENSITIVE VALUE",
    "mail_smtpport": "587",
    "mail_smtpauthtype": "LOGIN",
    "mail_smtpname": "REMOVED SENSITIVE VALUE",
    "mail_smtppassword": "REMOVED SENSITIVE VALUE",
    "mail_sendmailmode": "smtp",
    "maintenance": false,
    "theme": "",
    "trashbin_retention_obligation": "disabled",
    "loglevel": 1,
    "bulkupload.enabled": false,
    "app_install_overwrite": [
        "files_markdown",
        "files_readmemd"
    ],
    "overwritehost": "xxx",
    "filelocking.enabled": true,
    "memcache.locking": "\\OC\\Memcache\\Redis",
    "memcache.distributed": "\\OC\\Memcache\\Redis",
    "redis": {
        "host": "REMOVED SENSITIVE VALUE",
        "port": 6379
    },
    "updater.release.channel": "stable",
    "data-fingerprint": "da0595ab48a9ae6c7cb4114638c1f985"
}
Cron Configuration: Array
(
[backgroundjobs_mode] => cron
[lastcron] => 1715886309
)

External storages: files_external is disabled

Encryption: no

User-backends:

  • OC\User\Database

Browser: unknown

The output of your Apache/nginx/system log in `/var/log/____`:
nothing really important:
[Thu May 16 20:06:30.305702 2024] [security2:notice] [pid 3473244] ModSecurity: APR compiled version="1.7.2"; loaded version="1.7.2"
[Thu May 16 20:06:30.305723 2024] [security2:notice] [pid 3473244] ModSecurity: PCRE2 compiled version="10.42 "; loaded version="10.42 2022-12-11"
[Thu May 16 20:06:30.305731 2024] [security2:notice] [pid 3473244] ModSecurity: LUA compiled version="Lua 5.1"
[Thu May 16 20:06:30.305737 2024] [security2:notice] [pid 3473244] ModSecurity: YAJL compiled version="2.1.0"
[Thu May 16 20:06:30.305742 2024] [security2:notice] [pid 3473244] ModSecurity: LIBXML compiled version="2.9.14"
[Thu May 16 20:06:30.305748 2024] [security2:notice] [pid 3473244] ModSecurity: Status engine is currently disabled, enable it by set SecStatusEngine to On.
[Thu May 16 20:06:30.606549 2024] [mpm_prefork:notice] [pid 3473245] AH00163: Apache/2.4.57 (Debian) OpenSSL/3.0.11 configured -- resuming normal operations
[Thu May 16 20:06:30.606726 2024] [core:notice] [pid 3473245] AH00094: Command line: '/usr/sbin/apache2'
[Thu May 16 21:05:45.432590 2024] [php:error] [pid 3475509] [client 2a0a:51c0::344:37168] script '/var/www/html/index.php' not found or unable to stat
[Thu May 16 21:05:45.454555 2024] [php:error] [pid 3473899] [client 127.0.0.1:56778] script '/var/www/html/index.php' not found or unable to stat
[Thu May 16 21:05:45.461920 2024] [php:error] [pid 3477922] [client 127.0.0.1:60560] script '/var/www/html/index.php' not found or unable to stat
[Thu May 16 21:05:48.918977 2024] [php:error] [pid 3473899] [client 127.0.0.1:56778] script '/var/www/html/index.php' not found or unable to stat
[Thu May 16 21:05:50.832912 2024] [php:error] [pid 3473899] [client 2a0a:51c0::344:37192] script '/var/www/html/index.php' not found or unable to stat

The problem looks to be database related (very high MariaDB CPU consumption). Therefore I had already set up the slow query log at /var/log/mysql/slow.log.
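
For reference, this is roughly how the slow query log was set up (the drop-in file name and the one-second threshold are just examples, not copied verbatim from my box):

```bash
# Minimal sketch: enable the MariaDB slow query log, then restart the server.
cat <<'EOF' | sudo tee /etc/mysql/mariadb.conf.d/60-slow-query-log.cnf
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1
EOF
sudo systemctl restart mariadb
```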

I immediately find entries in this file which point to the oc_filecache table as the problem.

Entry from 2024-05-07:

SET timestamp=1708814831;
SELECT `filecache`.`fileid`, `storage`, `path`, `path_hash`, `filecache`.`parent`, `filecache`.`name`, `mimetype`, `mimepart`, `size`, `mtime`, `storage_mtime`, `encrypted`, `etag`, `filecache`.`permissions`, `checksum`, `unencrypted_size`, `metadata_etag`, `creation_time`, `upload_time`, `meta`.`json` AS `meta_json`, `meta`.`sync_token` AS `meta_sync_token` FROM `oc_filecache` `filecache` LEFT JOIN `oc_filecache_extended` `fe` ON `filecache`.`fileid` = `fe`.`fileid` LEFT JOIN `oc_files_metadata` `meta` ON `filecache`.`fileid` = `meta`.`file_id` WHERE (`storage` = 3) AND (`path_hash` = 'c109b1ecbd802e77ebdee63921111d2e');
# User@Host: nextcl[nextcl] @ localhost []
# Thread_id: 3  Schema: nextclouddb  QC_hit: No
# Query_time: 1.651518  Lock_time: 0.000303  Rows_sent: 0  Rows_examined: 350292
# Rows_affected: 8  Bytes_sent: 52

Entry of today:

SET timestamp=1715119670;
SELECT `filecache`.`fileid`, `storage`, `path`, `path_hash`, `filecache`.`parent`, `filecache`.`name`, `mimetype`, `mimepart`, `size`, `mtime`, `storage_mtime`, `encrypted`, `etag`, `filecache`.`permissions`, `checksum`, `unencrypted_size`, `metadata_etag`, `creation_time`, `upload_time`, `meta`.`json` AS `meta_json`, `meta`.`sync_token` AS `meta_sync_token` FROM `oc_filecache` `filecache` LEFT JOIN `oc_filecache_extended` `fe` ON `filecache`.`fileid` = `fe`.`fileid` LEFT JOIN `oc_files_metadata` `meta` ON `filecache`.`fileid` = `meta`.`file_id` WHERE (`storage` = 4) AND (`path_hash` = 'b866edaf6070451571f3e7bf849690f2');
# User@Host: nextcl[nextcl] @ localhost []
# Thread_id: 13673  Schema: nextclouddb  QC_hit: No
# Query_time: 1.135117  Lock_time: 0.000236  Rows_sent: 0  Rows_examined: 419399
# Rows_affected: 0  Bytes_sent: 1603

SET timestamp=1715119675;
SELECT `filecache`.`fileid`, `storage`, `path`, `path_hash`, `filecache`.`parent`, `filecache`.`name`, `mimetype`, `mimepart`, `size`, `mtime`, `storage_mtime`, `encrypted`, `etag`, `filecache`.`permissions`, `checksum`, `unencrypted_size`, `metadata_etag`, `creation_time`, `upload_time`, `meta`.`json` AS `meta_json`, `meta`.`sync_token` AS `meta_sync_token` FROM `oc_filecache` `filecache` LEFT JOIN `oc_filecache_extended` `fe` ON `filecache`.`fileid` = `fe`.`fileid` LEFT JOIN `oc_files_metadata` `meta` ON `filecache`.`fileid` = `meta`.`file_id` WHERE (`storage` = 3) AND (`path_hash` = '45b963397aa40d4a0063e0d85e4fe7a1');
# User@Host: nextcl[nextcl] @ localhost []
# Thread_id: 13673  Schema: nextclouddb  QC_hit: No
# Query_time: 1.156306  Lock_time: 0.000133  Rows_sent: 0  Rows_examined: 419404
# Rows_affected: 0  Bytes_sent: 1603

and some more hours later:

SET timestamp=1715888028;
SELECT `filecache`.`fileid`, `storage`, `path`, `path_hash`, `filecache`.`parent`, `filecache`.`name`, `mimetype`, `mimepart`, `size`, `mtime`, `storage_mtime`, `encrypted`, `etag`, `filecache`.`permissions`, `checksum`, `unencrypted_size`, `metadata_etag`, `creation_time`, `upload_time`, `meta`.`json` AS `meta_json`, `meta`.`sync_token` AS `meta_sync_token` FROM `oc_filecache` `filecache` LEFT JOIN `oc_filecache_extended` `fe` ON `filecache`.`fileid` = `fe`.`fileid` LEFT JOIN `oc_files_metadata` `meta` ON `filecache`.`fileid` = `meta`.`file_id` WHERE (`storage` = 4) AND (`path_hash` = '878e37553ea1b409fe52dc3e4b4e650a');
# Query_time: 1.550049  Lock_time: 0.000216  Rows_sent: 1  Rows_examined: 644995
# Rows_affected: 0  Bytes_sent: 1766
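
A quick way to see which index (if any) MariaDB picks for these lookups is to run EXPLAIN with the same WHERE clause; the storage id and hash below are simply copied from the last slow-log entry, so adjust as needed:

```bash
# Show the query plan for the slow oc_filecache lookup.
sudo mysql -D nextclouddb -e "
  EXPLAIN SELECT fileid FROM oc_filecache
  WHERE storage = 4 AND path_hash = '878e37553ea1b409fe52dc3e4b4e650a';"
```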

I ran all applicable occ db:* commands: no success; no missing indices were reported and/or created.
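
For completeness, these are the occ db:* maintenance commands I mean (run from the Nextcloud directory as the web server user):

```bash
cd /var/www/nextcloud
sudo -u www-data php occ db:add-missing-indices
sudo -u www-data php occ db:add-missing-columns
sudo -u www-data php occ db:add-missing-primary-keys
sudo -u www-data php occ db:convert-filecache-bigint
```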

Some other interesting findings:

root@7vxxx:/var/www/nextcloud# sudo -u www-data php occ files:scan --all
Starting scan for user 1 out of 4 (admin)
Starting scan for user 2 out of 4 (Books)
Starting scan for user 3 out of 4 (Gisela)
Starting scan for user 4 out of 4 (peter)
+---------+--------+-----+---------+---------+--------+--------------+
| Folders | Files  | New | Updated | Removed | Errors | Elapsed time |
+---------+--------+-----+---------+---------+--------+--------------+
| 64213   | 233178 | 0   | 0       | 0       | 0      | 00:08:51     |
+---------+--------+-----+---------+---------+--------+--------------+
MariaDB [nextclouddb]> select * from oc_filecache order by fileid desc limit 10;
+--------+---------+---------------------------------------------------+----------------------------------+--------+--------+----------+----------+------+------------+---------------+-----------+------------------+---------------+-------------+----------+
| fileid | storage | path                                              | path_hash                        | parent | name   | mimetype | mimepart | size | mtime      | storage_mtime | encrypted | unencrypted_size | etag          | permissions | checksum |
+--------+---------+---------------------------------------------------+----------------------------------+--------+--------+----------+----------+------+------------+---------------+-----------+------------------+---------------+-------------+----------+
| 680003 |       4 | appdata_ocnn67i0vw59/preview/f/2/9/2/0/e/1/162833 | d5d8f4cf96af2b603f31bfe45a0ca4f0 | 680002 | 162833 |        2 |        1 |    0 | 1715888263 |    1715888263 |         0 |                0 | 664660873dfdc |          31 |          |
| 680002 |       4 | appdata_ocnn67i0vw59/preview/f/2/9/2/0/e/1        | 0740480632e94a1d36ba0811c0614d79 | 680001 | 1      |        2 |        1 |   -1 | 1715888263 |    1715888263 |         0 |                0 | 66466088f3058 |          31 |          |
| 680001 |       4 | appdata_ocnn67i0vw59/preview/f/2/9/2/0/e          | 941a45b8d773ba934f6cba2c4498f66e | 680000 | e      |        2 |        1 |   -1 | 1715888263 |    1715888263 |         0 |                0 | 6646608a87ac2 |          31 |          |
| 680000 |       4 | appdata_ocnn67i0vw59/preview/f/2/9/2/0            | be25f8658888f31b1b9f3a8f2ed8e904 | 679999 | 0      |        2 |        1 |   -1 | 1715888263 |    1715888263 |         0 |                0 | 6646608c229f0 |          31 |          |
| 679999 |       4 | appdata_ocnn67i0vw59/preview/f/2/9/2              | a8c8145f415e5a45a0f14fc295fc16f0 | 310789 | 2      |        2 |        1 |   -1 | 1715888263 |    1715888263 |         0 |                0 | 6646608da7679 |          31 |          |
| 679998 |       4 | appdata_ocnn67i0vw59/preview/9/6/a/a/a/1/c/162832 | 574b52618bbe7d99c288ef1d991badef | 679997 | 162832 |        2 |        1 |    0 | 1715888238 |    1715888238 |         0 |                0 | 6646606ec83ef |          31 |          |
| 679997 |       4 | appdata_ocnn67i0vw59/preview/9/6/a/a/a/1/c        | 42c5587e5c5d1936c8835e27fac41251 | 679996 | c      |        2 |        1 |   -1 | 1715888238 |    1715888238 |         0 |                0 | 664660706a76f |          31 |          |
| 679996 |       4 | appdata_ocnn67i0vw59/preview/9/6/a/a/a/1          | 1c8148fc5a73bb00c4f716cd20268c79 | 679995 | 1      |        2 |        1 |   -1 | 1715888238 |    1715888238 |         0 |                0 | 6646607218e07 |          31 |          |
| 679995 |       4 | appdata_ocnn67i0vw59/preview/9/6/a/a/a            | 9b6258f0a88ad1cf7cb727b67e384c22 | 367913 | a      |        2 |        1 |   -1 | 1715888238 |    1715888238 |         0 |                0 | 66466073a6d3d |          31 |          |
| 679994 |       4 | appdata_ocnn67i0vw59/preview/f/a/a/1/5/f/f/162826 | d64c8be09de81d9ed8394c1527ff9aff | 679993 | 162826 |        2 |        1 |    0 | 1715888216 |    1715888216 |         0 |                0 | 66466058a86ab |          31 |          |
+--------+---------+---------------------------------------------------+----------------------------------+--------+--------+----------+----------+------+------------+---------------+-----------+------------------+---------------+-------------+----------+
10 rows in set (0,001 sec)
MariaDB [nextclouddb]> system date
Do 16. Mai 21:39:45 CEST 2024
MariaDB [nextclouddb]> select count(*) from oc_filecache where path like 'appdata_ocnn67i0vw59/preview%';
+----------+
| count(*) |
+----------+
|   347139 |
+----------+
1 row in set (1,912 sec)

MariaDB [nextclouddb]> system date
Do 16. Mai 21:40:02 CEST 2024
MariaDB [nextclouddb]> select count(*) from oc_filecache where path like 'appdata_ocnn67i0vw59/preview%';
+----------+
| count(*) |
+----------+
|   347143 |
+----------+
1 row in set (1,183 sec)

MariaDB [nextclouddb]> 

It looks very much like the preview generation for pictures is blowing up oc_filecache to the point that it no longer performs well.

The performance of this Nextcloud instance is now very close to unusable.

Do you have an idea what is happening to my Nextcloud instance?
How can I fix it?

Are you currently using previewgenerator? Or was your system actively in use when you pulled the count() on the previews incrementing like that (i.e. some of your account holders accessing their files)?

Just trying to get an explanation for why previews are being generated so regularly. Then maybe we can backtrack possible root causes from there.
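
If you are unsure, something along these lines should show whether the Preview Generator app is installed and enabled (the path assumes the usual /var/www/nextcloud location):

```bash
# Lists enabled and disabled apps; look for a previewgenerator entry.
sudo -u www-data php /var/www/nextcloud/occ app:list | grep -i preview
```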

What’s the history of this installation? It been around for a while or brand new?

Hi jtr,

thanks for answering!

History: I first installed ownCloud in 2016 on an RPi 2, then switched to Nextcloud by simply swapping the nextcloud directory content. The installation was ported several times to other hardware (RPi 3, RPi 4 32-bit, RPi 4 64-bit and lastly to the Odroid N2+). I can no longer reproduce how the individual migration steps were performed; I didn't document them.

Because I already played around with all potentially helpful occ commands (especially occ preview:repair, trashbin:*, files:scan and files:scan-app-data), it is possible that I was using the preview generator at some point.

Perhaps worth mentioning: the data directory was moved, for disk space reasons, from /var/www/nextcloud/data to a different disk (/mnt/disks/nextcloud-data/data) some months ago.

How did you do that, did you follow the tutorial?

(if you didn’t, the old file path might still be present in a few locations, and Nextcloud might then look for things that do not exist).
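
A quick sanity check for leftover old paths would be to compare what config.php and the storages table point to (a sketch, assuming you can reach the database as root):

```bash
# Where does Nextcloud think the data directory is?
sudo -u www-data php /var/www/nextcloud/occ config:system:get datadirectory
# And what do the local storage rows in the database point to?
sudo mysql -D nextclouddb -e "SELECT numeric_id, id FROM oc_storages WHERE id LIKE 'local::%';"
```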

For the database, cache settings are important as well. Did you do anything in this respect:
https://docs.nextcloud.com/server/latest/admin_manual/configuration_database/linux_database_configuration.html#configuring-a-mysql-or-mariadb-database
(there are tools like mysqltuner, tuning-primer.sh that give tips regarding your specific use case).
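
As a rough sketch of the kind of settings that documentation talks about (the buffer pool size is only a placeholder; size it to the RAM actually free on the box):

```bash
# Example MariaDB tuning drop-in along the lines of the Nextcloud admin docs.
cat <<'EOF' | sudo tee /etc/mysql/mariadb.conf.d/61-nextcloud-tuning.cnf
[mysqld]
innodb_buffer_pool_size = 1G
innodb_io_capacity      = 4000
transaction-isolation   = READ-COMMITTED
binlog-format           = ROW
EOF
sudo systemctl restart mariadb
```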

Not sure how much RAM you have; if you can cache many things, it should become much faster.
For long-running setups where a disk becomes slow, you might also consider that the disk itself has a problem (smartmontools).
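
For the disk, a basic check could look like this (the device name is just an example, adjust to your data disk):

```bash
# Overall SMART health verdict plus the full attribute table.
sudo smartctl -H /dev/sda
sudo smartctl -A /dev/sda
```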

well, after some more time the situation escalated further:

# Time: 240624 18:25:51
# User@Host: nextcl[nextcl] @ localhost []
# Thread_id: 2328  Schema: nextclouddb  QC_hit: No
# Query_time: 2.600177  Lock_time: 0.000251  Rows_sent: 1  Rows_examined: 851071
# Rows_affected: 0  Bytes_sent: 1753
SET timestamp=1719246351;
SELECT `filecache`.`fileid`, `storage`, `path`, `path_hash`, `filecache`.`parent`, `filecache`.`name`, `mimetype`, `mimepart`, `size`, `mtime`, `storage_mtime`, `encrypted`, `etag`, `filecache`.`permissions`, `checksum`, `unencrypted_size`, `metadata_etag`, `creation_time`, `upload_time`, `meta`.`json` AS `meta_json`, `meta`.`sync_token` AS `meta_sync_token` FROM `oc_filecache` `filecache` LEFT JOIN `oc_filecache_extended` `fe` ON `filecache`.`fileid` = `fe`.`fileid` LEFT JOIN `oc_files_metadata` `meta` ON `filecache`.`fileid` = `meta`.`file_id` WHERE (`storage` = 4) AND (`path_hash` = 'c0a3103d070b3ab55958bee6284f6c3f');

When browsing the instance via the web interface, I had to wait minutes until the content of the next directory showed up in the browser.

Because I had already set

'enable_previews' => false,

in config.php (and had run all occ commands afterwards that might clean up previews) with still no success, and because the instance was no longer working properly, I decided to (roughly as sketched in the commands below):

  • put the instance into maintenance mode
  • delete all entries in oc_filecache where the path contains ‘appdata_ocnn67i0vw59/preview’ via SQL
  • delete all files and directories in the file system under ‘/data/appdata_ocnn67i0vw59/preview’
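
Roughly the commands behind those three steps (instance id as shown above; the preview path sits under my moved data directory, so adjust it to your setup):

```bash
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on
sudo mysql -D nextclouddb -e "DELETE FROM oc_filecache WHERE path LIKE 'appdata_ocnn67i0vw59/preview/%';"
rm -rf /mnt/disks/nextcloud-data/data/appdata_ocnn67i0vw59/preview/*
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off
```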

After these operations the number of records in oc_filecache (count(*)) was 242606! The performance in the web interface was very good (as expected).
But I could immediately see that entries were still being created below the appdata_xxxx/preview directory, and these entries are also created in oc_filecache.

Today, one week later, the performance is very bad again; the number of entries in appdata_xxxx/preview and oc_filecache has grown a lot:

MariaDB [nextclouddb]> select count(*) from oc_filecache where path like 'appdata%/preview%';
+----------+
| count(*) |
+----------+
|   271564 |
+----------+
1 row in set (1,375 sec)

MariaDB [nextclouddb]> select count(*) from oc_filecache;
+----------+
| count(*) |
+----------+
|   569711 |
+----------+
1 row in set (0,359 sec)

MariaDB [nextclouddb]> 

It looks very much like no preview files are actually generated (I could not find any .jpeg, .jpg, or .png below appdata_xxxxx/preview).

But the directories in which those preview files would typically be stored are created, including the corresponding entries in the oc_filecache table.
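
A find along these lines makes the mismatch visible (the path reflects my moved data directory; adjust if yours differs):

```bash
# Count actual preview image files vs. directories below the preview tree.
find /mnt/disks/nextcloud-data/data/appdata_ocnn67i0vw59/preview -type f \( -name '*.jpg' -o -name '*.jpeg' -o -name '*.png' \) | wc -l
find /mnt/disks/nextcloud-data/data/appdata_ocnn67i0vw59/preview -type d | wc -l
```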

This brings the mariadb instance on the RPI4 up to 100% CPU usage and breaks my performance totally.

When you go to Administration settings → Overview, do you by chance have warnings about missing indices? There are something like half a dozen indices on the filecache table.

occ db:add-missing-indices was already executed (several times).

mysql> show indexes from oc_filecache shows 9 keys (including primary)
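
In case it helps for comparison, listing just the index names can be done like this:

```bash
# List all index names defined on oc_filecache.
sudo mysql -e "SELECT DISTINCT index_name FROM information_schema.statistics
  WHERE table_schema = 'nextclouddb' AND table_name = 'oc_filecache';"
```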