After migration to NC 12.0.4: very high CPU load

Hello,

today I upgraded from OC 8.2.11 to NC 12.0.4 via the required migration steps.
I tested the migration several times in a sandbox with a clone of the production system, and everything went smoothly (though I could not simulate the real thing in terms of >10 users syncing at the same time…).

Now, after the upgrade of the production system, the CPU load of mysqld (MariaDB) sits at around 300% all the time.

I already had a look into these posts:

and tried to follow their advice.

I have memory caching enabled via:
'memcache.local' => '\OC\Memcache\APCu',

Concerning the second link, I’d like to know more about how to find out which files may be causing trouble.
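One way to narrow this down (just a sketch, assuming the stock Nextcloud database schema with the `oc_filecache` and `oc_storages` tables) would be to look for storages with unusually many filecache entries:

```sql
-- Storages with the most filecache entries; a storage with millions of
-- rows is a likely candidate for expensive size-calculation queries.
SELECT `storage`, COUNT(*) AS entries
FROM `oc_filecache`
GROUP BY `storage`
ORDER BY entries DESC
LIMIT 10;

-- Map a numeric storage id back to its owner; for home storages the
-- result looks like "home::username".
SELECT `id` FROM `oc_storages` WHERE `numeric_id` = 1;
```

The `numeric_id = 1` here is only a placeholder — you would plug in whatever id the first query surfaces.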

I run NC on Ubuntu 14.04 with PHP 5.6 from an additional repository.
phpinfo() reports that PDO support for MySQL is enabled:

PDO support enabled
PDO Driver for MySQL enabled

Right now I have around 20 open client connections, which would probably go up to 50,
but not much more…

Any help would be highly appreciated.

Thank you
Andreas

Are you seeing anything telling in the logs? Nextcloud, Apache, SQL… anything?

Hi,

right now the situation has stabilized further.
The server runs normally, I’d say.
mysqld utilizes only 10-20% CPU.

The mysql and apache logs are pretty clean.

The Nextcloud log has some entries like this:

Sabre\\DAV\\Server->invokeMethod(Object(Sabre\\HTTP\\Request), Object(Sabre\\HTTP\\Response))\n#4 \/var\/www\/owncloud\/remote.php(70): Sabre\\DAV\\Server->exec()\n#5 \/var\/www\/owncloud\/remote.php(165): handleException(Object(OC\\ServerNotAvailableException))\n#6 {main}","File":"\/var\/www\/owncloud\/remote.php"

Thanks

Hi,

I need to come back to this issue unfortunately, since it has come up again.
The CPU utilization of mysqld goes up to 500% and goes down to normal
again within minutes - it’s sort of “pulsing”.

Once the load is high, I can always see a bunch (5-10) of the following processes in the MySQL process list:

| 32970 | owncloud | localhost | owncloud | Query   |    7 | Sending data | SELECT SUM(`size`) AS f1, MIN(`size`) AS f2 FROM `oc_filecache` WHERE `parent` = '3937827' AND `storage` = '302' |    0.000 |
| 32971 | owncloud | localhost | owncloud | Query   |    7 | Sending data | SELECT SUM(`size`) AS f1, MIN(`size`) AS f2 FROM `oc_filecache` WHERE `parent` = '3937827' AND `storage` = '302' |    0.000 |
| 32972 | owncloud | localhost | owncloud | Query   |    7 | Sending data | SELECT SUM(`size`) AS f1, MIN(`size`) AS f2 FROM `oc_filecache` WHERE `parent` = '3937827' AND `storage` = '302' |    0.000 |
| 32973 | owncloud | localhost | owncloud | Query   |    7 | Sending data | SELECT SUM(`size`) AS f1, MIN(`size`) AS f2 FROM `oc_filecache` WHERE `parent` = '3937827' AND `storage` = '302' |    0.000 |

After around a minute those are gone and the CPU load goes back to normal again.

I observed the effect on Friday, but over the weekend the server behaved totally fine.
Today the “pulsing” started again, so I suspect a certain client is causing the trouble.

Is there a way to dig deeper into the MySQL processes and figure out whether a certain user is causing this?
I was wondering whether “parent = 3937827” might give a clue, since it is the one showing
up in the process list again and again…
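If I understand the schema correctly (a sketch, assuming the stock `oc_filecache`/`oc_storages` layout), the parent and storage ids from the process list can be resolved directly:

```sql
-- Resolve the recurring parent id to an actual folder path:
SELECT `path`, `storage`
FROM `oc_filecache`
WHERE `fileid` = 3937827;

-- Resolve the storage id to its owner; for home storages the result
-- looks like "home::username", which identifies the user:
SELECT `id` FROM `oc_storages` WHERE `numeric_id` = 302;
```

If that storage belongs to a single user’s home, that would point straight at the client causing the repeated size scans.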


  • Memory caching is set to
    'memcache.local' => '\OC\Memcache\APCu',

  • The mysql logfile looks shiny clean.

  • The number of established HTTPS connections varies between 10 and 30, nothing super heavy…
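To catch the offending queries while the load is “pulsing”, one option (a sketch, assuming SUPER privileges on the MariaDB server) would be to enable the slow query log temporarily:

```sql
-- Log every query that runs longer than one second. These settings are
-- runtime-only and reset when the server restarts.
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
```

After a “pulse”, the log should show which queries dominated and, via the connection id, which client session issued them.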

The web front-end and the clients behave normally; I just want to make sure
I have a “clean” back-end…

Thank you very much
Andreas