Very high CPU usage when I transfer a big file

Nextcloud version 29.0.7
Operating system and version Ubuntu 22.04 server
Apache 2.4.52
PHP version 8.1

My server:
Xeon E5, 10 cores, 16 GB RAM
running in Proxmox 7.4.3

Hi Everyone
My issue is that when I upload a big file (around 1-2 GB) from the web UI, CPU usage stays at 60-80% until the transfer finishes, then returns to normal. The web UI becomes very slow or stops responding during this time. The uploaded file also only appears after I reload the page, or some 30 seconds to a minute after the transfer completes.
Has anyone had the same problem, and what can I do about it?

I searched on Google and someone said not to deploy everything on one server: one server for the web frontend and another for the database. Is that true?

Thanks in advance.

No important errors in the log file.

My config.php:

<?php
$CONFIG = array (
  'instanceid' => 'xxx',
  'passwordsalt' => 'xxx',
  'secret' => 'xxx',
  'trusted_domains' => 
  array (
    0 => '192.168.31.247',
  ),
  'datadirectory' => '/var/www/html/data',
  'dbtype' => 'mysql',
  'version' => '29.0.7.1',
  'overwrite.cli.url' => 'http://192.168.31.247',
  'dbname' => 'nextcloud',
  'dbhost' => 'localhost',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'admin',
  'dbpassword' => 'xxx',
  'installed' => true,
  'proxy' => '192.168.31.248:7890',
  'maintenance_window_start' => 1,
  'preview_max_x' => 512,
  'preview_max_y' => 512,
  'preview_max_scale_factor' => 1,
  'filesystem_check_changes' => 1,
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'distributed' => '\\OC\\Memcache\\Redis',
  'memcache.locking' => '\\OC\\Memcache\\Redis',
  'filelocking.enabled' => '0',
  'redis' => 
  array (
    'host' => '/var/run/redis/redis-server.sock',
    'port' => 0,
    'timeout' => 0.0,
  ),
  'default_phone_region' => 'CN',
  'maintenance' => false,
);

And is it the PHP process that is using the CPU? Or the database?

And is PHP used as an Apache module, or is it a FastCGI setup?

  1. Check your Nextcloud log for clues about events/errors/etc. during the time window of these upload attempts.

  2. Fix the below (just remove the entire line; it’s not a supported configuration and will cause a variety of problems):

'filelocking.enabled' => '0',

  3. Are you using any anti-virus apps in Nextcloud?
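If you prefer not to edit config.php by hand, the unsupported line from point 2 can also be removed with `occ`. A sketch, assuming a stock Ubuntu/Apache install where the web server user is www-data and Nextcloud lives in /var/www/html (adjust both to your setup):

```shell
# Remove the unsupported override; Nextcloud then falls back to its
# default behaviour (file locking enabled).
sudo -u www-data php /var/www/html/occ config:system:delete filelocking.enabled

# Verify the key is gone.
sudo -u www-data php /var/www/html/occ config:system:get filelocking.enabled
```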

I searched on Google and someone said not to deploy everything on one server: one server for the web frontend and another for the database. Is that true?

No. At least not until you’re dealing with scaling issues for many users/etc., or if your standalone server happened to be dramatically under-powered.

Thanks for your reply:

  1. Sorry, I’m no expert in PHP or Apache; I just followed a tutorial to install it (Install Nextcloud 29 (Hub 8) on Ubuntu 24.04 - Najigram.com), and it also worked for me under Ubuntu 22.04. From what I read on Google, it should be PHP 8.1 FastCGI (FPM), because I have an /etc/php/8.1/fpm folder.

  2. I checked the system usage during the transfer: the CPU was at 78% usage by Apache during a single 1 GB file transfer, and at 36-50% usage by php-fpm after the transfer finished, before the file appeared in the web UI (this took 20-30 seconds; something was processing in the background before it showed up).
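To answer the module-vs-FPM question definitively, the loaded Apache modules can be inspected on the server. A sketch, assuming the stock Debian/Ubuntu packages and PHP 8.1:

```shell
# If this lists a php module (e.g. php8.1_module), PHP runs embedded in Apache:
apachectl -M | grep -i php

# If proxy_fcgi is loaded and the FPM service is active, it is a FastCGI (FPM) setup:
apachectl -M | grep -i proxy_fcgi
systemctl is-active php8.1-fpm
```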

Big thanks for your reply.
1. Nothing in the logs about the file transfer.
2. Did it, but no luck.
3. I don’t have any antivirus on this system, and ufw is disabled by default in Ubuntu.

Any special reason you use this? Normally, the data managed in the data folder should only be changed through Nextcloud interfaces (web, clients, WebDAV, …) and not directly. If you need external access to some data, it is better to use a dedicated storage for that and include it with external storage.
It is said that this option has an impact on performance:
https://github.com/nextcloud/server/blob/master/config/config.sample.php#L2175-L2187
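If the SMB share is the only reason for this setting, the usual approach is to drop it back to the default and let the external-storage app handle change detection. A config.php excerpt (sketch; 0 is the documented default):

```php
// config.php excerpt (sketch): 0 is the default and tells Nextcloud to
// trust its own file cache instead of re-scanning the storage for
// changes on every access.
'filesystem_check_changes' => 0,
```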

I have tried an upload of a 1 GB file myself. The CPU usage was hard to quantify, nothing constant; there were a few peaks in usage, but I don’t know if that was my upload or something else… so like this it is hard to say what exactly uses the CPU.
Log files can be an indicator, especially if something goes wrong or expected results/data are not available or time out. But you have checked this part already.

'filesystem_check_changes' => 1: somebody said this setting helps auto-detect file changes in an SMB share folder, because I have an SMB link to a TrueNAS server over a 1 Gb LAN.

I’m very confused about how people use Nextcloud when a team has 10 or 100 members all uploading or syncing files at the same time. Or maybe they have powerful servers with 32 or 64 cores?

As mentioned before, it is normally not recommended to use it this way, for performance reasons. So this might be doable if you have very few users or are on your own. There is the external storage feature for SMB, and you can use it as primary storage; I suppose that is the way larger setups use it. And if you have a large setup with hundreds of users, the enterprise option might be interesting: they have experience setting up Nextcloud in specific existing environments, they know the different options and all the drawbacks in more detail, and they can scale the hardware to match.
But there is a chance that someone in the community does exactly what you want to do and will share their experience.

I will try the system without this setting, because I really want to deploy the Nextcloud server in my office for 50 or more members. On the other hand, what’s the difference if I connect to the server via NFS instead of SMB? Google tells me NFS is faster with lower overhead than SMB on Linux, so I will try that as well. Thanks a lot again for all the information.
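On the NFS question: Nextcloud has no dedicated NFS external-storage backend the way it has one for SMB, so NFS exports are normally mounted at the OS level and then added as local external storage (or used as the data directory). A sketch of an /etc/fstab entry; the host name and paths are placeholders for your environment:

```shell
# /etc/fstab (sketch): mount the TrueNAS export before Nextcloud touches it.
# "truenas.local" and both paths are hypothetical and must be adapted.
truenas.local:/mnt/pool/nextcloud  /mnt/nc-nfs  nfs  defaults,_netdev,noatime  0  0
```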

I just upgraded the OS from Ubuntu Server 22.04 to the latest 24.04, and upgraded PHP from 8.1 FPM to 8.3 FPM at the same time, but nothing changed.

I also deleted 'filesystem_check_changes' => 1, but that didn’t help.

I also skipped the SMB and NFS folders and uploaded straight to the server’s local storage (an SSD drive), but that didn’t help either.

In the next few days I plan to upgrade the server to dual Xeon 6138 (40 cores / 80 threads) + 256 GB RAM and then try everything again. Does anybody have new suggestions?

Update: as I said, I run this Nextcloud server in Proxmox. I have now given 16 vCPUs to the Nextcloud VM (it was 10 before) and tested again. Now the CPU usage is down to 30-40% during a single 1-2 GB file upload, which suggests that a Nextcloud server for teams needs a powerful CPU, especially a lot of cores and threads, to handle users’ file uploads and sync.

It still seems a bit high for just a single user’s upload. There aren’t 100 users doing stuff in the background?

Often, well-adjusted database settings can improve performance a lot (if they are not well adjusted, you typically see high DB usage and/or a lot of I/O wait). If it is just the CPU usage sticking out and there are no indicators in the logs, it could be a lot of things, even some apps. You can cross-check by turning certain features/apps on and off (and comparing versions, in case of bugs), or, if you want to get down to the core issue, use debugging tools and see where the CPUs spend their time.
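For the DB-tuning research, the usual first knobs on a MariaDB/MySQL host are the InnoDB buffer pool and redo log sizes. A my.cnf sketch; the values are illustrative for a machine with around 16 GB of RAM and must be adapted to your workload, not copied blindly:

```shell
# /etc/mysql/mysql.conf.d/mysqld.cnf (sketch, hypothetical values)
[mysqld]
innodb_buffer_pool_size = 4G     # cache hot data and indexes in RAM
innodb_log_file_size    = 512M   # a larger redo log smooths write bursts
innodb_io_capacity      = 200    # raise this on SSD-backed storage
tmp_table_size          = 64M
max_heap_table_size     = 64M
```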

Yes, just a single user, no more. You are right that it is probably about the DB settings, but I don’t know how to optimize them, so the next step is to research how to do it.

I believe Nextcloud is a fast, high-performance system, since they offer an enterprise version commercially. I just need time to learn.