Could any of you give me a quick and simple overview of how Nextcloud uses system resources?
At the moment my understanding is that every HTTP request to my server starts a new process, which ends as soon as the HTTP request is answered.
E.g.: I am browsing through my folders. Every time I click on a folder, an HTTP request is sent to the server to list the folder's contents, and a process is started to answer that request.
I have 5 Nextcloud instances running on my server. I have been watching the Linux “top” command for a while now and I can see 5 processes with the user “apache” and the command “httpd”. So it looks like every Nextcloud instance has a process running the whole time, which is not how I thought it would work.
So my first question is: is it correct that every Nextcloud instance runs exactly one process?
Let's go a bit further. If every Nextcloud instance runs one process, why should I have more CPU cores than Nextcloud instances?
Thank you so much for helping me!
You are mixing different things.
So the answer to your first question is: no!
And to the second: no!!
To go into a little more detail:
Your webserver (Apache httpd) uses a certain number of worker processes or threads (depending on the MPM) to answer HTTP requests. This can be configured: a certain number are started even when idle, a certain number are kept spare for incoming requests, and there is a maximum that should not be exceeded during operation.
These values ensure that you (more or less) don't have to wait when you connect to the server.
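Those knobs (workers started at boot, spare workers kept ready, a hard maximum) map, for example, onto the directives of Apache's prefork MPM. The values below are purely illustrative, not a recommendation for your server:

```apache
# mpm_prefork example — values are only illustrative
<IfModule mpm_prefork_module>
    StartServers             5    # worker processes started at boot, even when idle
    MinSpareServers          5    # minimum idle workers kept ready for new requests
    MaxSpareServers         10    # surplus idle workers above this get shut down
    MaxRequestWorkers      150    # hard ceiling on simultaneous workers
</IfModule>
```

With the threaded worker/event MPMs the directive names differ slightly (e.g. ThreadsPerChild), but the idea is the same.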
Your Nextcloud is a collection of PHP scripts that are called by the webserver: the server starts the interpreter (PHP) to execute the script, and the interpreter gives back the (hopefully) correct answer. Then there is also the database, which supplies some of the data that ends up in the HTML sent back. While the database is a service that runs permanently in the background, the Nextcloud scripts are only executed when called by the webserver (I won't go into details of things like PHP caching mechanisms and so on).
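A rough way to picture that difference is a shell sketch, using `sh -c` as a stand-in for the PHP interpreter (purely for illustration): each "request" launches a fresh interpreter process with its own PID that exits as soon as it has answered, while a daemon like the database would be one long-running process.

```shell
#!/bin/sh
# Stand-in for per-request script execution: each "request" gets a fresh
# interpreter process (a new PID) that exits right after producing its answer.
for request in 1 2 3; do
  sh -c 'echo "request $1 handled by interpreter PID $$"' sh "$request"
done
```

Running this prints three lines, each with a different PID — three short-lived processes, not one persistent one.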
How fast the whole mechanism is (accepting the HTTP request, finding a free thread or maybe starting a new spare one, starting the script interpreter, which connects to the database, executes the script and hands the result back to the webserver, which sends it on to the browser/app/webservice/…) depends on your system resources and cannot be answered quickly. Maybe it doesn't make sense to add additional webserver threads when your connection is slow or you don't have many user connections per second; or maybe you should start more threads right away when you have a lot of traffic and starting/stopping threads would slow the system down.
In the end everything is handled by the Linux kernel, and each thread takes some RAM, but you can start as many threads as you want (independent of the number of cores), because threads that have no work sleep until they are used. There are also maximum values for most of the numbers in this game.
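You can see that idle workers cost only memory, not CPU, with plain sleeping processes as a stand-in for idle webserver workers — a sketch, nothing Apache-specific:

```shell
#!/bin/sh
# Start 20 background processes that do nothing but sleep, standing in
# for idle worker threads/processes far beyond the number of CPU cores.
pids=""
for i in $(seq 1 20); do
  sleep 30 &
  pids="$pids $!"
done

echo "idle processes started: $(echo $pids | wc -w)"

# All of them sit in state S (sleeping) and accumulate no CPU time;
# the kernel only keeps their small bookkeeping structures in RAM.
ps -o pid=,stat= -p $(echo $pids | tr ' ' ',') | head -n 3

kill $pids 2>/dev/null
```

Even on a machine with a handful of cores, all 20 start instantly and `top` shows them using 0% CPU while they sleep.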
If you want a deeper understanding of these mechanisms, you have to look into the kernel documentation and the principles of multithreaded operating systems.
To make it a little more confusing: everything I said is only true for the Apache httpd webserver. nginx, for example, works completely differently.
So I hope you now have something to read and think about.
Thank you so much for taking the time to clarify things a bit for me. I appreciate it!