Multiple Nextcloud instances side-by-side using the same PHP throw an error

I’m running two Nextcloud instances on a single machine without any virtualization.
(One for personal use, and one for a small assembly I’m involved in.)
Both instances are now on 26.0.1. Both are accessed via NGINX, and they share the same MariaDB server. So far so good. Lately I upgraded both instances (to 26.0.1) and also moved to a new PHP version (7 → 8).

Now, when I use the same PHP interpreter (and FPM service) for both instances, I get the following error for one of them:

2023/05/25 14:54:03 [error] 3932691#3932691: *10093477 FastCGI sent in stderr: "PHP message: PHP Fatal error: Cannot declare class ComposerAutoloaderInit749edc17b96de01b94b93500429fcbbc, because the name is already in use in /path/to/the/other/nextcloud/apps/maps/vendor/composer/autoload_real.php on line 5" while reading response header from upstream, client: 192.168.0.1, server: cloud.rhizomatic.at, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/var/run/php/php8.1-fpm.sock:", host: "cloud.domain.bla"

My current workaround is to use php8.1 for one instance and php8.2 for the other; then both work.
But this doesn’t seem like the right long-term solution :slight_smile:
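
For reference, the workaround boils down to pointing each NGINX vhost at a different PHP version’s FPM socket, roughly like this (the server names below are just placeholders for my real ones):

    server {
        server_name cloud.personal.example;
        location ~ \.php(?:$|/) {
            # first instance runs on the PHP 8.1 pool
            fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
            # plus the usual fastcgi_params / SCRIPT_FILENAME settings
        }
    }

    server {
        server_name cloud.assembly.example;
        location ~ \.php(?:$|/) {
            # second instance runs on the PHP 8.2 pool
            fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
        }
    }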

Is this a bug, or are there any config parameters to make shared hosting possible again?
The setup worked well before, when I used php7.3 and older Nextcloud versions.

It looks like you are trying to use the same php-fpm pool for both instances. On Debian, each PHP version has its own php-fpm with its own config path, so with two PHP versions you have two FPM services with different configs, most likely:
/etc/php/8.0/fpm/pool.d/www.conf
/etc/php/8.1/fpm/pool.d/www.conf
When you use the same interpreter for both instances, they both end up in the same FPM pool. So create another pool with a different name (e.g. [www2]) and a different socket path/port, and make the webserver use a different pool (socket/port) for each vhost (fastcgi_pass).
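
A minimal sketch of what that could look like (pool name, socket path and file names are just examples, adjust them to your setup). Copy the existing pool config to a new file, e.g. /etc/php/8.1/fpm/pool.d/nextcloud2.conf, and give it its own name and socket:

    ; second pool for the second Nextcloud instance (name and paths are examples)
    [nextcloud2]
    user = www-data
    group = www-data
    ; separate socket so nginx can target this pool specifically
    listen = /var/run/php/php8.1-fpm-nextcloud2.sock
    listen.owner = www-data
    listen.group = www-data
    pm = dynamic
    pm.max_children = 5
    pm.start_servers = 2
    pm.min_spare_servers = 1
    pm.max_spare_servers = 3

Then point each nginx vhost at its own socket:

    # vhost of the first instance keeps the default pool
    fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;

    # vhost of the second instance uses the new pool
    fastcgi_pass unix:/var/run/php/php8.1-fpm-nextcloud2.sock;

After that, restart php-fpm and reload nginx (systemctl restart php8.1-fpm, systemctl reload nginx) so both pools and vhosts pick up the changes.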

This sounds very likely - thanks for describing the situation!
I will post an update once I have configured it successfully.