There are a lot of possibilities, but I’m not going to dig into all of them here.
I’d urge you to reach out to Nextcloud GmbH if this is an enterprise situation (that’s unclear). That’s what they’re there for. 
A few thoughts, queries, and observations:
PostgreSQL + SSL connectivity from Nextcloud seems to work for me:
```
ncdb=# select datname,usename,ssl,client_addr from pg_stat_ssl join pg_stat_activity on pg_stat_ssl.pid = pg_stat_activity.pid;
 datname |  usename   | ssl | client_addr
---------+------------+-----+-------------
 ncdb    | ncuser     | t   | 172.24.0.4
 ncdb    | oc_ncadmin | t   | 172.24.0.4
 ncdb    | oc_ncadmin | t   | 172.24.0.4
 ncdb    | oc_ncadmin | t   | 172.24.0.4
 ncdb    | oc_ncadmin | t   | 172.24.0.4
 ncdb    | oc_ncadmin | t   | 172.24.0.4
```
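(For anyone replicating this: enforcing SSL on the server side is a `pg_hba.conf` matter, with a `hostssl` rule. The database name, subnet, and auth method below are placeholders loosely matching my test setup, not something from your environment.)

```
# Hypothetical pg_hba.conf entry: accept only SSL connections to ncdb
# from the container subnet (addresses and auth method are placeholders).
hostssl  ncdb  all  172.24.0.0/16  scram-sha-256
```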
Why are you trying to avoid using chunking? Chunking is how large files get uploaded these days without needing extreme timeout/body-size parameters along the entire web path and a fragile upload infrastructure. Are you possibly thinking of the sub-optimal interaction with multipart uploads on S3 object storage destinations? NC v26+ changed that. And, in any case, you’d still want to use chunking (which predates the S3-optimized multipart support).
Most people who bring up chunking+multipart+S3 uploads seem to be concerned about performance, but it’s unclear what your concerns are on that front.
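To make the arithmetic concrete, here’s a rough sketch (in Python, purely illustrative) of how chunking turns one giant request into many modest ones. The 10 MiB figure matches Nextcloud’s default `max_chunk_size` as I understand it, but treat the exact value and the helper names here as assumptions:

```python
# Chunked uploads split a large file into fixed-size parts, each small
# enough to pass through ordinary proxy/body-size limits. 10 MiB below
# mirrors Nextcloud's default max_chunk_size (an assumption on my part).
CHUNK_SIZE = 10 * 1024 * 1024  # 10 MiB

def chunk_ranges(file_size: int, chunk_size: int = CHUNK_SIZE):
    """Yield (offset, length) pairs covering the whole file."""
    offset = 0
    while offset < file_size:
        length = min(chunk_size, file_size - offset)
        yield offset, length
        offset += length

# A single 2 GiB upload becomes ~205 modest requests instead of one huge one.
parts = list(chunk_ranges(2 * 1024**3))
print(len(parts))   # 205 (204 full chunks + 1 partial)
print(parts[0])     # (0, 10485760)
```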
Have you attempted to use Google Cloud Storage’s S3-compatible API?
What do you mean by “multistorage”? Are you referring to multipart uploads? Multi-region? Or maybe multibuckets?
You also stated:
> Volunteers should be interested, because this product is wonderful and implementing it on a large scale in any of the current cloud services would represent a solution for cases like mine where I have so many files. I will try anyway to test it (Kubernetes + PGSQL multi-instance + multi object store + Redis + imaginary).
It’s not for lack of interest - those deployments exist, but those people are often getting paid or paying others to design/deploy/maintain them.
You may get some responses here, but if so it’ll be out of pure luck (or patience!). In general, most non-enterprise deployments do not need (let alone use) Kubernetes, multiple db servers, multiple object stores, etc. 
If you need suggestions on a specific timeline or level of trust you’ll want to reach out to Nextcloud GmbH (corporate). They have sales people, consultants, and provide access to the core developers for precisely this sort of situation. Well that or you’ll be doing lots of leg work searching around, testing yourself, etc. like most/many of us. 
I would not assume a lack of native Google Cloud Storage API support in NC implies a lack of sizable deployments. They’re out there in both the public cloud and private clouds.
For one, many use cases get by just fine without object storage (but that’s a different discussion). Additionally, S3-compatible APIs have sort of become the de facto object storage API.
See MinIO, Ceph (often for private cloud/in-house deployments), and all of the public cloud providers you mentioned testing against as S3-compatible examples. To some extent that coverage has reduced the need for non-S3 API support (but not entirely).
I understand your preference to use Google Cloud if that’s where you have other infrastructure. That doesn’t mean you have to use their native Object Storage. (It also doesn’t mean you have to stick with it for everything, but that’s a deeper discussion.)
I did some quick and dirty tests prompted by your post using Google Cloud Storage’s current S3 API. My multipart uploads (all >2GB) went fine from both NC v25 and v26 test installations. Keep in mind I didn’t look too closely as I was just being curious!
I’d encourage you to do some further testing against the S3-compatible API at Google, since that seems promising. In particular, they appear to have added more S3-compatible multipart upload support in late 2021, so there might even be some additional optimizations in there if it turns out to be truly compatible with AWS’s in that area.
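If it helps, a Nextcloud primary object store pointed at Google’s S3-compatible endpoint would look roughly like the `config.php` fragment below. This is a sketch, not a verified recipe: the bucket, region, and HMAC key pair are placeholders (GCS issues HMAC credentials under its Interoperability settings), so check the current admin manual for the exact argument names before relying on it.

```php
// Hypothetical config.php fragment: Nextcloud S3 primary storage
// pointed at Google Cloud Storage's S3-compatible endpoint.
'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket'         => 'my-nextcloud-bucket',      // placeholder
        'hostname'       => 'storage.googleapis.com',
        'use_ssl'        => true,
        'use_path_style' => true,
        'region'         => 'us-east-1',                // assumption; adjust to your bucket
        'key'            => 'GOOG...',                  // GCS HMAC access key (placeholder)
        'secret'         => '...',                      // GCS HMAC secret (placeholder)
    ],
],
```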
If for some reason that doesn’t work out, another thought that pops into mind is using MinIO within Google’s infrastructure (MinIO | MinIO for Google Kubernetes Engine), but if you can use Google’s own S3 API that’d be even better to avoid the hassle of course.
Again, reach out to Nextcloud GmbH if this is an enterprise situation. And, if not, well I suspect you’ll have some notes to add to the community about your adventures.
Good luck!