Frustration with NextCloud

I must say that I have loved the product and saw it as a great service to implement in my company. I have never had a good document manager, and Nextcloud could be what I was looking for, since I have the misfortune of having to publish 270 TB of historical files.

These are the problems I ran into when installing it:

  • My infrastructure is on GCP, and Nextcloud is not compatible with GCP's object storage (gcsfuse is very unstable).
  • It is not possible to connect to a database that requires SSL, in my case PostgreSQL (adding the certificate to the connection doesn't work).
  • The translation of plugins is poor, and they are complex to translate.
  • No matter how much I tune Apache or Nginx, it is impossible to upload files bigger than 2 GB (I have files up to 25 GB) with tools like rclone without chunking, even with object storage on a service like Amazon or DigitalOcean.

Some advice so you don't give up:

I have no problems with large files.

This is done in the php.ini of the PHP SAPI your server is using (e.g. php-fpm):

max_execution_time = 3600
max_input_time = -1
memory_limit = 1G
post_max_size = 0
upload_max_filesize = 64G

(post_max_size = 0 means no limit)

You can switch chunking on/off or scale it, and experiment with what works best for you. (For me it works best when it is disabled with --value 0.)
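To spell out what I mean by --value 0 (this assumes the standard occ tool run from the Nextcloud directory as your web server user; check your version's admin manual for the exact app setting):

```shell
# a value of 0 disables chunking for web uploads entirely;
# any other value sets the maximum chunk size in bytes
sudo -u www-data php occ config:app:set files max_chunk_size --value 0
```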

Good luck


What is GCP? To me it sounds like you're blaming the wrong product… Each application comes with different prerequisites, like supported OSes and DBs… In the case of free software you may install it on any system, but you can't expect it to work properly…

270 TB of data sounds “important enough” to me that you can afford a supported system and see if you achieve better results…

The core application is translated well into the common languages… As for community apps, this is the price of extensibility: if you can't live without these additional apps, either help with the translation or support the translators in another way, e.g. financially…

GCP is Google Cloud Platform. Nextcloud's object storage only works with Amazon or Microsoft Azure (the former can be replaced with Wasabi or DigitalOcean). I wanted to try Nextcloud as an alternative to mitigate my users' file-storage consumption, and to use it purely as an archiving solution.

I've been in IT for almost 30 years, so I have the experience to build a scalable Nextcloud deployment on Kubernetes, PostgreSQL, and object storage for the 4,500 users who would query these 270 TB that I have to upload.

But Nextcloud can't use PostgreSQL with SSL, and I still have to go out and find an object store that works.

I agree with you that the core product is well translated (I have corrected some things in my native language), but the plugins are not, and the translation system is not easy. That's all; in fact I don't really mind, I just wanted to list it as one of my problems.

I chose Nextcloud because neither Google Drive nor Dropbox, and much less Microsoft or others on the market, can help me. Years ago I used ownCloud. But if you know a product with which I can share those 270 TB and view them online, I'm listening.

I have found that with Nextcloud I can offer all this unstructured data in a simple way. But if it is not the right product, I can go my own way and keep looking.

Thank you very much for reading the post. Goodbye.

Are you sure? MySQL over SSL is supported.

You could always tunnel it over WireGuard as an easy alternative.
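As a sketch of that idea (all keys, addresses, and the endpoint below are placeholders): run WireGuard on both the app server and the DB host, then point Nextcloud's dbhost at the tunnel address, so the transport is encrypted regardless of what the database itself supports.

```ini
# /etc/wireguard/wg0.conf on the Nextcloud app server — illustrative values only
[Interface]
PrivateKey = <app-server-private-key>
Address = 10.8.0.2/32

[Peer]
# the PostgreSQL host
PublicKey = <db-host-public-key>
Endpoint = db.example.com:51820
AllowedIPs = 10.8.0.1/32
PersistentKeepalive = 25
```

Nextcloud then connects to 10.8.0.1:5432 over the tunnel.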

I have not used WireGuard, and I am afraid to use MySQL for so many users and so many files. I know that with PDO parameters I can connect to MySQL with SSL, but I trust PostgreSQL more for a project like this. I will take a look at WireGuard though, thank you very much!

The volunteers on this forum basically all have zero interest in these platforms, which makes it difficult to offer advice on something like Google Cloud. Perhaps that comes with the niche. Checkout

Thirty years of IT experience or not, I think it is very optimistic to plan an installation of this size without consulting someone with the appropriate experience with the specific product you're planning to install, because every product has its own peculiarities and pitfalls. I myself do not have this experience, and neither do most of the users on this forum.

However, it is definitely possible to run such large and even larger installations, but perhaps not in exactly the way you are planning to implement it. And that's where consulting from Nextcloud GmbH or an enterprise contract could come in handy… Enterprise - Nextcloud


Volunteers should be interested, because this product is wonderful, and implementing it at large scale on any of the current cloud services would be a solution for cases like mine with so many files. I will try to test it anyway (Kubernetes + multi-instance PostgreSQL + multiple object stores + Redis + Imaginary). Rclone has also not helped me transfer the data to Nextcloud via WebDAV, even after tuning PHP-FPM, Nginx, memory, disk space, and chunk options; they work, but they are not an alternative for the user.
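For anyone comparing notes, a typical rclone WebDAV setup against Nextcloud looks roughly like this (the URL, username, and app password are placeholders; this is standard rclone syntax, not something specific to my setup):

```shell
# create a WebDAV remote named "nc" pointing at a Nextcloud instance
rclone config create nc webdav \
    url https://cloud.example.com/remote.php/dav/files/USERNAME \
    vendor nextcloud user USERNAME pass 'APP-PASSWORD' --obscure

# copy a local archive tree with modest parallelism
rclone copy /mnt/archive nc:Archive --transfers 4 --progress
```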

Thanks. I'm sorry you misread the point about my experience; I only mentioned it because I have implemented large-scale cloud services.

If I consult the forum, it is precisely to find an expert who has implemented large-scale services on cloud platforms.

I will try to get in touch with the enterprise options. And since this is a challenge, if I succeed maybe I can document it and post it on this same forum.


The size of your installation is definitely better served by enterprise support; you are unlikely to find many people here with experience running such huge systems.

Regarding GCP support, I'm really surprised Google doesn't offer an S3-compatible API… but it looks like you can use the S3 API to access Google's storage with very small adaptations: Simple migration from Amazon S3 to Cloud Storage  |  Google Cloud
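If that route works, Nextcloud's generic S3 object-store backend could in principle be pointed at GCS's interoperability endpoint. A rough, untested sketch of what that might look like in config.php (the bucket name and the HMAC key/secret are placeholders; GCS issues HMAC credentials specifically for S3 interoperability):

```php
// illustrative config.php fragment — verify against your Nextcloud version's docs
'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket'         => 'my-nextcloud-bucket',
        'hostname'       => 'storage.googleapis.com',
        'port'           => 443,
        'use_ssl'        => true,
        'use_path_style' => true,
        // HMAC interoperability credentials generated in the GCS console
        'key'            => 'GOOG1E-PLACEHOLDER',
        'secret'         => 'PLACEHOLDER-SECRET',
    ],
],
```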

Thanks!!! I think I had seen this Google guide, but on GitHub they warned that multistorage options were not possible with Google, only simple storage (a single storage space). Still, I will try it; again, thanks for the link.
If everything works for me I will write a manual of everything that worked. If not, I will go my own way. Thanks to all.

There are a lot of possibilities, but I’m not going to dig into all of them here.

I’d urge you to reach out to Nextcloud GmbH if this is an enterprise situation (that’s unclear). That’s what they’re there for. :slight_smile:

A few thoughts, queries, and observations:

PostgreSQL + SSL connectivity from Nextcloud seems to work for me:

ncdb=# select datname, usename, ssl, client_addr from pg_stat_ssl join pg_stat_activity on pg_stat_ssl.pid = pg_stat_activity.pid;
 datname |  usename   | ssl | client_addr
---------+------------+-----+-------------
 ncdb    | ncuser     | t   |
 ncdb    | oc_ncadmin | t   |
 ncdb    | oc_ncadmin | t   |
 ncdb    | oc_ncadmin | t   |
 ncdb    | oc_ncadmin | t   |
 ncdb    | oc_ncadmin | t   |
Why are you trying to avoid using chunking? Chunking is how you upload large files these days without having to have crazy parameters on the web path and a fragile upload infrastructure. Are you possibly thinking of the sub-optimal interaction with multipart uploads on S3 object storage destinations? NC v26+ changed that. And, in any case, you’d still want to use chunking (which predates S3 optimized multipart support).
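As a back-of-the-envelope illustration of why chunking sidesteps those crazy web-path parameters (the 500 MB chunk size here is an arbitrary assumption, not any client's default):

```shell
# number of 500 MB chunks needed to upload a 25 GB file;
# each individual HTTP request then stays far below any
# post_max_size or reverse-proxy body-size limit
total=$((25 * 1024 * 1024 * 1024))
chunk=$((500 * 1024 * 1024))
echo $(( (total + chunk - 1) / chunk ))   # prints 52
```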

Most people that bring up chunking+multipart+S3 uploading seem to be concerned about performance, but it’s unclear what your concerns are on that front.

Have you attempted to use Google Cloud Storage’s S3-compatible API?

What do you mean by “multistorage”? Are you referring to multipart uploads? Multi-region? Or maybe multibuckets?

You also stated:

Volunteers should be interested, because this product is wonderful, and implementing it at large scale on any of the current cloud services would be a solution for cases like mine where I have so many files. I will try to test it anyway (Kubernetes + multi-instance PostgreSQL + multiple object stores + Redis + Imaginary).

It’s not for lack of interest - those deployments exist, but those people are often getting paid or paying others to design/deploy/maintain them. :wink: You may get some responses here, but if so it’ll be out of pure luck (or patience!). In general, most non-enterprise deployments do not need (let alone use) Kubernetes, multiple db servers, multiple object stores, etc. :slight_smile:

If you need suggestions on a specific timeline or level of trust you’ll want to reach out to Nextcloud GmbH (corporate). They have sales people, consultants, and provide access to the core developers for precisely this sort of situation. Well that or you’ll be doing lots of leg work searching around, testing yourself, etc. like most/many of us. :slight_smile:

I would not assume a lack of native Google Cloud Storage API support in NC implies a lack of sizable deployments. They’re out there in both the public cloud and private clouds.

For one, many use cases get by just fine without object storage (but that's a different discussion). Additionally, S3-compatible APIs have sort of become the de facto Object Storage API.

See MinIO, Ceph (often for private cloud/in-house deployments), and all the public cloud providers you named yourself you were testing against as S3 compatible examples. To some extent that coverage has reduced the need for non-S3 API support (but not entirely).

I understand your preference to use Google Cloud if that’s where you have other infrastructure. That doesn’t mean you have to use their native Object Storage. (It also doesn’t mean you have to stick with it for everything, but that’s a deeper discussion.)

I did some quick and dirty tests prompted by your post using Google Cloud Storage’s current S3 API. My multipart uploads (all >2GB) went fine from both NC v25 and v26 test installations. Keep in mind I didn’t look too closely as I was just being curious!

I'd encourage you to do some further testing against the S3-compatible APIs @ Google (since that seems promising - particularly since they appear to have added more S3-compatible multipart upload support in late 2021, so there might even be some additional optimizations in there if it turns out to be truly compatible with AWS's in that area).

If for some reason that doesn’t work out, another thought that pops into mind is using MinIO within Google’s infrastructure (MinIO | MinIO for Google Kubernetes Engine), but if you can use Google’s own S3 API that’d be even better to avoid the hassle of course.

Again, reach out to Nextcloud GmbH if this is an enterprise situation. And, if not, well I suspect you’ll have some notes to add to the community about your adventures. :slight_smile: Good luck!


Awesome, I will check everything you have told me in detail. I will first try to do it alone, and if it doesn't work I will knock on GmbH's door. I have never tried MinIO… I will look into it, and I will check the S3 compatibility with Google in detail. I am really committed to them, because I have everything there (Analytics, Datastore, Dialogflow, among many other things). Actually, on Google it is optional to use PostgreSQL with SSL, but I had tried on DigitalOcean and could not connect to the database, as there it is a requirement.
Thank you very much for investing your time, and I hope I can make something of Nextcloud; I think it is the only thing with which I can compete with Google Drive, OneDrive, and SharePoint, especially to present this data, which is really more historical than files in production.


I have done it… stored in Cloud Storage on GCP. For now I will go slow and only publish 50 TB. I have set up the database with Postgres and a cluster service with load balancers, for now with 4 pods.

I will use the SSO integration with Azure. My fear now is the bucket-side encryption, which I think is a great and perfect idea, but if something happens to that database, I will have to leave the country where I live :smiley:


@Weimar-Meneses your solution would make a great lightning talk. I would be interested to see what you have built / are building.

With pleasure. I am still struggling with the deletion of objects.

Nextcloud Office does not work; it cannot open files.