Nextcloud takes all of my RAM

So we are in the same boat there.

This is my suspicion, yes.

I would personally only use a Raspberry Pi 4 for a 1-user Nextcloud install (they are reasonably cheap and good value for what you get), and yes, I would put NC 17 on it at this time. And this is with the Nextcloud data dir and DB moved out to some decently fast USB3-attached storage.
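Moving the data directory means copying the files to the new mount first, then pointing `datadirectory` in Nextcloud's `config/config.php` at it. A sketch, where the mount path is only an example:

```php
<?php
// config/config.php (excerpt); the mount path below is illustrative
$CONFIG = array (
  'datadirectory' => '/mnt/usb-ssd/nextcloud-data',
  // ... rest of your existing config ...
);
```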

A Raspberry Pi 3 is just a little too slow for comfort, even for 1 user (I’ve tried it). I would personally use a Raspberry Pi 3 for something like a Pi-hole instead.

Ok thank you for the clarification :slight_smile:

Hi all,

So the only idea I have left (other than buying a Raspberry Pi 4) is to try migrating from version 18 back to version 17. Is that easily feasible with Docker? If I just change the image tag but keep the same Docker volumes, will it work?
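Pinning the image tag in a Compose file would look like the sketch below (the service name and volume are assumptions). Note, though, that Nextcloud does not support downgrading across major versions, since the upgrade migrates the database schema one way, so restoring a pre-upgrade backup of the volumes is the safer route back to 17.

```yaml
# docker-compose.yml (sketch); names and volumes are examples
services:
  nextcloud:
    image: nextcloud:17   # pin the major version instead of "latest"
    volumes:
      - nextcloud_data:/var/www/html
volumes:
  nextcloud_data:
```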

I think I managed to limit the RAM usage by changing the MPM configuration of Apache (in /etc/apache2/mods-enabled/mpm_prefork.conf):

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# MaxRequestWorkers: maximum number of server processes allowed to start
# MaxConnectionsPerChild: maximum number of requests a server process serves

<IfModule mpm_prefork_module>
	StartServers             2
	MinSpareServers          2
	MaxSpareServers          3
	MaxRequestWorkers        3
	MaxConnectionsPerChild   0
</IfModule>

The default values were way too high for my Raspberry Pi 3. For instance, when loading the Photos tab, one Apache process takes 150 MB, and with the default values Apache tries to run up to 150 processes. I reduced it to 3, which is the maximum my Raspberry can handle. As a consequence, loading the Photos page is very slow, but at least my server no longer crashes. The other pages load fine, because they contain fewer heavy files.

To conclude, I agree with @esbeeb that a Raspberry Pi 3 might not be enough to use Nextcloud comfortably. I don’t think it is related to the version of Nextcloud; it is just a matter of configuring Apache according to the server’s capacity, which is limited in my case.


Remark: I calculated the number of workers based on the RAM usage when loading the Photos page, but I noticed that on other pages a worker doesn’t take much RAM. So I might be able to use more than 3 workers.
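The sizing above can be sketched as a quick shell calculation. The 150 MB per-worker figure comes from the Photos-page observation; the OS/database headroom is an assumption:

```shell
#!/bin/sh
# Rough MaxRequestWorkers sizing for a 1 GB Raspberry Pi 3.
total_mb=1024        # total RAM on the Pi 3
reserved_mb=200      # assumed headroom for the OS, database, etc.
per_worker_mb=150    # worst-case RSS of one prefork worker (Photos page)

workers=$(( (total_mb - reserved_mb) / per_worker_mb ))
echo "MaxRequestWorkers $workers"
```

With these numbers it prints `MaxRequestWorkers 5`; the more conservative value of 3 in the config above leaves extra room for PHP and the database.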

For Nextcloud 17, I personally wouldn’t buy a Raspberry Pi 4 either. I would suggest a refurbished Lenovo Thinkpad X230 at least. These can be bought with an i5 CPU and 4GB of RAM for $142 USD plus shipping (where I live). The i5 CPU would be a quantum leap in performance over a Raspberry Pi 4. Maybe toss in an SSD, and you’d have true SATA speeds and reliability.

You get so much more all-around horsepower for just a little more money.

Your efforts to fine tune Nextcloud are commendable, but there comes a point where just “going with the flow” (of Nextcloud being designed to expect more horsepower) will save you time and hassle, and be well worth that bit more money.


I have been having similar issues. I think there are a couple of solutions.

I think the most likely culprit is thumbnail generation. I found that even with less than 1 GB of files, the server fires off a ton of requests to generate thumbnails, and if you’re using external storage, this takes even longer.

This is where the Apache processes above eat up your RAM. They hold on to it because their configuration controls how many requests each worker serves and how long it lives.

One thing that really helps for this specific issue is to use the preview generator.

https://apps.nextcloud.com/apps/previewgenerator

You can run this from cron and it will pick up any new images and generate thumbnails for them. Because it doesn’t go through Apache, there is barely any overhead; it just works through your images one by one. I’ve been running this on a Digital Ocean droplet, and even after increasing the server to 8 GB of RAM (as a temporary test, of course!) it still ate up all the RAM. But running this script, it sat in the background and generated everything with no issue!
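For reference, a typical Preview Generator setup looks like the sketch below. The install path and cron interval are assumptions; `preview:generate-all` and `preview:pre-generate` are the app’s occ commands:

```shell
# One-time: build previews for everything already uploaded (can take a while)
sudo -u www-data php /var/www/nextcloud/occ preview:generate-all

# Then schedule pre-generation for newly added files, e.g. in www-data's
# crontab, every 10 minutes:
# */10 * * * * php -f /var/www/nextcloud/occ preview:pre-generate
```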

Secondly, something I touched on earlier: there are settings within Apache that decide how much a worker is allowed to serve and how long it lives. I can’t remember the exact settings; I’m just about to look them up myself. But we have done something similar at work.

Essentially the default is 100 for serving, but this is frequently changed to 2000 or even 5000 for bigger servers. You can give it a gentle increase and see how it goes; if it makes no difference, try a bit more.

The other setting is how long workers wait before they die. The default is 5 seconds; you can safely reduce this to 3. Once the Apache workers die, the RAM is freed up.
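The two settings being described here are likely `MaxConnectionsPerChild` (how many connections a worker serves before it is recycled and its memory freed) and `KeepAliveTimeout` (how long an idle connection holds a worker; the Apache default is 5 seconds). A sketch with illustrative values:

```apache
<IfModule mpm_prefork_module>
	# Recycle each worker after this many connections so leaked memory is freed
	MaxConnectionsPerChild 2000
</IfModule>

# Free idle keep-alive workers sooner (default is 5 seconds)
KeepAliveTimeout 3
```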


Hmm, I’ve been running Nextcloud on a Raspberry Pi 3 with three users since 2017 without any problems whatsoever. Absolutely fantastic. But it auto-upgraded to the latest version, and now it is using up all the RAM and killing the Pi. I regularly cannot sync files and end up e-mailing stuff from one computer to another.

I’m not very happy. I’m actually more likely to look for a different sync solution now than to buy new hardware.

I saw this post after searching for RAM usage related to ClamAV, and while looking at the app page I noticed there are additional setup instructions for the image preview generator.

You need to run the command manually once to generate the initial image cache, then enable the cron job. See the description on the app page for more information.