Running background jobs every minute instead of 15 mins

I have followed the Nextcloud manual and enabled the Nextcloud cron job through a systemd timer. The manual gives a default of every 15 minutes to execute /usr/bin/php -f /var/www/nextcloud/cron.php.
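For reference, a minimal sketch of such a service/timer pair (unit names, user, and paths are my assumptions for a typical install, not necessarily what the manual uses):

```ini
# /etc/systemd/system/nextcloudcron.service
[Unit]
Description=Nextcloud cron.php job

[Service]
Type=oneshot
User=www-data
ExecStart=/usr/bin/php -f /var/www/nextcloud/cron.php

# /etc/systemd/system/nextcloudcron.timer
[Unit]
Description=Run Nextcloud cron.php every 15 minutes

[Timer]
OnBootSec=15min
OnUnitActiveSec=15min
Unit=nextcloudcron.service

[Install]
WantedBy=timers.target
```

The timer would then be enabled with systemctl enable --now nextcloudcron.timer.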

Is it recommended to run it every minute?

The command does a lot of background jobs. Depending on the load, a single cron run could take more than 1 minute, and then you get conflicts. If you run it too infrequently, notification mails are sent with a noticeable delay, trash-bin or versioned files are not deleted fast enough, … So 15 minutes seems to be a reasonable trade-off. On very low-activity systems you can increase the interval (to allow disks to spin down, at least during the night).
Unfortunately, I don’t know what they do on larger systems with a few thousand users.

What if the cron job takes longer than 15 minutes (e.g. on very large instances)? Will there be any warning in the logs?

It could be that there is a lock file so that the cron job can only run once at a time. If that is the case, you could probably reduce the time between cron jobs. At the very least there is a protection that no cron job is running before an update is started, so I suppose that applies to the cron job itself as well.

In my case the cron job takes just a few seconds, but I also guess the 15-minute trade-off should be considered best in most cases.

Running it every minute might defeat some caching, spam your syslog, and so on.
Also, cron.php seems to be optimized for a 15-minute interval: as far as I remember, the actual jobs have their own schedule and only run if a certain amount of time has passed. Not all of them run every 15 minutes; some run only every half hour or every hour. So even if cron.php is called every minute, many jobs will simply be skipped, if not all of them when less than 15 minutes have passed. I found the job definitions some time ago, but would need to recheck this.
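That interval gating can be sketched generically: a job remembers when it last ran and is skipped when invoked again too soon, so a one-minute caller does not make an hourly job run more often. A minimal illustration (the timestamp file and interval are made up; Nextcloud itself keeps last-run times in the database, not in files):

```shell
#!/bin/sh
# Skip the job body unless INTERVAL seconds have passed since the last run.
STAMP=/tmp/hourly-job.stamp   # hypothetical state file
INTERVAL=3600                 # run at most once per hour

now=$(date +%s)
last=$(cat "$STAMP" 2>/dev/null || echo 0)

if [ $((now - last)) -lt "$INTERVAL" ]; then
    exit 0  # called too soon: skip, like cron.php skips unscheduled jobs
fi

echo "$now" > "$STAMP"
# ... the actual hourly work would go here ...
```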

So the most important question is whether you actually face any issues with the 15-minute interval. If so, a specific background job might need more attention than just calling cron.php more frequently.

Sorry to revive this, but I have had problems with this for quite some time and it is still a problem.

I was inspecting the cron.php file, ran some tests, and noticed that there is no check to avoid running multiple cron.php instances. We have lots of external storages configured, and sometimes these take a long time to scan. So I would often inspect the server and see 30+ different cron jobs trying to run. This is obviously a problem and should never happen, since they will never finish while every 5 minutes a new cron job starts on top of them.

Even though I think the cron.php script itself should check for this, it seems the systemd option would also fix the problem. So, for now I disabled the cron job and configured it as a systemd service/timer as described in:

I have also gone back to 15-minute intervals instead of 5-minute ones.

I am still not 100% sure the systemd timer avoids running multiple instances, but all the info I could find pointed that way. (As far as I know, systemd will not start a second instance of a unit that is still active, so a timer tick that fires while the service is running should simply be a no-op.)

Would still be nice to have the cron.php check if it is already running.

Sounds reasonable. As a definitive solution, you could wrap the cron job in a shell script that first uses pgrep to look for existing cron.php instances and then either exits or starts another run. Running cron jobs less frequently would just lead to longer execution times per run, so that alone probably would not have much effect.

But since you mention many external storages:

  • Do users on that Nextcloud instance add the same external storages?
  • In that case it should be much more efficient if only one user or the admin adds them and shares the contained files/directories within Nextcloud with all the others.
  • Another reason for this: otherwise there are multiple oc_filecache entries for the same files, which need to be scanned, stored, and edited whenever anyone makes a change.
  • I have observed MariaDB tables of dozens of GiB when all members of an office each add the same external storages themselves; this creates an enormous overhead and, in one particular case, overloaded the database so badly that it ended in corruption.

It seems that they removed it:

I’d ask the developers; they recommended lowering the cron job interval.