Yes. But if you run the cron job every 5 minutes, it does not work reliably: because each run takes so long, multiple instances end up running in parallel.
But perhaps it is a problem with your old operating system. With Ubuntu 20.04 LTS or Debian 10 (Buster) you get a newer PHP version than PHP 7.0. Which operating system is installed?
Both systems are still supported, but they are perhaps not a good basis for Nextcloud. To run Nextcloud on them you must use a PHP version from a source other than the normal distribution packages.
In your database you have a table oc_jobs (or jobs). It lists all jobs executed during a cron run, and there is even a column with the execution duration.
With this list you can cross-check against the installed apps whether everything is working as expected. Among your apps there is software like imageconverter that could create a lot of load on the system.
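As a starting point, the slowest jobs can be pulled straight from that table. A minimal sketch, assuming a MariaDB/MySQL database named `nextcloud` with a user of the same name (adjust credentials and database name to your setup); the `execution_duration` column exists in current Nextcloud schemas:

```shell
# List the 20 slowest background jobs recorded in oc_jobs.
# Database name and user "nextcloud" are assumptions; adjust to your setup.
mysql -u nextcloud -p nextcloud -e \
  "SELECT class, FROM_UNIXTIME(last_run) AS last_run, execution_duration
     FROM oc_jobs
    ORDER BY execution_duration DESC
    LIMIT 20;"
```

A job class that dominates this list (or appears thousands of times) is usually the one worth investigating.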
That is great, and how long took the run?
The 5 minute cycle was changed recently; before, it was about 15 minutes. A drawback of longer cycles is that notification mails are sent with a larger delay. Normally the cronjob creates a lockfile, so there shouldn’t be any overlap (the new process makes sure the old one finished).
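If you want to enforce non-overlap at the cron level yourself as a defensive measure, `flock -n` makes a run exit immediately while the previous one still holds the lock. A sketch, assuming Nextcloud lives under `/var/www/nextcloud` and cron runs as the web-server user (both assumptions, adjust to your installation):

```shell
# crontab entry (e.g. for www-data): skip the run if the lock is still held.
# /var/www/nextcloud and the lock path are assumptions; adjust as needed.
*/5 * * * * flock -n /tmp/nextcloud-cron.lock php -f /var/www/nextcloud/cron.php
```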
If everything takes long, you might take a look at your database caching settings.
If you have Redis, I’d use it as the file-locking cache as well; that takes load away from the normal database.
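Switching file locking to Redis can be done with `occ config:system:set`. A sketch, assuming Redis listens on localhost:6379 and `occ` is run as `www-data` from the Nextcloud root directory (all assumptions, adjust to your setup):

```shell
# Use Redis as the file-locking cache instead of the database.
# Host, port and the www-data user are assumptions; adjust to your setup.
sudo -u www-data php occ config:system:set memcache.locking --value '\OC\Memcache\Redis'
sudo -u www-data php occ config:system:set redis host --value 'localhost'
sudo -u www-data php occ config:system:set redis port --value '6379' --type integer
```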
Then it’s just a matter of time. Don’t forget to dump the jobs table so you can track this issue. I’m not sure about the logs (you may need to increase the log level), but perhaps they mention which job is started, so in case of a crash you just need to check which one was started last.
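For the dump and the log level, something like the following could work; database name, user and the `www-data` user are assumptions, adjust to your setup:

```shell
# Snapshot the jobs table so you can compare it between runs.
# Credentials and database name "nextcloud" are assumptions.
mysqldump -u nextcloud -p nextcloud oc_jobs > oc_jobs_$(date +%F_%H%M).sql

# Raise the log level to debug so job activity shows up in nextcloud.log.
sudo -u www-data php occ log:manage --level debug
```

Remember to set the level back (e.g. `--level warning`) once you are done, or the log grows quickly.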
I am having this exact problem, and I have made sure the PHP configuration is correct. It only started after I moved to Nextcloud 19; could this be an issue with PHP 7.4?
Nextcloud version: 19.0.0
Operating system and version: Ubuntu 20.04 LTS
Apache or nginx version: nginx/1.18.0 (fpm-fcgi)
Database: mysql 10.1.44
PHP version: 7.4.8
But it always seems to get stuck at the same times: once at 23:30 and once at 05:30 (CEST).
But I have to agree with @Chris_Aldred: I’ve only had this problem since Nextcloud 19.
Though only on the Ubuntu system; I manage another one on Debian, and there the problem does not occur, even though the configuration is almost identical.
You can run a script via cron afterwards to query the values from the database. The logs might also give hints.
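A sketch of such a query, assuming a database named `nextcloud` (adjust credentials to your setup): after a run, or a hang, this shows which jobs were picked up last, and a non-zero `reserved_at` marks a job that was claimed but has not finished:

```shell
# Show the most recently started background jobs.
# Database name and user "nextcloud" are assumptions; adjust to your setup.
mysql -u nextcloud -p nextcloud -e \
  "SELECT id, class, FROM_UNIXTIME(last_run) AS last_run, reserved_at
     FROM oc_jobs
    ORDER BY last_run DESC
    LIMIT 10;"
```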
I am not sure whether all the jobs do something on each run; I’d think some of them are daily routines, for example for the trash bin and similar. But I don’t know enough to tell you what has changed in NC 19. Perhaps shift the run time a bit; on your Ubuntu system it may collide with a different cron job.
Ok, I have now set up the whole system on a brand-new Debian server and migrated the data, but I still have the problem somehow.
I really don’t know what it is
I found a problem with the Maps app in my NC installation: there were over 3650 Maps AddPhotoJob entries in my oc_jobs table (the cron job table for NC).
I uninstalled the Maps app and deleted the jobs from my oc_jobs table with:
DELETE FROM oc_jobs WHERE class LIKE '%Maps%' AND argument LIKE '%photoId%';
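If anyone wants to check first how many such jobs have piled up before deleting, a sketch (database name and user are assumptions, adjust to your setup):

```shell
# Count queued Maps photo jobs before deleting them.
# Credentials and database name "nextcloud" are assumptions.
mysql -u nextcloud -p nextcloud -e \
  "SELECT COUNT(*) FROM oc_jobs
    WHERE class LIKE '%Maps%' AND argument LIKE '%photoId%';"
```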
Now my cron runs through in seconds and uses less than 200 MB of memory.
That helped a lot, especially on a Raspberry Pi, which started swapping a few gigabytes of RAM every time the cron job ran.