Cron.php takes too long and can end up running multiple times

To give a clearer indication of the run times I’m seeing:

$ date; php -f /srv/nextcloud-18/cron.php; date
Thu 18 Jun 07:07:14 UTC 2020
Thu 18 Jun 07:31:24 UTC 2020

$ date; php -f /srv/nextcloud-18/cron.php; date
Thu 18 Jun 07:31:48 UTC 2020
Thu 18 Jun 07:51:38 UTC 2020

$ date; php -f /srv/nextcloud-18/cron.php; date
Thu 18 Jun 07:51:48 UTC 2020
Thu 18 Jun 08:13:05 UTC 2020

There are three recurring errors during the run:

[documentserver_community] Error: OCA\DocumentServer\Document\DocumentConversionException: change_code: a?this.Nb.splice(d+1,this.Nb[0]-d):this.Nb[0]!==d+1&&0!==a&&(this.Nb.splice(d+2,this.Nb[0]-d-1),this.Nb[0]=d+1)}0===a&&this.Nb[0]!==d+1&&(this.Nb.splice(d+1,0,this.Nb.length-d-1),this.Nb[0]=d+1);c=1+this.Nb[0]+this.Nb[d+1]+2*a;this.Nb[c+0]=0;this.Nb[c+1]=0;return 0!==d||0!==a?this.Nb[c-1]:0};nb.prototype.saa=function(d,a,c,e){d=this.lxb(d,a);this.Nb[d+0]=c;this.Nb[d+1]=e};nb.prototype.qYb=function(d,a,c){this.Nb[this.lxb(d,a)+1]=c};nb.prototype.BG=function(){,};
change: Uncaught TypeError: Cannot read property 'b8d' of undefined
DoctRenderer:<result><error index="18500" /></result>
 at <<closure>>

0. /srv/nextcloud-18/apps/documentserver_community/lib/DocumentConverter.php line 123
   OCA\DocumentServer\Document\ConverterBinary->run("/tmp/oc_tmp_Ij1pjP-.xml", null)
1. /srv/nextcloud-18/apps/documentserver_community/lib/DocumentConverter.php line 78
   OCA\DocumentServer\DocumentConverter->runCommand(OCA\DocumentServ ... {})
2. /srv/nextcloud-18/apps/documentserver_community/lib/Document/DocumentStore.php line 151
   OCA\DocumentServer\DocumentConverter->saveChanges("*** sensitive parameter replaced ***", ["78;AgAAADEA//8 ... "], "/srv/nextcloud- ... x", "docx")
3. /srv/nextcloud-18/apps/documentserver_community/lib/LocalAppData.php line 100
   OCA\DocumentServer\Document\DocumentStore->OCA\DocumentServer\Document\{closure}("*** sensitive parameters replaced ***")
4. /srv/nextcloud-18/apps/documentserver_community/lib/Document/DocumentStore.php line 164
   OCA\DocumentServer\LocalAppData->getReadWriteLocalPath(OC\Files\SimpleFS\SimpleFolder {}, Closure {})
5. /srv/nextcloud-18/apps/documentserver_community/lib/Document/SaveHandler.php line 56
   OCA\DocumentServer\Document\DocumentStore->saveChanges(619082024, [OCA\DocumentSer ... }])
6. /srv/nextcloud-18/apps/documentserver_community/lib/BackgroundJob/Cleanup.php line 69
7. /srv/nextcloud-18/lib/public/BackgroundJob/Job.php line 79
8. /srv/nextcloud-18/cron.php line 125
   OCP\BackgroundJob\Job->execute(OC\BackgroundJob\JobList {}, OC\Log {})

at 2020-06-18T07:52:48+00:00
[documentserver_community] Error: OC\User\NoUserException: Backends provided no user object at <<closure>>

0. <<closure>>
1. /srv/nextcloud-18/lib/private/Files/Node/LazyRoot.php line 65
   call_user_func_array([OC\Files\Node\Root {},"getUserFolder"], ["ciaran.moran"])
2. /srv/nextcloud-18/lib/private/Files/Node/LazyRoot.php line 282
   OC\Files\Node\LazyRoot->__call("getUserFolder", ["ciaran.moran"])
3. /srv/nextcloud-18/apps/documentserver_community/lib/Document/DocumentStore.php line 134
4. /srv/nextcloud-18/apps/documentserver_community/lib/Document/SaveHandler.php line 56
   OCA\DocumentServer\Document\DocumentStore->saveChanges(43435096, [OCA\DocumentSer ... }])
5. /srv/nextcloud-18/apps/documentserver_community/lib/BackgroundJob/Cleanup.php line 69
6. /srv/nextcloud-18/lib/public/BackgroundJob/Job.php line 79
7. /srv/nextcloud-18/cron.php line 125
   OCP\BackgroundJob\Job->execute(OC\BackgroundJob\JobList {}, OC\Log {})

at 2020-06-18T07:52:00+00:00
[files] Error: Backends provided no user object for ciaran.moran

at 2020-06-18T07:52:00+00:00

For testing, first run cron.php only once per hour or once per day.
Check that no other cron.php jobs are running (top, htop).

Have you upgraded from OwnCloud? Perhaps something is wrong in your database or in your Nextcloud installation.

This has been upgraded from OwnCloud. I took it through many versions of NextCloud to get it up to date in a supported manner.

I don’t know how to rule that out. What should I be looking for in the logs that might indicate such a problem? Or are there any occ commands I can run that might help here?

You can run integrity checks, e.g. for apps.
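A minimal sketch of running those checks from the command line with occ; the install path comes from the traces above, the web-server user www-data is an assumption for a typical Debian setup, and documentserver_community is just the app from the error log:

```shell
# Assumed install path and web-server user; adjust to your environment.
cd /srv/nextcloud-18
sudo -u www-data php occ integrity:check-core
sudo -u www-data php occ integrity:check-app documentserver_community
```

If the code signature does not match, the commands list the offending files.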

Do you have errors on your Nextcloud admin overview page?
Perhaps post them.

I am currently facing a similar issue.
For me the average execution time of cron.php is 14 minutes.

Nextcloud version: 19.0.0
Operating system and version: Debian GNU/Linux 10 (buster)
PHP version: 7.4.6

Is there maybe a way to debug cron.php? (I did not find any entry in nextcloud.log.)

I am facing the same since upgrading to 19.x.
Any tips on figuring out why a cron run lasts over 1.5 hours without finishing?

I figured out, using the oc_jobs table, that the task OCA\Files\BackgroundJob\ScanFiles had already been running for 6747 seconds (if the value is in seconds).


Update on this.
The last run of OCA\Files\BackgroundJob\ScanFiles took 46367 seconds and I do not understand why.


How did you check this? I am facing a similar issue: cron takes 20 minutes each time. I set the cron job to run every 30 minutes, but the CPU spikes and each run still takes 20 minutes. Almost no users (<10), not much data; we use it for SMB browsing of our NTFS shares (Windows).

How did you figure out that OCA\Files\BackgroundJob\ScanFiles took that amount of seconds?

In my case, and in many others, the cron jobs were running for several hours, even days, without finishing. To spot this, you can use something like top, htop or similar.
The runtime of a task is logged in the jobs table of the database; just look it up. Something else I figured out is that the ScanFiles job only runs twice a day, so as long as it does not take more than 12 hours :wink: you should be fine :smiley:
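If you want the concrete number rather than watching top, the jobs table can be queried directly. A read-only sketch, assuming MySQL/MariaDB, the default oc_ table prefix, and the database name ncdb used elsewhere in this thread; reserved_at is a Unix timestamp set while a job is claimed:

```shell
# Shows how long the ScanFiles job has currently been held (no rows = not running).
mysql ncdb -e "SELECT class, FROM_UNIXTIME(last_run) AS last_run,
    UNIX_TIMESTAMP() - reserved_at AS seconds_held
    FROM oc_jobs
    WHERE class LIKE '%ScanFiles%' AND reserved_at > 0;"
```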

But if you want to dive deeper into this issue, this is the actual bug report for it

By executing SHOW ENGINE INNODB STATUS; during the cron job, I saw that the “oc_jobs” table was causing the issue.

mysql> DELETE FROM oc_jobs;
ERROR 2006 (HY000): MySQL server has gone away  (due to a restart of the MySQL server in another session)
No connection. Trying to reconnect…
Connection id: 2
Current database: ncdb

Query OK, 257192 rows affected (24.69 sec)

Sorted: now cron takes 2 seconds.

@christianpoortvliet I am not sure deleting the whole oc_jobs content is a good solution, especially because the jobs table is used to run recurring jobs that are not directly handled by the Linux cron daemon.
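Before a blanket DELETE, it may be worth seeing which job classes actually account for the rows; a read-only sketch under the same assumptions as the session above (oc_ table prefix, database ncdb):

```shell
# Top ten job classes by row count in oc_jobs.
mysql ncdb -e "SELECT class, COUNT(*) AS jobs
    FROM oc_jobs GROUP BY class
    ORDER BY jobs DESC LIMIT 10;"
```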

For now, as far as I understand the bug ticket, the preview-generator app could have something to do with all of it.

Any progress on this problem? This thread started last June…

In my case it does seem to be related to Preview generator.

They are working on a solution

For now, it seems that occ files:scan-app-data solves the problem for some users. A link to the use of the preview-generator app also seems likely, but yes, progress is slow; it seems quite difficult to address this “right”.

For now, if you are using the preview-generator app, try scanning the app data manually; if that does not help, jump onto the GitHub issue, and if you are keen, do what I did :slight_smile:

Delete the previews from the filesystem and the database :wink:


This appears to have solved the problem for me as well. I’m running Nextcloud 20. Cron jobs were taking 15-30 minutes. Now it is less than a minute.

FWIW, I’m still experiencing problems with this. Twice a day this task runs away to the point that Monit needs to reboot the server, or I do so manually if I’m currently working with or on it. While I don’t recall offhand, I’m quite sure this has been an ongoing problem for just under a year now. Ironically, I was unable to send this email when finished because of said runaway process; I had to reboot again in order to send this out. - Mark

I have been running Nextcloud 20 smoothly for months on an up-to-date Buster.
I have run into a similar issue of cron lasting for hours.
I had multiple cron.php processes running.

I did not find the reason.

So I am writing a script (not yet finished) to manage the issue, with the intent to kill cron.php automatically after a given amount of time and to avoid launching multiple instances when one runs for long.

Here is the script :slight_smile:

cat /usr/local/bin/nextcloudcron

#!/bin/sh
timeout=$((4 * 3600))      # hard kill after 4 hours (my choice; adjust to taste)
alertduration=$((5 * 60))  # log a warning when a run exceeds 5 minutes

if [ "$(/usr/bin/pgrep -fc "/var/www/nextcloud/cron.php")" -lt 1 ]; then
	start=$(date +%s)
	timeout -v "$timeout" /usr/bin/php -f /var/www/nextcloud/cron.php
	duration=$(($(date +%s) - start))
	if [ "$duration" -gt "$alertduration" ]; then
		/bin/date +"%H:%M:%S nextcloud cron Duration = $((duration / 60)) minutes and $((duration % 60)) seconds elapsed. timeout=$timeout ; alertduration=$alertduration"
	fi
else
	/bin/date +"%H:%M:%S cron Duplicate"
fi

Naturally, you have to adjust your crontab accordingly.

The question I still have: is it safe to kill this cron.php when it has been running for more than, say, 4 hours?



Is there any progress on this?
I am still facing the same problem:
multiple cron jobs that never end…


This is my version of your script.

I took out the alert part because I don’t want to cancel a job that is running too long; I’ll just let it run. But I never want a second job to fire up.

Also, I used logger to log the status to syslog.

Thanks for the help!

p.s. This is a kludge. The problem SHOULD be solved in the PHP that is executing. :slight_smile:


if [ "$(/usr/bin/pgrep -fc "/var/www/nextcloud/cron.php")" -lt 1 ]; then
        start=$(date +%s)
        /usr/bin/php -f /var/www/nextcloud/cron.php
        duration=$(($(date +%s) - start))
        logger -s "$0 Duration = $((duration / 60)) minutes and $((duration % 60)) seconds elapsed."
else
        logger "$0 is already running!"
fi
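A possible alternative to the pgrep guard is flock(1), which closes the race between checking for a running instance and starting a new one. A sketch; the lock-file path is my own choice, and sleep 2 stands in for the cron.php invocation:

```shell
# Replace "sleep 2" with: /usr/bin/php -f /var/www/nextcloud/cron.php
flock -n /tmp/nextcloud-cron.lock sleep 2 &   # first run takes the lock
sleep 0.5                                     # give it time to acquire it
if ! flock -n /tmp/nextcloud-cron.lock true; then
    echo "cron.php already running, skipping"
fi
wait
```

With -n, a second invocation fails immediately instead of queueing, so a crontab entry wrapped in flock never stacks up instances.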

In fact, I now run this script:


timeout=$((20 * 3600))     # times out after 20 hours
alertduration=$((5 * 60))  # warn when a run exceeds 5 minutes

if [ "$(/usr/bin/pgrep -fc "/var/www/nextcloud/cron.php")" -lt 1 ]; then
	start=$(date +%s)
	/usr/bin/timeout -v "$timeout" /usr/bin/php -f /var/www/nextcloud/cron.php
	duration=$(($(date +%s) - start))
	if [ "$duration" -gt "$alertduration" ]; then
		/bin/date +"%H:%M:%S nextcloud cron Duration = $((duration / 60)) minutes and $((duration % 60)) seconds elapsed. timeout=$timeout ; alertduration=$alertduration"
	fi
fi

Which times out after 20 hours.

Since my post, I have seen long runs of about 5 minutes and at most 40 minutes.
So the timeout did not occur.
The “long runs” happen at irregular times.
But in May the “issue” drifted by 1 hour every 4 days… strange.


So far I only have 2 days of logs to look at, but it has not run for more than 18 seconds at a time.

I love your use of the timeout command; I’m going to look into that. However, right now my primary concern is not killing the process but making sure another one does not start up. Since this is doing DB work, I want it to finish.

Can’t say that I’ve ever seen it go 20 hours! :slight_smile:
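For reference, timeout(1) reports through its exit status: it exits 124 when it had to kill the command, so a wrapper can tell a killed run from a finished one. A quick demonstration, with sleep standing in for a long cron.php run:

```shell
# A 1-second limit on a 3-second sleep; 124 means "killed at the limit".
timeout 1 sleep 3 || echo "timed out with status $?"
```

which prints `timed out with status 124`.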