To give a clearer indication of the run times I’m seeing:
www-data@fileserver.linaro.org:~$ date; php -f /srv/nextcloud-18/cron.php; date
Thu 18 Jun 07:07:14 UTC 2020
Thu 18 Jun 07:31:24 UTC 2020
www-data@fileserver.linaro.org:~$ date; php -f /srv/nextcloud-18/cron.php; date
Thu 18 Jun 07:31:48 UTC 2020
Thu 18 Jun 07:51:38 UTC 2020
www-data@fileserver.linaro.org:~$ date; php -f /srv/nextcloud-18/cron.php; date
Thu 18 Jun 07:51:48 UTC 2020
Thu 18 Jun 08:13:05 UTC 2020
There are three recurring errors during the run:
[documentserver_community] Error: OCA\DocumentServer\Document\DocumentConversionException: change_code: a?this.Nb.splice(d+1,this.Nb[0]-d):this.Nb[0]!==d+1&&0!==a&&(this.Nb.splice(d+2,this.Nb[0]-d-1),this.Nb[0]=d+1)}0===a&&this.Nb[0]!==d+1&&(this.Nb.splice(d+1,0,this.Nb.length-d-1),this.Nb[0]=d+1);c=1+this.Nb[0]+this.Nb[d+1]+2*a;this.Nb[c+0]=0;this.Nb[c+1]=0;return 0!==d||0!==a?this.Nb[c-1]:0};nb.prototype.saa=function(d,a,c,e){d=this.lxb(d,a);this.Nb[d+0]=c;this.Nb[d+1]=e};nb.prototype.qYb=function(d,a,c){this.Nb[this.lxb(d,a)+1]=c};nb.prototype.BG=function(){this.bb&&(this.bb.ey.b8d(),this.bb.kd.Z7a(Kl))};
change: Uncaught TypeError: Cannot read property 'b8d' of undefined
DoctRenderer:<result><error index="18500" /></result>
at <<closure>>
0. /srv/nextcloud-18/apps/documentserver_community/lib/DocumentConverter.php line 123
OCA\DocumentServer\Document\ConverterBinary->run("/tmp/oc_tmp_Ij1pjP-.xml", null)
1. /srv/nextcloud-18/apps/documentserver_community/lib/DocumentConverter.php line 78
OCA\DocumentServer\DocumentConverter->runCommand(OCA\DocumentServ ... {})
2. /srv/nextcloud-18/apps/documentserver_community/lib/Document/DocumentStore.php line 151
OCA\DocumentServer\DocumentConverter->saveChanges("*** sensitive parameter replaced ***", ["78;AgAAADEA//8 ... "], "/srv/nextcloud- ... x", "docx")
3. /srv/nextcloud-18/apps/documentserver_community/lib/LocalAppData.php line 100
OCA\DocumentServer\Document\DocumentStore->OCA\DocumentServer\Document\{closure}("*** sensitive parameters replaced ***")
4. /srv/nextcloud-18/apps/documentserver_community/lib/Document/DocumentStore.php line 164
OCA\DocumentServer\LocalAppData->getReadWriteLocalPath(OC\Files\SimpleFS\SimpleFolder {}, Closure {})
5. /srv/nextcloud-18/apps/documentserver_community/lib/Document/SaveHandler.php line 56
OCA\DocumentServer\Document\DocumentStore->saveChanges(619082024, [OCA\DocumentSer ... }])
6. /srv/nextcloud-18/apps/documentserver_community/lib/BackgroundJob/Cleanup.php line 69
OCA\DocumentServer\Document\SaveHandler->flushChanges(619082024)
7. /srv/nextcloud-18/lib/public/BackgroundJob/Job.php line 79
OCA\DocumentServer\BackgroundJob\Cleanup->run(null)
8. /srv/nextcloud-18/cron.php line 125
OCP\BackgroundJob\Job->execute(OC\BackgroundJob\JobList {}, OC\Log {})
at 2020-06-18T07:52:48+00:00
[documentserver_community] Error: OC\User\NoUserException: Backends provided no user object at <<closure>>
0. <<closure>>
OC\Files\Node\Root->getUserFolder("ciaran.moran")
1. /srv/nextcloud-18/lib/private/Files/Node/LazyRoot.php line 65
call_user_func_array([OC\Files\Node\Root {},"getUserFolder"], ["ciaran.moran"])
2. /srv/nextcloud-18/lib/private/Files/Node/LazyRoot.php line 282
OC\Files\Node\LazyRoot->__call("getUserFolder", ["ciaran.moran"])
3. /srv/nextcloud-18/apps/documentserver_community/lib/Document/DocumentStore.php line 134
OC\Files\Node\LazyRoot->getUserFolder("ciaran.moran")
4. /srv/nextcloud-18/apps/documentserver_community/lib/Document/SaveHandler.php line 56
OCA\DocumentServer\Document\DocumentStore->saveChanges(43435096, [OCA\DocumentSer ... }])
5. /srv/nextcloud-18/apps/documentserver_community/lib/BackgroundJob/Cleanup.php line 69
OCA\DocumentServer\Document\SaveHandler->flushChanges(43435096)
6. /srv/nextcloud-18/lib/public/BackgroundJob/Job.php line 79
OCA\DocumentServer\BackgroundJob\Cleanup->run(null)
7. /srv/nextcloud-18/cron.php line 125
OCP\BackgroundJob\Job->execute(OC\BackgroundJob\JobList {}, OC\Log {})
at 2020-06-18T07:52:00+00:00
[files] Error: Backends provided no user object for ciaran.moran
at 2020-06-18T07:52:00+00:00
This install was upgraded from ownCloud. I took it through many versions of Nextcloud to get it up to date in a supported manner.
I don’t know how to rule that out. What should I be looking for in the logs that might indicate such a problem? Or are there any occ commands I can run that might help here?
How did you check this? I am facing a similar issue. Cron takes 20 minutes each time; I set the cron job to run every 30 minutes, but the CPU spikes and each run still takes 20 minutes. Almost no users (<10) and not much data; we mainly use it for SMB browsing of our Windows NTFS shares.
How did you figure out that OCA\Files\BackgroundJob\ScanFiles took that amount of seconds?
In my case, and in many others, the cron jobs were running for several hours, even days, without finishing. To spot that, you can use something like top, htop or similar.
The runtime of a task is logged in the jobs database; just look it up. Something else I figured out is that the ScanFiles job only runs twice a day, so as long as it does not take more than 12 hours you should be fine.
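If you want to look that up directly, a query along these lines should work against MySQL/MariaDB. This is a sketch, assuming the default “oc_” table prefix and the execution_duration column present in recent Nextcloud schemas:

```sql
-- Sketch: list the slowest background jobs recorded in the jobs table
-- (assumes the default "oc_" prefix; last_run is a unix timestamp)
SELECT class, FROM_UNIXTIME(last_run) AS last_run, execution_duration
FROM oc_jobs
ORDER BY execution_duration DESC
LIMIT 10;
```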
But if you want to dive deeper into this issue, this is the actual bug report for it:
By executing SHOW ENGINE INNODB STATUS; during the cron job, I saw that the “oc_jobs” table was causing the issue.
mysql> DELETE FROM oc_jobs;
ERROR 2006 (HY000): MySQL server has gone away -> due to a restart of MySQL db on another session
No connection. Trying to reconnect…
Connection id: 2
Current database: ncdb
@christianpoortvliet I am not sure deleting the whole oc_jobs content is a good solution, especially because the jobs table is used to run recurring jobs which are not directly handled by the Linux cron daemon.
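If the table looks wedged, a less destructive option than deleting everything might be to clear only stale reservations so those jobs can be picked up again. This is a sketch based on my reading of the oc_jobs schema, not an official procedure, so back the table up first:

```sql
-- Sketch: release jobs whose reservation is older than 12 hours
-- instead of wiping oc_jobs; back up the table before running this.
UPDATE oc_jobs
SET reserved_at = 0
WHERE reserved_at != 0
  AND reserved_at < UNIX_TIMESTAMP() - 12*3600;
```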
As far as I understand the bug ticket, the preview-generator app could have something to do with all of it.
For now it seems that occ files:scan-app-data solves the problem for some users. A link to the preview-generator app also seems likely, but yes, progress is slow; it seems quite difficult to address “right”.
So if you are using the preview-generator app, try scanning the app data manually; if that does not help, jump onto the GitHub issue and, if you are keen, do what I did:
Delete the previews from the filesystem and the database.
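For reference, the scan step might be scripted roughly like this. Only occ files:scan-app-data itself comes from this thread; the wrapper function, the occ path and the www-data user are assumptions to adjust for your install:

```shell
#!/bin/bash
# Hedged sketch: re-index appdata (where previews live) via occ.
scan_app_data() {
    local occ="$1"   # path to the occ script, e.g. /var/www/nextcloud/occ
    if [ ! -e "$occ" ]; then
        echo "occ not found: $occ"
        return 1
    fi
    # Previews sit under data/appdata_<instanceid>/preview; this
    # re-scans that tree so the filecache matches the filesystem.
    sudo -u www-data php "$occ" files:scan-app-data
}
```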
FWIW I’m still experiencing problems with this. Twice a day, when this task runs, the server runs away to the point that Monit needs to reboot it, or I do so manually if I’m currently working with or on it. While I don’t recall offhand, I’m quite sure this has been an ongoing problem for just under a year now. Ironically, I was unable to send this email when I finished it because of said runaway process; I had to reboot again in order to send it out. - Mark
Hi,
I have been running Nextcloud 20 smoothly for months on an up-to-date Buster.
I have run into a similar issue of cron lasting for hours.
I had multiple cron.php processes running.
I did not find the reason.
So I am setting up a script (not yet finished) to manage the issue, with the intent of automatically killing the job after a given amount of time, and of avoiding launching multiple instances when one runs for long.
Here is the script
cat /usr/local/bin/nextcloudcron
#!/bin/bash
# Only run if no other cron.php instance is active
if [ "$(/usr/bin/pgrep -fc "/var/www/nextcloud/cron.php")" -lt 1 ]
then
    alertduration=$((4*60*60))          # log a warning if a run exceeds 4 hours
    timeout=$((alertduration + 5*60))   # intended kill limit (the timeout call is still commented out)
    SECONDS=0
    # timeout -v $timeout /usr/bin/php -f /var/www/nextcloud/cron.php
    /usr/bin/php -f /var/www/nextcloud/cron.php
    duration=$SECONDS
    if [ $duration -gt $alertduration ]
    then
        /bin/date +"%H:%M:%S nextcloud cron Duration = $(($duration / 60)) minutes and $(($duration % 60)) seconds elapsed. timeout=$timeout ; alertduration=$alertduration"
    fi
else
    /bin/date +"%H:%M:%S cron Duplicate"
fi
Naturally, you have to change the cron entry to call this script.
The question I still have: is it safe to kill this cron.php when it lasts for more than, say, 4 hours?
#!/bin/bash
if [ "$(/usr/bin/pgrep -fc "/var/www/nextcloud/cron.php")" -lt 1 ]
then
    alertduration=$((5*60))
    timeout=$((20*60*60))
    SECONDS=0
    /usr/bin/timeout -v $timeout /usr/bin/php -f /var/www/nextcloud/cron.php
    #/usr/bin/php -f /var/www/nextcloud/cron.php
    duration=$SECONDS
    if [ $duration -gt $alertduration ]
    then
        /bin/date +"%H:%M:%S nextcloud cron Duration = $(($duration / 60)) minutes and $(($duration % 60)) seconds elapsed. timeout=$timeout ; alertduration=$alertduration"
    fi
#else
#    /bin/date +"%H:%M:%S cron Duplicate"
fi
Which times out after 20 hours.
Since my post, I have seen long runs of about 5 minutes and at most 40 minutes.
So the timeout did not occur.
The “long run” happens at irregular times.
But in May the “issue” drifted by 1 hour every 4 days… strange.
So far I only have 2 days of logs to look at, but it has not run for more than 18 seconds at a time.
I love your use of the timeout command; I’m going to look into that. However, right now my primary concern is not killing the process but making sure another one does not start up. Since it is doing DB work, I want it to finish.
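Since the concern here is preventing a second instance rather than killing a long run, flock(1) may be a tidier guard than the pgrep check used in the scripts above: a second invocation simply refuses to start while the lock is held, and the lock disappears with the process even after a crash. A minimal sketch (the lock file path is arbitrary; replace the sleep placeholder with the real cron.php call):

```shell
#!/bin/bash
# Sketch: serialize cron runs with flock instead of pgrep.
run_cron() {
    (
        # -n: give up immediately if another instance holds the lock
        flock -n 9 || { echo "duplicate"; exit 1; }
        sleep 1   # placeholder for: /usr/bin/php -f /var/www/nextcloud/cron.php
        echo "done"
    ) 9>/tmp/nextcloud-cron.lock
}
```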