Intro
After the last update from 28.0.6 to 29.0.1, I noticed that my logfile had grown quite large: more than 50 MB, which is an odd size for my instance.
I am the only user and rotate the log yearly; so far, it has never been larger than 2.5 MB.
(Trying to) analyze the issue
Using awk, I managed to identify 6 (only six!) lines that made up almost the entire 50 MB. Three lines were more than 10 MB long (line length 108739898) and another three were around 5444091 characters long.
This prints the length of the longest lines in your log, and it also indicates the line number:
awk '{print length " - line: " NR}' nextcloud.log | sort -rn | head
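A quick way to see what it does is to feed it a toy input instead of the logfile:

```shell
# Toy example: print each line's length and line number, longest first
printf 'aa\nbbbb\nc\n' | awk '{print length " - line: " NR}' | sort -rn | head
# 4 - line: 2
# 2 - line: 1
# 1 - line: 3
```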
Timeline and possible cause (Please chime in)
Extract of updater.log
2024-05-31 09:24:23 updater cli is executed
2024-05-31 09:27:31 update of code successful.
2024-05-31 09:27:31 updater run in non-interactive mode - occ upgrade is started
2024-05-31 09:28:01 updater run in non-interactive mode - disabling maintenance mode
2024-05-31 09:28:07 maintenance mode is disabled - return code: 0
So far, so good. We’ve now established the exact times of the upgrade and can correlate with the nextcloud.log.
The first very long line in the logfile is logged at 2024-05-31 09:27:47 - so it happens during the upgrade process. Obviously, it wouldn’t make sense to post even a single line here (larger than 10 MB, remember?). I’ll go with the first 220 characters of each line.
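To produce the excerpts, substr() does the job; the NR values below are placeholders for the actual line numbers reported by the length check above:

```shell
# Print only the first 220 characters of selected lines
# (replace the NR values with your actual line numbers)
awk 'NR==1 || NR==3 {print substr($0, 1, 220)}' nextcloud.log
```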
Extract of nextcloud.log (only the very long lines)
{"reqId":"OdkfrSYEW9oLOZbz1rcO","level":0,"time":"2024-05-31T09:27:47+00:00","remoteAddr":"","user":"--","app":"no app in context","method":"","url":"--","message":"dirty table reads: SELECT `path` FROM `*PREFIX*filecache`
{"reqId":"OdkfrSYEW9oLOZbz1rcO","level":0,"time":"2024-05-31T09:27:47+00:00","remoteAddr":"","user":"--","app":"no app in context","method":"","url":"--","message":"dirty table reads: SELECT `fileid` FROM `*PREFIX*filecache
{"reqId":"OdkfrSYEW9oLOZbz1rcO","level":0,"time":"2024-05-31T09:27:47+00:00","remoteAddr":"","user":"--","app":"no app in context","method":"","url":"--","message":"dirty table reads: SELECT `path` FROM `*PREFIX*filecache`
{"reqId":"OdkfrSYEW9oLOZbz1rcO","level":0,"time":"2024-05-31T09:27:47+00:00","remoteAddr":"","user":"--","app":"no app in context","method":"","url":"--","message":"dirty table reads: SELECT `filecache`.`fileid`, `storage`,
{"reqId":"OdkfrSYEW9oLOZbz1rcO","level":0,"time":"2024-05-31T09:27:47+00:00","remoteAddr":"","user":"--","app":"no app in context","method":"","url":"--","message":"dirty table reads: SELECT `filecache`.`fileid`, `storage`,
{"reqId":"OdkfrSYEW9oLOZbz1rcO","level":0,"time":"2024-05-31T09:27:47+00:00","remoteAddr":"","user":"--","app":"no app in context","method":"","url":"--","message":"dirty table reads: SELECT `filecache`.`fileid`, `storage`,
- All six lines have the same reqId.
- All six lines are about “dirty table reads”.
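Both observations are easy to double-check by counting matching lines (the reqId is the one from my log; yours will differ):

```shell
# Count log lines carrying this reqId, and lines mentioning dirty table reads
grep -c 'OdkfrSYEW9oLOZbz1rcO' nextcloud.log
grep -c 'dirty table reads' nextcloud.log
```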
Reproducible
I removed the overly long lines from my logfile. Afterwards, the logfile was down to 2.3 MB.
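For reference, one way to drop all oversized lines in a single pass; the 1,000,000-character cutoff and the output filename are my own choices, not anything official:

```shell
# Keep only lines shorter than 1,000,000 characters
awk 'length($0) < 1000000' nextcloud.log > nextcloud.log.clean
```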
I just updated from 29.0.1 to 29.0.2, and my logfile is now a whopping 70 MB. I checked, and again it contained the same kind of extremely long lines.
Conclusion
None so far - this is a work in progress. I’ll update my findings, hoping that someone can shed some light on this. Perhaps others may find this useful.