Cron job won't work. v25.0.4. Really struggling here

Just “what f-ing works” or not? :slight_smile:

Finally, Ernolf suggested I make sure to add the same data to /etc/cron.allow.
That is, adding

*/5 * * * * www-data php -f /var/www/nextcloud/cron.php

… to /etc/crontab. I did so. To my naked eye, the ‘allow’ label suggested there might have been an access limitation keeping cron from running. It seems to me this didn’t play any crucial role in the road you chose to lead me down, though I’m sure it wasn’t a bad suggestion.
Would I be right in that assumption? And having added that information to cron.allow, should I now remove it? Is there a risk of some kind of crossfire, with identical commands in both the crontab and cron.allow?

We are venturing a little into the more exotic world of the Linux file system and PAM here, so sorry for a vague response:
Now that you have it defined in there, you probably will never have to change it. However, I would explain it like this:
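For reference, a minimal sketch of what /etc/cron.allow usually expects (assuming a standard Vixie/ISC cron, as shipped on Debian-family systems): it is an access-control list of usernames, one per line, not a place for full crontab entries.

```shell
# Sketch, assuming Vixie/ISC cron: /etc/cron.allow is a whitelist of
# usernames permitted to use crontab, one name per line.
# Granting www-data access would look like:
echo 'www-data' | sudo tee -a /etc/cron.allow

# Show the file to verify:
cat /etc/cron.allow
```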

*NIX distros are designed with security in mind. Most *NIX systems run in enterprises, and in enterprises you value stability, uptime, segregation and siloing for hardened security.

When using that method, you are modifying one central cron script. In scenarios other than most home projects, there will be several services running on the same *NIX system. So several different services, under different accounts (like www-data, nginx, haproxy, aegis, slapd etc.), each of which independently needs scheduled jobs.
When using the single central cron script method, you will have several cron entries in the same file, for different accounts (services). So any time you need to change one or add a new one, you edit that same file. The risk of altering the wrong crontab entry grows significantly the more entries there are in it.

On top of that, you usually would not allow admins to have access to everything; rather, you limit the scope they can work in. This is why Linux has per-user cron commands, and each user has their own separate crontab. Not only can you, as admin of the web server, not see the super-secret passwords of a monitoring account that connects with username and password in a cron job, but you can also delete that one account’s crontab entirely and have only messed up that one service.

So now that it works for you, and you are probably the only admin on the server, let it live there. But should you wish to take the secure and “right” approach, you should do it the other way. Did it make sense?
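As a minimal sketch of that per-user approach (the job line mirrors the one from earlier in the thread; `crontab -u` is the standard tool, and cron stores each user’s crontab separately):

```shell
# Install the Nextcloud job into www-data's own crontab instead of
# /etc/crontab. Note: per-user crontab lines have NO user field.
# Careful: this REPLACES www-data's existing crontab, if any.
echo '*/5 * * * * php -f /var/www/nextcloud/cron.php' | sudo crontab -u www-data -

# List www-data's crontab to verify the entry was installed:
sudo crontab -u www-data -l
```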

Yes, I got the point. Thanks for explaining.

Since you mention this, allow me to just ask, lastly. The purpose of my Nextcloud setup is to be a collab server, used professionally. Small environment: up to 15 users or so, 20 tops. But very seldom will the server serve more than 5 users logged in at the same time.

The server’s main purpose is large file transfers between customers and collab partners. I work in media, so we’ll be storing and transferring mainly binary files between 30 and 400 MB, sporadically reaching close to 1 GB.

So my priority will be transfer speed and transfer reliability, through the web interface. I’ll be recommending ppl to use Webdav for the largest transfers, but some people just … don’t listen :roll_eyes:
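For the WebDAV route, a hedged sketch of what a large upload can look like from the command line (the hostname, username, app password and file names are placeholders, not from this setup; `/remote.php/dav/files/<user>/` is Nextcloud’s standard WebDAV endpoint):

```shell
# Upload a large file straight to Nextcloud over WebDAV with curl.
# cloud.example.com, alice, the app password and footage.mov are
# placeholder values -- substitute your own.
curl -u 'alice:app-password' -T ./footage.mov \
  'https://cloud.example.com/remote.php/dav/files/alice/Projects/footage.mov'
```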

You say APCu handles files and the filesystem, and I’m assuming this at least partially includes file transfers as well? So I guess I will be interested in looking into a more efficient way of handling files, filesystem and transfers next, if APCu is “not the best”, as you say.

Unless I am misinterpreting APCu’s role regarding file transfers, would you care to suggest a good “next step up” from APCu for the above purposes on Nextcloud?

How much RAM does your server have?
https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html
https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/big_file_upload_configuration.html

It’s a Mac mini: quad-core 2.5 GHz i5 CPU, 16 GB RAM.
I need to run antivirus on it though, a RAM hog of course; got ClamAV running now.

You’re hinting that there’s not necessarily a ‘better’ cache handling option than APCu for these purposes? That APCu may well be able to deal with this, if the server has got juice enough to handle it and NC is tuned correctly?

No. The official Nextcloud docs mention Redis in the tuning guide. That could not be more direct: use Redis instead of APCu.
The links are for both tuning and for enabling large file uploads and downloads.
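As a sketch of what that can look like in practice (assuming Redis is already installed and the PHP redis extension is enabled; localhost and port 6379 are the usual defaults, so treat them as assumptions): one common layout from the tuning guide keeps APCu as the fast local cache and uses Redis for the distributed cache and for transactional file locking, set via `occ`:

```shell
# Run from the Nextcloud directory; assumes redis-server and the
# php-redis extension are installed. Keep APCu as the fast local cache:
sudo -u www-data php occ config:system:set memcache.local --value '\OC\Memcache\APCu'

# Use Redis for the distributed cache and for transactional file locking:
sudo -u www-data php occ config:system:set memcache.distributed --value '\OC\Memcache\Redis'
sudo -u www-data php occ config:system:set memcache.locking --value '\OC\Memcache\Redis'

# Point Nextcloud at the Redis server (local defaults, assumed here):
sudo -u www-data php occ config:system:set redis host --value 'localhost'
sudo -u www-data php occ config:system:set redis port --value '6379'
```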

Ah, understood.
Let me say a big thanks to you and Ernolf for taking your time to help out a newbie. Really appreciated! :champagne:
Perhaps others can learn from this thread as well.
Couldn’t have done this without you. :+1: :100: