I’ve now noticed a couple of times that my set-up’s panel and admin user passwords seem to reset when the container does an automatic update. When you try to log in as the ncp user with the normally configured password, it rejects you and you have no access.
It’s easy to cure: just go to -IP ADDRESS-:4443/activate and run through the activation again; then, using the newly generated random password, you can get back into the panel and reset both passwords (both show up as the new random ones in the panel).
Just an FYI bug report - not a difficult one to work around, but it could trip people up if they don’t know about it or how to get to the activation screen manually (or don’t think to try it again).
NextCloudPlus 0.54.1 running on a Pi3 under HypriotOS, using the container from Docker Hub.
What do you mean by “automatic update” in this case?
I was initially thinking of updates in the panel, but those are manually triggered.
Now that I stop to think about it a little more, I suspect the culprit is more likely the Watchtower Docker container (https://hub.docker.com/r/talmai/rpi-watchtower/), which I also have in my set-up. Looking at the hub, I see you uploaded a new image 3 days ago, so it’s probably that: Watchtower will have pulled the new image and recreated the container for that update.
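For context, my Watchtower container is started along these lines (a sketch from memory rather than my exact command; I may have extra flags set):

```shell
# Watchtower needs the Docker socket mounted so it can poll the
# registry for new images and recreate running containers from them.
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  talmai/rpi-watchtower
```

When it spots a new image, it stops the old container and starts a fresh one from the new image with the same volumes attached, which is why I suspect it here.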
Where are the admin user and panel passwords stored: in the container itself, or somewhere external like the ncdata volume, which would persist if the container were removed and recreated?
As I said, it’s not a major issue or inconvenience if you know how to get back to the activation screen, but it could trip people up if they’re not expecting it.
Yeah, it’s weird… the passwords are stored in a persistent volume.
Thanks for reporting.
That was what I was expecting, which is why I flagged it.
Once you reset the ncp user passwords everything works again, but until you do, you can’t access it via the app using other users’ accounts. Once things are reactivated, though, everything is still there (other user accounts, data, etc.) and it all just starts working normally again.
I’m now convinced it’s due to the version update and the freshly recreated container triggered by Watchtower, but it’s odd, given the volume is obviously persisting correctly, as I’m not having to set everything up from scratch.
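To double-check which side the passwords live on, you can list the mounts of the running container (a sketch; I’m assuming the container is named nextcloudpi, and the Go template just prints each mount’s host source and in-container destination):

```shell
# Print source -> destination for every mount of the container;
# the ncdata volume should show up in this list if it is attached.
docker inspect -f \
  '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ println }}{{ end }}' \
  nextcloudpi
```

If the password files sit under one of those mounted paths, they should survive a recreate, which makes the reset even stranger.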