I’m running microk8s on a single node and had Nextcloud working great; then I rebooted the machine and it won’t come back up. I’m at a loss, as there are no real logs. When I describe the pod I see exit code 135 and that’s it.
Name:         nextcloud-app-668848b97b-xq4cv
Namespace:    nextcloud
Priority:     0
Node:         hv03/192.168.200.222
Start Time:   Tue, 07 Jun 2022 12:14:50 -0400
Labels:       app=nextcloud-app
              pod-template-hash=668848b97b
              service=nextcloud-service
Annotations:  cni.projectcalico.org/containerID: f5e9b9605c2ce86ce92cca498bf94db8377affe890fc3a9bdec4bf7480d6fd40
              cni.projectcalico.org/podIP: 10.1.53.133/32
              cni.projectcalico.org/podIPs: 10.1.53.133/32
Status:       Running
IP:           10.1.53.133
IPs:
  IP:           10.1.53.133
Controlled By:  ReplicaSet/nextcloud-app-668848b97b
Init Containers:
  changeowner:
    Container ID:  containerd://ec5b763381c083572b7941e7007711c41c746885aa5ccf8581ebcce45b59bf68
    Image:         busybox
    Image ID:      docker.io/library/busybox@sha256:3614ca5eacf0a3a1bcc361c939202a974b4902b9334ff36eb29ffe9011aaad83
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      chown -R 33:33 /srv/nextcloud
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 08 Jun 2022 09:37:01 -0400
      Finished:     Wed, 08 Jun 2022 09:37:01 -0400
    Ready:          True
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /srv/nextcloud/data from nextcloud-claim1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zpf75 (ro)
Containers:
  nextcloud:
    Container ID:   containerd://d853219cac46da757ffceb2ab6875e5619a686cfbb31d8eba6495effa9a85aa6
    Image:          nextcloud:apache
    Image ID:       docker.io/library/nextcloud@sha256:80bb4afee2a0b50524b427f71838538616c84b3630b3e11de357298fbcf49477
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    135
      Started:      Wed, 08 Jun 2022 09:52:57 -0400
      Finished:     Wed, 08 Jun 2022 09:52:58 -0400
    Ready:          False
    Restart Count:  257
    Environment Variables from:
      nextcloud-app-secret   Secret     Optional: false
      nextcloud-config-5-20  ConfigMap  Optional: false
    Environment:    <none>
    Mounts:
      /srv/nextcloud/data from nextcloud-claim1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zpf75 (ro)
      /var/www/html from nextcloud-claim0 (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  nextcloud-claim0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nextcloud-claim0
    ReadOnly:   false
  nextcloud-claim1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nextcloud-claim1
    ReadOnly:   false
  kube-api-access-zpf75:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
These are the only events in the logs:
Events:
  Type     Reason   Age                   From     Message
  ----     ------   ----                  ----     -------
  Warning  BackOff  4m37s (x69 over 19m)  kubelet  Back-off restarting failed container
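Since the container dies within a second of starting, I also tried pulling the logs of the previous (crashed) instance and the recent events, using the pod name from the describe output above:

```shell
# Logs from the crashed container instance rather than the current one:
kubectl -n nextcloud logs nextcloud-app-668848b97b-xq4cv --previous

# All recent events in the namespace, oldest first:
kubectl -n nextcloud get events --sort-by=.lastTimestamp
```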
Logs from the pod show nothing useful:
Conf remoteip disabled.
To activate the new configuration, you need to run:
service apache2 reload
Configuring Redis as session handler
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.53.133. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.53.133. Set the 'ServerName' directive globally to suppress this message
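One clue I found: exit codes above 128 conventionally mean the process was killed by a signal (signal number = exit code − 128), so 135 would be signal 7, which is SIGBUS on Linux. A quick sanity check in a shell:

```shell
# Exit codes > 128 usually encode "killed by signal (code - 128)".
code=135
sig=$((code - 128))
echo "killed by signal $sig"   # killed by signal 7
kill -l "$sig"                 # prints BUS (SIGBUS) on Linux
```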
So I’m at a loss. I found an article on huge pages from IBM; I had enabled huge pages to try out the Mayastor microk8s addon, but then reverted the setting and disabled it.
The exit code 135 can occur when huge pages are enabled in the system configuration (reference: https://github.com/docker-library/postgres/issues/451). The suggested fix is to set vm.nr_hugepages = 0 in /etc/sysctl.conf if it was set to non-zero, then reboot so the new configuration takes effect, or apply it immediately with:
sysctl vm.nr_hugepages=0
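In case it's useful to anyone else hitting this, here is how I checked whether any huge pages were still reserved after disabling the addon (the commands that modify the host are shown commented out):

```shell
# How many huge pages are currently reserved (0 means none):
cat /proc/sys/vm/nr_hugepages
grep '^HugePages_Total' /proc/meminfo

# If non-zero, reset at runtime (root required):
#   sudo sysctl vm.nr_hugepages=0
# and make it persistent across reboots by setting, in /etc/sysctl.conf:
#   vm.nr_hugepages = 0
```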
So it seems like there might be a bug with mayastor or microk8s?