Can't get my Nextcloud desktop client to sync again after updating from 12.0.0 to 12.0.1 / login to NC very slow after update

Nextcloud 12.0.1
running on a Raspberry Pi 3 with Raspbian 8 (Lite)
PHP 7.0.19
database: MariaDB 10.1.23

Nextcloud desktop client 2.3.1 (build 8) for Windows, running on Windows 7 Pro SP1 (64-bit)

It could be because NC reacts slowly at login now; maybe the client runs into a timeout. Most definitely it does, but why?

Who has an idea? What information do I need to add?

Umm… I removed the account from the Windows client (it was using the external URL to log in) and added a new account using the local IP, and now it works brilliantly again. Solved, I'd say, at least for now.

Same problem here.

Upgraded from 12.0.0 to 12.0.1 today.
Login/authentication is slow (about 30 seconds to log in) (not solved).
The Nextcloud Outlook plugin is also very slow (not solved).

The iOS app is a little slower as well.

The desktop client was also unable to connect after the upgrade.
I had to remove and re-add the account (solved).

Please assist. Thanks!

Regards!

Should I re-open the case, @Stephan_Stoke?

Yes please, thanks Jimmy. :grin:

Done… since you seem to have the same problems as me. I mean, even logging in to the web interface seems to take forever now; you're right.
I don't use the Outlook plugin on a regular basis (it's still buggy for me), so I can't say anything about it.

Great!

“Glad” I’m not the only one with this problem. Do you use Local or LDAP users?
We use LDAP.

I have spoken to a developer and have tested the latest dev version, which is great. Not buggy :grin:

However, “the plugin” is also slow to authenticate when files are being uploaded.

So something is wrong during authentication, and we're getting timeouts or something like that.
Hopefully someone can help us out here.

I am using local users only.

Could you check your log under your admin account? I'm getting dozens of login requests… I don't know exactly where from, but it could be the Windows client. Sometimes even the brute-force app tells me there was a brute-force attempt, from my own global IP.

With the local admin account I am no longer able to log in; it tells me the password is wrong :open_mouth:
When logging in with an LDAP admin account I can log in, and I get the following logs:

Debug cron Finished OC\Authentication\Token\DefaultTokenCleanupJob job with ID 14 in 0 seconds 2017-08-08T11:04:02+0200
Debug cron Invalidating remembered session tokens older than 2017-07-24T09:04:02+00:00 2017-08-08T11:04:02+0200
Debug cron Invalidating session tokens older than 2017-08-07T09:04:02+00:00 2017-08-08T11:04:02+0200
Debug cron Run OC\Authentication\Token\DefaultTokenCleanupJob job with ID 14 2017-08-08T11:04:02+0200
Debug cron Finished OCA\Files_Sharing\DeleteOrphanedSharesJob job with ID 11 in 0 seconds 2017-08-08T11:03:40+0200
Debug DeleteOrphanedSharesJob 0 orphaned share(s) deleted 2017-08-08T11:03:40+0200
Debug cron Run OCA\Files_Sharing\DeleteOrphanedSharesJob job with ID 11 2017-08-08T11:03:40+0200
Warning no app in context Missing expected parameters in change user hook 2017-08-08T11:03:37+0200
Warning no app in context Missing expected parameters in change user hook 2017-08-08T11:03:37+0200
Debug user_ldap No DN found for appdata_ochjk1mpq3u3 on “my domain controller” 2017-08-08T11:03:34+0200
Info admin_audit Login successful: “BCFF4703-097F-4BF1-B92A-5C539D3959F1”

Maybe: DELETE FROM oc_bruteforce_attempts;
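If throttling entries really are the cause, here is a sketch of how to inspect the table before wiping it (the table name assumes the default oc_ prefix; take a backup before deleting anything):

```sql
-- Count throttling entries per IP to see where they come from
SELECT ip, COUNT(*) AS attempts
FROM oc_bruteforce_attempts
GROUP BY ip;

-- Clearing the table resets the login throttling
DELETE FROM oc_bruteforce_attempts;
```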


I can't tell as of today, since I noticed that only the first login of the day (per user and browser) takes long; all the others are quite fast again.
I have disabled the brute-force app for the moment…

Hi,

After the update yesterday I also have an extremely slow web GUI for all users. The Files app especially is slow; other apps sometimes react quite quickly.
I already checked the bruteforce_attempts database table, as this was my first guess, but the table is empty.

I saw an error message pop up at the top middle, saying that the time zone “Europe/Berlin” is not valid and that UTC is used instead. However, “Europe/Berlin” is mentioned in the admin docs as a valid value.
Furthermore, I have error messages in the log from the external storage app saying that a resource can't be connected (which is true, but I removed the external storage before I shut down the external server, and I even deactivated the files_external app).
Maybe you see these problems as well, and we can find something we all have in common that is causing our slow web GUIs.

In my journal I also see these messages:

Aug 08 11:30:07 nextcloud php-fpm[755]: [WARNING] [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 18 total children
Aug 08 11:30:08 nextcloud php-fpm[755]: [WARNING] [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 0 idle, and 19 total children
Aug 08 11:30:09 nextcloud php-fpm[755]: [WARNING] [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 20 total children
Aug 08 11:30:10 nextcloud php-fpm[755]: [WARNING] [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 21 total children
Aug 08 11:30:11 nextcloud php-fpm[755]: [WARNING] [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 22 total children
Aug 08 11:30:12 nextcloud php-fpm[755]: [WARNING] [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 23 total children
Aug 08 11:30:13 nextcloud php-fpm[755]: [WARNING] [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 24 total children
Aug 08 11:30:14 nextcloud php-fpm[755]: [WARNING] [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 25 total children

Before the update I had no problems at all with my settings for pm.start_servers and pm.min/max_spare_servers.

pm.start_servers=8
pm.min_spare_servers=1
pm.max_spare_servers=8

From my understanding these values are already pretty high, so I'm wondering why they are no longer enough and should be increased further.
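For comparison, here is a sketch of the two usual ways to quiet those warnings in the pool file (the path and all values are examples, not recommendations, and must fit your RAM; note that pm.min_spare_servers ≤ pm.start_servers ≤ pm.max_spare_servers ≤ pm.max_children must hold):

```ini
; /etc/php/7.0/fpm/pool.d/www.conf (path is an assumption)

; Option A: stay dynamic, but give the pool more spare-server headroom
pm = dynamic
pm.max_children = 24
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 12

; Option B: a static pool never spawns on demand, so it never logs
; "seems busy" — at the cost of fixed memory use
; pm = static
; pm.max_children = 16
```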

@MorrisJobke , @nickvergessen can you help here and maybe have a look?

Hi,

JimmyKater had the brute-force app installed, so…

I had no brute-force app installed.
I just installed the brute-force app and whitelisted my proxy and external IPs.
After that I disabled the brute-force app again.

I have also verified my LDAP settings to make sure they were correct.

I also changed my password policy to make sure there was no conflict.

Now I have fast logins again (for now).

Wait until a fresh login with a new browser, or after a reboot of your desktop (or simply the next working day)… then we'll see.

Whatever I have done so far didn't solve the problem. It's still slow. :frowning:

Hi,

Oh man, I was sweating like crazy. For hours my server was unusable, but I've fixed it for myself now. I connected to the database and removed the external shares:
[mysql] DELETE FROM oc_share_external;

I checked beforehand (with SELECT * FROM oc_share_external;) and only had the two test shares from federation sharing in that table. After I deleted them I checked the other tables that have anything to do with external storage and federation, but they contained no other data relating to the two deleted entries.

Now my server is super fast again! :slight_smile: :sunny:


@Schmu
That's cool, so I tried to remove external sharing from my NC. I even had some external storage (Google) connected.
Maybe that was the solution, I don't know. For now it's pretty fast again. I'm going to keep an eye on it and tell you later.

After disabling external sharing/storage my instance runs like hell again… even on the next morning :muscle:
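For anyone who wants to try the same without clicking through the web UI, the external storage app can also be disabled from the command line with occ (the web-server user and install path are assumptions for a default setup; federated sharing itself is toggled under the admin sharing settings, not via an app):

```shell
# Run as the web server user; path assumes a default install
sudo -u www-data php /var/www/nextcloud/occ app:disable files_external

# Re-enable later with:
# sudo -u www-data php /var/www/nextcloud/occ app:enable files_external
```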

In a different thread with a similar problem, the issue was reported to the bug tracker:

Please follow the discussion there.