Nextcloud sync problems


**Operating system:** Ubuntu 18.04
**Web server:** Apache 2.4.29
**Database:** PostgreSQL 9.4
**PHP version:** 7.2.15
**Nextcloud version:** 16.0.4
**Desktop client:** 2.5.0+
**Client OS:** Windows 10 / Windows 7
**SMB server:** Windows Server 2016

All files have to be stored on the Windows file server.
To prevent data from being saved on the Linux filesystem, the user home is not writable.
The external storage is connected in the user context, with credentials saved in the database.

The different file shares are synced to the users with the desktop client.
If user A changes a file in a subfolder of the share, the file is changed on the server.
BUT users B and C will not sync the changed file, because from their point of view nothing has changed.
If user B or C then makes some change in the same folder, the client will recognize user A's changes and sync them.
If user B or C edits the file that user A changed, user A's changes are lost, and user A also gets no sync of the other user's changes!

If this command is run for each user, then they will sync properly:

sudo -u www-data php -f /var/www/nextcloud/occ files:scan --path="/username/files/FILESHARENAME/SUBFOLDER FILESHARE" -v

The problem is, this command takes extremely long if the file share is huge, and even longer if it runs in parallel for all users.
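To at least avoid the parallel load, the per-user scans could be run one after another. This is only a sketch: the user names and FILESHARENAME are placeholders for your actual setup, not real values.

```shell
#!/bin/sh
# Sketch: run the per-user scan sequentially instead of in parallel.
# "userA userB userC" and FILESHARENAME are placeholders.
for u in userA userB userC; do
  sudo -u www-data php -f /var/www/nextcloud/occ \
    files:scan --path="/$u/files/FILESHARENAME" -v
done
```

This does not make a single scan any faster, but it keeps the scans from competing for I/O on the same share.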

The command

sudo -u www-data php -f /var/www/nextcloud/occ files:scan --all

also takes a long time, but only works if the home directory of the users is writable (and has to be empty on the Linux filesystem).
The option --unscanned does not work.

If the files:scan --all --unscanned job worked properly even when the home directory is not writable, that would be great.

Can someone help please?

kind regards

Chonta

I’m not sure, but maybe setting up Cron for the background jobs might help, if not done already. The link goes to the docs for version 15; the version 16 docs have broken images for me right now.

Edit: This seems to be your problem (with a solution to it?)

Hello Gee858eeG,

Thanks for your reply.

‘filesystem_check_changes’ => 1,
This is already set in the config.
Cron jobs are running, but this is not practicable.
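For anyone else reading along: that setting lives in config/config.php, and a value of 1 tells Nextcloud to check the external storage for changes when a file is accessed directly.

```php
// config/config.php (excerpt)
'filesystem_check_changes' => 1, // 0 = never check, 1 = check on direct access
```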
First, files:scan --all only works with an unnecessarily writable /data/username/files directory.
And then it also runs for each user on a share that has more than one TB of data.
If you have to run this for 10+ users and they all work with the shares and change data, you are doomed, because it will take too long.

It is expected that the client will inform the server about the changes of user A, and the server should automatically share this information with all other users that have access to that file.
Or at least the server should inform all other clients about user A's changes, and then the clients of users B and C can check whether they have access on the filesystem.

As long as you work only with the web browser and always hit F5 you have no problem, but this is far from real work processes, especially if you do not always have an internet connection but still need your data synced.

I understand your problem. I didn’t mean to add a new cron job, but just to configure Nextcloud to use cron instead of Ajax for its background jobs. If you follow the link above, it will be clearer. I’m not sure if that will detect changes from external storages, but it might be worth a shot.
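In case it helps: assuming the same installation path as in the commands above, the background job mode can also be switched from the command line instead of the admin page.

```shell
# Switch Nextcloud's background job mode from Ajax to Cron
sudo -u www-data php /var/www/nextcloud/occ background:cron
```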

Since the Nextcloud has been running, */15 * * * * php -f /var/www/nextcloud/cron.php
has been in the crontab of www-data.

*/15 * * * * php -f /var/www/nextcloud/occ files:scan --all

This is in there as well, but it only works under the circumstance that the user home is writable, and the time it needs increases exorbitantly with the amount of data to scan and the number of users.

The problem is, it worked some versions ago (Nextcloud 11), but since 12 there have been horrible quality problems with the main feature of syncing data from an external storage.
As long as there is only one user working with the data it is OK, but the moment you have two users using the same data, you cannot count on the data that the Nextcloud server provides to your sync client.
So either the server itself is broken, or the client software is, or, worse, both.

If I understand correctly, you try to use Nextcloud without local file storage on the server. That’s why you use only external storages.
I think that Nextcloud doesn’t really fit your requirements. I don’t think that external storages will ever be implemented in a way that the external storage pushes information about changes in its file system to the server. It will probably always be a pull action from the server, because otherwise the external storage (e.g. a Samba share) would have to “know” that it needs to push the new information to an external service (the Nextcloud server). That would again break the idea of just adding external storages without touching the host of the external storage. That’s why you have to run occ files:scan actively on the Nextcloud server.
So all in all, I think your wish is not possible to fulfil. The best way would probably be to buy big HDDs to centralize the data on your Nextcloud server and use it “normally”. That way you would avoid external storages for files that change with high frequency.

I think that nextcloud doesn’t really fit your requirements.

It fits, apart from these bugs.

I don’t think that external storages will ever be implemented in a way that the external storage pushes information about changes in its file system to the server.

Using it the way I described would make it more usable for companies that need a cloud but have a historically grown Windows file server.
And the main problem is not a push of information from the external storage; it is a push notification for a change that was initiated through a Nextcloud client by another user.
The server should always know who changed a file and tell the clients; the clients of the other users should then try to access the file: if yes, good, sync; if not, OK, no access.

otherwise the external storage (e.g. a Samba share) would have to “know” that it needs to push the new information to an external service

For the sync of changes that are only made through Nextcloud clients and the web browser, the external storage has to know nothing, because everything has to be managed by the Nextcloud server itself.
Because the Nextcloud server has to know that user A was making a change and B was not, so B needs the information.

The agent idea is only optional, but it would make the system much more reliable.

That’s why you have to use occ files:scan actively on the nextcloud server.

OK, but then this should not run for each user; it should run only once in a file-share admin context, then keep a list of all changed files in the database, and then the clients get a notification for all files they need to sync.

So all in all, I think your wish is not possible to fulfil

Why not?
For example, the thing about user B getting user A's changes was working before…

The best way would probably be to buy big HDDs to centralize the data on your Nextcloud server and use it “normally”

That is for home users and startups, but not suitable for companies with a lot of different file servers and TBs of data that need to be shared in one way or another.
The big advantage of Nextcloud is to have external storages all over the world, each of a different kind. Only one login to access and share data; it is brilliant.
But it is sad that the sync of changes through your own desktop app does not work properly.

I’m not sure if I understand correctly… In the original post you wrote:

Can you elaborate how the external storage is accessed in nextcloud? How many external storages have you added and how are they assigned amongst users?

However regarding your following statement:

I think this is the key to your problem. A push must come from the external storage for the Nextcloud clients to track the changes. If the Nextcloud client did the push itself, then yes, you would not have a problem in your situation. But that would only hold as long as you make sure that the external storage is only altered through Nextcloud clients, which usually you can’t. Otherwise we would again have the same problem as before: changed files are not pushed to other clients. And I believe that’s why this is not implemented and a pull cron is recommended. It’s just not how external storage is meant to be designed. Your use case requires its own kind of data handling, and I don’t think this is going to happen soon. That’s why I recommended going the conventional way with local storage.

Yes, it is only changed through Nextcloud clients :slight_smile:
The other described situation is only optional.

Can you elaborate how the external storage is accessed in nextcloud? How many external storages have you added and how are they assigned amongst users?

I cannot follow :-).
The number of external shares does not matter.
The setting is “credentials saved in database”, so every user uses their own login data (AD) and then has only the access that is allowed on the NTFS level of the file server.
Everything works; only the communication between the Nextcloud server and the Nextcloud clients about all changes that were made works for each user alone.
So for a single-user Nextcloud there is no problem at all, but the moment you have two users it does not work.
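In case it helps with the diagnosis: assuming the same occ path used earlier in the thread, the configured external storage mounts, including the authentication backend, can be listed with occ.

```shell
# List all configured external storage mounts and their auth settings
sudo -u www-data php /var/www/nextcloud/occ files_external:list
```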

Hi All,

Did anyone find any solutions for this?

I am facing the same problems.

Me too. I’m using Rclone, and I’d like to upload files to the remote storage and have them appear in Nextcloud. The only option now seems to be scan --all, because even --unscanned doesn’t work.