Is it possible to prefer neither the server version nor the local version, but simply the LATEST file version?
Use case:
CGI production. A shot is a sequence of JPGs (70-600 files per shot). Every shot goes through iterations, constantly overwriting the older version with the new one. The sequences are linked into the editing software, so there is no need to store every single version; it would just be a lot of garbage and a hassle to relink in the editor every time.
I don’t understand why either the server’s version or the local version should be more authoritative than the other; it doesn’t make any sense to me.
If many people are working on the same shot, it shouldn’t be a problem to keep the LATEST version. If there’s an issue with overwriting, it is a management issue and shouldn’t be resolved by the sync software.
Right now, when I’m rewriting sequences, Nextcloud asks me about these conflicts, which makes it unusable. Dropbox resolves this correctly, with the latest version being more authoritative.
Can I tweak something to AUTOMATICALLY prefer the latest version of the files?
The latest copy is the one that gets synced. However, it sounds like you have people editing the same file and then trying to sync it, which creates a conflict.
Say you have a picture and two people both open it in GIMP. They both save it within 10 seconds of each other. The one who saved their in-editor copy last would have the latest date and would therefore overwrite (wipe out) the version the other person just saved. Without conflict detection, one person’s work would simply be lost, the same as would happen on any kind of network file share.
Thanks for the clarification, Karl. I was the only user editing files so far, yet something still created lots of conflicts. Good to hear the expected behavior is implemented; I’ll try to figure out “what not to do” in my scenario, then.
That’s not what you described above (“many people are working on the same shot”), so I’m not sure what to tell you. A conflict occurs when there are simultaneous edits of the same file.
Yes, you’re right. I’m searching for a scalable workflow for many people, but right now I’m testing it on my own. I hope I’m not stuck with Dropbox (which has worked predictably and correctly over the years, but has its downsides).
I’m saving a sequence of files (e.g. 150-800 individual files), and while working I need to overwrite them several times, NOT AT THE SAME TIME. It is an iterative process: write cache → look → adjust parameters → rewrite the same files → repeat until satisfied.
Currently every rewrite creates conflicts that have to be resolved by the user one by one, making Nextcloud unusable to the point that I had to move the current project back to Dropbox, which handles this process perfectly.
I might be doing something unexpected for Nextcloud; I’d love to figure it out.
The setup I’m trying to build: every individual project has its own S3 bucket, mounted inside the folder structure created in Nextcloud, which is a very exciting and unique possibility. A rough sketch of how I mount each bucket is below.
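For reference, this is roughly how each project bucket gets mounted through the External Storage app’s `occ` command. It’s only a sketch: the mount point, bucket name, region, and credentials below are placeholders for my real values.

```
# Sketch: one external storage mount per project (placeholders throughout)
sudo -u www-data php occ files_external:create /Projects/ProjectA \
    amazons3 amazons3::accesskey \
    -c bucket=project-a-shots \
    -c region=eu-central-1 \
    -c use_ssl=true \
    -c key=ACCESS_KEY \
    -c secret=SECRET_KEY
```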
Is it possible to achieve the expected behaviour (iterative rewriting of files without creating conflicts)? Maybe it’s a matter of configuration? What can I try?
Okay, so what I can tell you is that saving and overwriting a file should not create a conflict; it should just update the file. So there is something odd going on with your particular setup.
I’m not sure where to begin looking for it. Maybe something with the file system, or with the external storage mount if you’re using one. Maybe you could start by providing some details about your setup, filling out the support post template, etc.
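For example, output along these lines would help narrow it down (exact flags may vary a bit between server and desktop client versions):

```
# Server side: Nextcloud version and how the S3 external storage is mounted
sudo -u www-data php occ status
sudo -u www-data php occ files_external:list

# Desktop client: run with debug logging while you overwrite one sequence,
# then attach the relevant part of the log
nextcloud --logdebug --logfile /tmp/nextcloud-sync.log
```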