Thank you very much! This worked very well for me.
Desktop client 3.4.0 destroys local time stamp and keeps uploading data to server
After a month I still saw some files with an epoch timestamp of 1970. Fortunately I had an old backup and was able to restore them all.
After that I ran the script:
(see nextcloud-gmbh/mtime_fixer_tool_kit: Tool kit to fix the mtime issue on the server state (github.com))
Usage: ./list_problematic_files_on_db.sh <mysql|pgsql> <db_host> <db_user> <db_pwd> <db_name>
… I got only one file which seems to be “bad” on the DB side:
Does anyone know how to remove it from the database?
Once you have changed the file on the file system, occ files:scan --all is expected to fix/update all the database records.
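The combination of "fix the file times, then rescan" can be sketched like this. Everything here is an illustrative assumption (the function name, the data directory, and the 1971 cutoff are mine, not part of any official tooling), so adapt it to your installation:

```shell
# Hypothetical helper: reset near-epoch modification times to "now".
fix_epoch_mtimes() {
    # $1: the Nextcloud data directory (e.g. /var/www/nextcloud/data)
    # Find regular files whose mtime lies before 1971 (i.e. near the
    # Unix epoch of 1970-01-01) and set their mtime to the current time.
    find "$1" -type f ! -newermt "1971-01-01" -exec touch {} +
}

# Example (paths are assumptions):
#   fix_epoch_mtimes /var/www/nextcloud/data
#   sudo -u www-data php occ files:scan --all
```

Note this is the "quick and dirty" variant: the files end up with today's date rather than their original one.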
Thanks - I already did that (occ files:scan --all).
The unsolvable_file.sh command from mtime_fixer_tool_kit still shows:
I’m on Nextcloud client 3.4.4 and I’m expecting the timestamp-related problems to have been fixed.
Please could someone confirm the expected behaviour:
- If a file is modified on a client, then when that same file reaches the server a little later, the timestamp (modified time) on the server is the time the file was modified on the client (rather than the later time when the modified file reached the server).
- Similarly, in the reverse direction, if any client observes a file is modified on the server, then when that same file reaches the client a little later, the timestamp on the client is the modification time of the file on the server (rather than when that file reached the client).
(I’m not concerned about the case where the client is in a different time zone, or even where there is a significant difference between the clocks on the server and the clients, but such issues should also be handled somehow.)
I still have this problem.
On Nextcloud client 3.6.0, I had a huge number of error messages along the lines of “Sync failed due to invalid modification time”. Those error messages went away after downgrading to 3.3.6.
This is a preventive measure implemented as a result of this issue. Once you hit the problem, you need to resolve it manually: update the file times and you are then safe to upgrade. Bad modification times will never “auto-correct”, so you have to perform a correction before using updated client versions.
It would be great if this problem with the file times could be resolved from within the desktop client, even if it were a manual step. Otherwise it’s honestly too much effort, and sticking with version 3.3.6 instead of updating unfortunately seems the more practical solution.
Once you have synced files with a wrong modification date to the server, there is no way to fix it on the client side. The only way I see is to remove everything and re-upload the data to the server - if your bandwidth and data volume let you do so…
How did you all fix this? I do not have access to backups but I do have access to the nextcloud server.
There are scripts above which modify the timestamp of the files to ‘today’ - this is the quick and dirty method to “fix” the issue without backups… In #93 I made an effort to document where the timestamps are stored in the DB (even after the file timestamps have been destroyed) - feel free to develop a working correction script…
Wow! The file creation date is preserved in the DB! (I have been able to confirm this using your query.) Why does Nextcloud not provide a fix then? They caused this! They actually exacerbated the issue by having the client refuse to sync the files, which breaks what the client is supposed to do (sync files). This is objectively much worse than just having the creation date wrong, because now the date is wrong and the files don’t sync.
- “there are scripts above which modify timestamp of the files to ‘today’”
Would it not make more sense to use the change date (as reported by the file system) as the creation date?
- The script in mtime_fixer_tool_kit (the mtime correction tool kit), when run, tells me it would update all files to ‘today’ despite the mtime being in the DB. Either it does not work, or I do not understand what it is supposed to do.
- Am I understanding this correctly that the real fix would look like this?
1. Get the mtime from the DB (which your code does). If there are multiple versions of a file, only use the oldest version’s mtime and disregard the other versions.
2. Fix the timestamps in the file system using the mtime from above.
3. Run occ files:scan.
Is that it?
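Those steps can be sketched in shell. Everything here is an assumption to be adapted: the NC_DATA variable, the function name, and in particular which DB table/column actually still holds a usable time on your installation (wwe’s query in #93 is the thing to start from, not the illustrative SELECT in the comment below):

```shell
# Hypothetical helper: read "relative/path<TAB>epoch_mtime" lines on
# stdin (e.g. the output of a DB query) and apply each timestamp to
# the matching file under $NC_DATA.
NC_DATA="${NC_DATA:-/var/www/nextcloud/data}"

restore_mtimes() {
    while IFS=$'\t' read -r path mtime; do
        # touch -d @<epoch> sets the modification time to that
        # exact Unix timestamp
        touch -d "@$mtime" "$NC_DATA/$path"
    done
}

# Feeding it from the DB might look like this (query is illustrative;
# use whatever your installation actually preserves):
#   mysql -N nextcloud -e "SELECT path, storage_mtime FROM oc_filecache" | restore_mtimes
# Afterwards, step 3:
#   sudo -u www-data php occ files:scan --all
```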
This is my understanding as well, but I would change “use the oldest version’s mtime” to “use the newest valid mtime” - multiple valid versions could exist… Most files don’t change often, so this should be no real issue…
What are the current steps a desktop client user can take to address this timestamp issue? On my Ubuntu laptop (20.04) the Nextcloud client is behaving well. However, for the past week I have had to pause all syncing to my Mac (10.14.6), since that client insists on re-uploading 300+ GB of files that should mostly be identical to what is on the server. This is a very serious issue that currently prevents me from syncing files between my primary Dell laptop running Ubuntu and my secondary, older MacBook Pro.
Although I am savvy enough to voluntarily adopt Ubuntu for my primary OS, I am not as fluent yet as I would like at running scripts and queries to manipulate my files. Without inclusive instructions on how to use them, script code snippets are unfortunately not accessible to me to resolve this on my own.
This superfluous re-syncing issue seems to keep coming back across different Nextcloud updates. I pay for Nextcloud storage on hosting.de and do not have the capacity to risk this much data by flailing around with scripts at my skill level. Furthermore, in my building the only choice for internet is Xfinity cable broadband, which has unreliable bandwidth, and I may incur fees above my monthly costs if I must remove and re-sync everything whenever this issue arises.
I’m sorry, I don’t think there is more to say on this topic. Multiple scripts have been published addressing the issue from different sides.
Depending on your specific situation you may want to fix the files on the client or on the server and then sync to the other device. I don’t think there is a good way to merge two different states (even if the data is identical) without a lot of manual interaction, which is not very convenient for large numbers of files.
At this time, my primary recourse is on the client side since Hosting.de maintains the Nextcloud server I use and they are doing their best. In 2020 I initially hosted my own Nextcloud until a hardware failure hosed that server.
Since this is a major persistent issue with no resolution on the horizon, I urge Nextcloud to put out a cohesive reference document that is easy to find with steps that can be taken on the client side. It’s not reasonable to demand individual users distill various code snippets from a forum thread for a major issue impacting so many.
Please consider whether what you are asking from users is inclusive. The goal is reliable software that most people can use, with documentation that communicates solutions efficiently enough that they can solve common issues themselves. At this rate of problems, with the only promising solutions scattered across years of an unresolved issue thread, I anticipate having to move my cloud backups to a different platform once my Guthaben (prepaid credit) is exhausted - after having arrived at Nextcloud when service deteriorated on Google Drive and Dropbox.
I wrote a script that takes wwe’s SQL query and puts the mtime from the DB back into the file system. All my files sync now and have their proper create date! https://github.com/HappyRogue658/fix-nextcloud-file-creation-date
@BenHastings very cool, this is what I would have expected as a solution.
btw: Cool you use PowerShell on Linux, I always forget it is available!
@wwe I couldn’t have done it without your SQL code, because I don’t know SQL. Thank you! I’m not a developer, just a mostly-Windows admin, so I thought I’d stick with what I know (PowerShell). I’m just glad that all my files sync again, with the added bonus that the file creation date is real.
@wwe @BenHastings I LOOOOOVE open source when it gets collaborative like this!