Desktop client 3.4.0 destroys local timestamps and keeps uploading data to server

Removed all history (files and profiles) of the existing NC on the PC and reinstalled the latest version. Now it works OK!


Can somebody help me recover to a clean state so I can safely upgrade my Nextcloud + MariaDB?

I did use parts of the mtime fix scripts supplied here. One problem is that my old MariaDB version has no FROM_BASE64 function.

So it looks like I do have:

  • folders which have more files locally than remotely
  • mtime errors all over the place (3,000+ files)

What I did manually now:

On the local (Windows) machine, in Ubuntu (WSL2):

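# pick regular files larger than 3 bytes with an mtime on/before 1970-01-02
# (@86400 = one day after the epoch) and copy the mtime of testi.md onto them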
find . -type f ! -newermt "@86400" -size +3c -exec touch -c -r ./testi.md {} \;

I also saved the file list, just in case. This takes all files larger than 3 bytes and gives them the date of the reference file testi.md (from August 2020).

On the server I did the same but with current time.
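
Roughly like this, as a sketch; the data directory path is an assumption, adjust it to your installation (touch without -r or -d sets the current time):

# assumption: the data directory lives at /var/www/nextcloud/data
find /var/www/nextcloud/data -type f ! -newermt "@86400" -size +3c -exec touch -c {} \;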

I then ran occ files:scan and, more importantly, occ groupfolders:scan; it looks like you cannot call the group-folders scan with just one file path, so it goes through everything.
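
For reference, the rescan invocations look roughly like this; a sketch, run as the web server user (groupfolders:scan expects a numeric folder id, which groupfolders:list prints):

sudo -u www-data php occ files:scan --all
sudo -u www-data php occ groupfolders:list             # prints the numeric folder ids
sudo -u www-data php occ groupfolders:scan <folder_id>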

I still get some invalid modification time errors on the latest sync, and also, more worryingly, at least 350+ times:

Server replied "412 Precondition Failed" to "PUT https://SERVER/remote.php/dav/files/USER/PATH/file.ext" (An If-Match header was specified, but none of the specified ETags matched.)

What is that error, and how can I fix it?

P.S.: Due to some bad architecture decisions I am still stuck at v20.0.6, but before upgrading by migrating to an official Docker image I’d like to get this sync (only 3 Windows clients) to execute safely…

For some reason Nextcloud GmbH decided not to help users with a real fix but just offered some half-solutions, as you mentioned above… additionally they added some dead ends to the sync process, like the “fix” that stops replicating files with an invalid timestamp, without offering a good way to repair such files…

In the end, if you don’t have a good backup to recover a valid state, you are limited to manually fixing all the issues one by one before you continue… 350 files is a doable amount… nothing one appreciates, but it can be done… as far as I remember, files with a valid timestamp on the server take preference over files with an invalid timestamp stored on the client - you could touch the files, upload them into the server file system and run occ files:scan… this should fix the issue at the cost of finally losing the mtime…
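
Per file, that recipe boils down to something like this sketch (the date, USER and PATH are placeholders; --path limits the rescan to one subtree):

touch -d "2020-08-01 12:00" "/var/www/nextcloud/data/USER/files/PATH/file.ext"
sudo -u www-data php occ files:scan --path="USER/files/PATH"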


I’m really leery of this stuff now: I installed a fresh NC 23, and using the 3.4.1 client I don’t see files get synced reliably. I just don’t have anything specific to point at except “not all files get synced”, which is not easy to pinpoint. However, this never happened with earlier versions, so there’s that. I’m thrilled about not having rolled out NC in the company, at least; that would have been bad with this level of reliability - potentially a bunch of users complaining about lost files. What a nightmare.


Update: So far so good. I synced one of our 3 Windows Nextcloud clients with my idea above, namely:

  • manual find on the server for all mtime-error files
  • touch with the current time for all these files
  • occ files:scan and groupfolders:scan

After that, it seems to have synced. I don’t know whether the find & mtime fix on the Windows machine helped or not. Of course I lost the last modification time for all these 3,000+ files, but if something was missing or newer on the client (and was overwritten with the server version), it would show up in the activity log, so I would be able to restore it manually from backups.

One directory completely refused to sync. I deleted it and even had to copy it back from backup and change its name for the first sync. Afterwards I renamed it back to the original name and it worked.
My QNAP documentation Word file, which I had open the whole time, also had an irrecoverable conflict, so I deleted all copies, slightly renamed the file, and re-uploaded it.

Now I am moving on to the second client …

BTW: 3.4.2 arrived yesterday: Releases · nextcloud/desktop · GitHub and also 23.0.1 with these commits:

23.0.1 is not offered via the updater yet, but you can find it here: https://download.nextcloud.com/server/releases/

This one is interesting: Prevent writing invalid mtime · nextcloud/server@36bacaa · GitHub

+1 same problem, same solution.

Hi! How did you solve it?
I’m still stuck with all the errors.
Some of the users are still using older versions of the client - can this be a problem?
Thanks!
Best regards!

I would also love to get some help on this!

The Nextcloud-provided scripts are terrible and do not work.

I have a large installation with 33 staff and well over 500k files in group folders etc.… The 3.4.0 client trashed this system. I have managed to run a command to find all files with an invalid date and then touch them; however, I am not sure this has done the job!

Terrible effort from Nextcloud for such a massive issue.

Downgrading to 3.3.6 solved the problem for me

Hi!

Please find attached my debug-log


Please do not recommend unreleased versions.

Please see

All my users are reporting sync errors now.
Server: 22.2.3
Clients: 3.4.2 (all Win 10)
Error: invalid modified time reported by server

How can this issue persist after 2 months?


Once the invalid mtime was applied to your files and synced to the server, you need to correct it first. If you didn’t perform any corrective action, like a restore or a manual mtime update with touch as described by different users, newer clients (starting from 3.4.1?) will refuse to sync files with an invalid mtime from the server…

In other words: the issue would not happen anymore with a newer client and server (each part fixing its own side by declining invalid mtimes; the client will prefer files with a valid mtime on the server side), but existing damaged files must be recovered using a manual process; it is not sufficient to update the components.

Mmmmmh, I am not sure if I got this correctly, but:

  1. The changelog of 3.4.2 does not mention that this bug was addressed
  2. Only the changelog of 23.0.1 mentions a bugfix regarding changed mtimes
  3. 23.0.1 is not released yet, so 22.2.3 is still vulnerable

IMHO, this bug depends on virtual files, but I cannot prove it, and the communication of the devs is …

Regards,
A.

The fix was in 3.4.1.

Wrong - users on Linux and macOS were affected as well (e.g. without VFS support).

Somewhat true - as the problem is located in the client, each server version is safe as long as your client doesn’t break data; newer server versions just add additional protection.


I spent more time on this and created a SQL query that collects all files with an invalid (1970-01-01) mtime from the DB and checks for files_versions entries of these files in the DB, with their change time and original mtime.
One can use the output to correct the mtime easily (if your DB still has file version entries).

Warning:

  • At the moment I don’t know if it works when multiple versions of the damaged file exist (I don’t have any in my playground); a simple sort/limit should choose a good version, but this needs verification!

SELECT f.fileid
    ,fv.fileid AS version_fileid
    ,f.fspath
    ,fv.fspath AS version_fspath
    ,fv.versionpath
    ,f.size
    ,fv.versionsize
    ,f.mtime
    ,f.filetime
    ,fv.change_time
    ,fv.original_mtime

# f: all files with a broken (epoch) mtime
FROM (SELECT storage
    ,fileid
    ,path
    ,TRIM(LEADING 'files/' FROM path) AS fspath
    ,size
    ,mtime
    ,FROM_UNIXTIME(mtime) AS filetime
    FROM oc_filecache
    WHERE path LIKE 'files/%' AND mtime=0
    #AND fileid=308330
    #LIMIT 10
) f

# fv: matching entries below files_versions/; the part of the version name
# after '.v' is the original mtime the file had when the version was created
JOIN (SELECT storage
    ,fileid
    ,SUBSTRING_INDEX(TRIM(LEADING 'files_versions/' FROM path),'.v',1) AS fspath
    ,path AS versionpath
    ,size AS versionsize
    ,mtime
    ,FROM_UNIXTIME(mtime) AS change_time
    ,SUBSTRING_INDEX(name,'.v',-1) AS original_mtime
    ,FROM_UNIXTIME(SUBSTRING_INDEX(name,'.v',-1)) AS original_time
    FROM oc_filecache
    WHERE path LIKE 'files_versions/%.v%') fv
# also join on storage so files of different users with the same relative
# path do not get mixed up
ON f.storage=fv.storage AND f.fspath=fv.fspath

Uncomment the lines

#AND fileid=308330
#LIMIT 10

to test the query on your known broken files and to limit it to a few results as long as you are still searching for a solution.
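
To actually repair the files, one can feed the query output back into touch. A minimal sketch, assuming the fspath and original_mtime columns were exported as tab-separated lines for a single user, and that the files live under the standard data directory (DATADIR and the export file name are placeholders):

#!/bin/bash

DATADIR=/var/www/nextcloud/data/USER/files    # placeholder - adjust user and path

# each input line: <fspath><TAB><original_mtime in Unix seconds>
while IFS=$'\t' read -r fspath original_mtime; do
    touch -c -d "@$original_mtime" "$DATADIR/$fspath"
done < broken_files.tsv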

P.S. Don’t forget to perform occ files:scan --all if you only touch the files on disk.

Does no one but me have the problem of files and folders 84 years in the future (end of Unix time on 32-bit systems)?
From Wikipedia

For example, changing time_t to an unsigned 32-bit integer, which would extend the range to 2106 (specifically, 06:28:15 UTC on Sunday, 7 February 2106), would adversely affect programs that store, retrieve, or manipulate dates prior to 1970, as such dates are represented by negative numbers

I think a few users reported times in the future, but the root cause must be the same (maybe 0 wraps around to the maximum 32-bit value due to an integer underflow during time zone calculations). If you are lucky and your server keeps file versions of your files, you can start fixing them using the SQL query from above.
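
To locate such future-dated files on disk, a variant of the earlier find commands should work; a sketch, with the data directory path as an assumption:

# files with an mtime later than tomorrow cannot be legitimate
find /var/www/nextcloud/data -type f -newermt "$(date -d tomorrow +%F)" > future_files.txt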

Hi. That’s how I managed to solve this.

First of all, you need to find all the files with a destroyed timestamp:

find yournextcloudfilelocation/ -type f ! -newermt 1970-01-02 > damaged.txt

Then use this script to touch the files and fix the date. You can set whatever date you want; just change '15 jan':

#!/bin/bash

# list of damaged files produced by the find command above
file="damaged.txt"

# IFS= preserves leading/trailing whitespace in file names
while IFS= read -r line; do
    touch -d '15 jan' "$line"
done < "$file"

Then, if you use Docker:

sudo docker exec --user www-data <containerID> php occ files:scan --all

If you don’t use Docker, then:

occ files:scan --all

After that, you need to sync all your clients from scratch.

4 Likes