Desktop client 3.4.0 destroys local timestamps and keeps uploading data to server

+1 same problem, same solution.

Hi! How did you solve it?
I'm still stuck with all the errors.
Some of the users are still using older versions of the client - can this be a problem?
Thanks!
Best regards!

I would also love to get some help on this!

The Nextcloud-provided scripts are terrible and do not work.

I have a large installation with 33 staff and well over 500k files in group folders etc. The 3.4.0 client trashed this system. I have managed to run a command to find all files with an invalid date and then touch them; however, I am not sure this has done the job!
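To double-check whether the touch did the job, one can count what is still broken; a minimal sketch (the data path below is just a placeholder):

# count files still carrying a pre-1970-01-02 mtime; 0 means the touch worked
find /path/to/nextcloud/data ! -newermt 1970-01-02 -type f | wc -l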

Terrible effort from Nextcloud for such a massive issue.

Downgrading to 3.3.6 solved the problem for me

Hi!

Please find attached my debug-log

(Attachment nextcloudclient-debug.zip is missing)

Please do not recommend unreleased versions.

Please see

All my users are reporting sync errors now.
Server: 22.2.3
Client: 3.4.2 (all Win 10)
Error: invalid modified time reported by server

How can this issue persist after 2 months?


Once the invalid mtime was applied to your files and synced to the server, you need to correct it first. If you didn't perform any corrective action (like a restore, or a manual mtime update with touch as described by different users), newer clients (starting from 3.4.1?) will refuse to sync files with an invalid mtime from the server…

In other words: the issue would not happen anymore with a newer client and server (each part fixes its own side by rejecting invalid mtimes; the client will prefer files with a valid mtime on the server side), but existing damaged files must be recovered through a manual process; it is not sufficient to update the components.
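To check whether a particular local file carries the bogus epoch mtime, a quick look with stat helps (GNU coreutils; the path is a placeholder):

# %y prints the modification time, %n the file name;
# a 1970-01-01 date marks a damaged file
stat -c '%y %n' /path/to/suspect/file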

Mmmmmh, I am not sure if I got this correctly, but:

  1. The changelog of 3.4.2 does not mention that this bug was addressed
  2. Only the changelog of 23.0.1 mentions a bugfix regarding changed mtimes
  3. 23.0.1 is not released yet, so 22.2.3 is still vulnerable

IMHO, this bug depends on virtual files, but I cannot prove it, and the communication of the devs is …

Regards,
A.

the fix was in 3.4.1

Wrong - users on Linux and macOS were affected as well (i.e. without VFS support).

Somewhat true - as the problem is located in the client, each server version is safe as long as your client doesn't break data; newer server versions just add additional protection.


I spent more time on this and created an SQL query that collects all files with an invalid (1970-01-01) mtime from the DB and looks up the files_versions entries for those files, together with their change time and original mtime.
One can use the output to correct the mtime easily (if your DB still has file version entries).

Warning:

  • At the moment I don't know if it works when multiple versions of the damaged file exist (I don't have any in my playground); a simple sort/limit should choose a good version, but this needs verification!

# Collect every file below files/ with an invalid (epoch 0) mtime and join it
# with its files_versions entries; the '.v<unixtime>' suffix of a version name
# carries the file's original mtime.
SELECT f.fileid
	,fv.fileid AS version_fileid
	,f.fspath
	,fv.fspath AS version_fspath
	,versionpath
	,f.size
	,versionsize
	,f.mtime
	,f.filetime
	,fv.change_time
	,fv.original_mtime

# damaged files: everything below files/ with mtime = 0
FROM (SELECT storage
	,fileid
	,path
	,TRIM(LEADING 'files/' FROM path) AS fspath
	,size
	,mtime
	,FROM_UNIXTIME(mtime) AS filetime
FROM oc_filecache
WHERE path LIKE 'files/%' AND mtime=0 
#AND fileid=308330
#LIMIT 10
) f

# version entries: original mtime recovered from the '.v' suffix of the name
JOIN (SELECT storage
	,fileid
	,SUBSTRING_INDEX(TRIM(LEADING 'files_versions/' FROM path),'.v',1) AS fspath
	,path AS versionpath
	,size AS versionsize
	,mtime
	,FROM_UNIXTIME(mtime) AS change_time
	,SUBSTRING_INDEX(name,'.v',-1) AS original_mtime
	,FROM_UNIXTIME(SUBSTRING_INDEX(name,'.v',-1)) AS original_time
	FROM oc_filecache
WHERE path LIKE 'files_versions/%.v%') fv
# also match on storage so identical relative paths of different users
# don't cross-match
ON f.storage=fv.storage AND f.fspath=fv.fspath

Uncomment the lines

#AND fileid=308330
#LIMIT 10

to test the query on your known broken files and to limit the output to a few results while you are still working out a solution.
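To actually apply the recovered timestamps, a minimal sketch could look like the following (assuming MySQL, the default oc_ table prefix, a single affected user, and a standard data directory layout; all credentials and paths are placeholders):

#!/bin/bash
# Sketch: pull the relative path + original mtime of every damaged file from
# the DB and touch the file on disk back to its original modification time.
# Note: if several versions of a file exist it is touched once per version;
# the last row wins (see the warning above).

DATADIR="/var/www/nextcloud/data"   # adjust to your installation
NCUSER="someuser"                   # adjust: the affected Nextcloud user

QUERY="SELECT TRIM(LEADING 'files/' FROM f.path),
       SUBSTRING_INDEX(v.name,'.v',-1)
FROM oc_filecache f
JOIN oc_filecache v
  ON v.storage=f.storage
 AND SUBSTRING_INDEX(TRIM(LEADING 'files_versions/' FROM v.path),'.v',1)
     = TRIM(LEADING 'files/' FROM f.path)
WHERE f.path LIKE 'files/%' AND f.mtime=0
  AND v.path LIKE 'files_versions/%.v%';"

# -N suppresses the header row, -B produces tab-separated output
mysql -N -B -u nextcloud -p nextcloud_db -e "$QUERY" |
while IFS=$'\t' read -r fspath original_mtime; do
	# restore the mtime recovered from the version file's name
	touch -d "@$original_mtime" "$DATADIR/$NCUSER/files/$fspath"
done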

P.S. Don't forget to perform occ files:scan --all if you only touch the files on the disk.

Does no one but me have the problem of files and folders dated 84 years in the future (the end of Unix time on 32-bit systems)?
From Wikipedia

For example, changing time_t to an unsigned 32-bit integer, which would extend the range to 2106 (specifically, 06:28:15 UTC on Sunday, 7 February 2106), would adversely affect programs that store, retrieve, or manipulate dates prior to 1970, as such dates are represented by negative numbers

I think a few users reported times in the future, but the root cause must be the same (maybe 0 turning into the max uint32 value due to an integer underflow during time zone calculations). If you are lucky and your server keeps file versions of your files, you can start fixing them using the SQL query from above.
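A minimal illustration of that wraparound hypothesis in plain shell arithmetic (this is not the actual client code, just the arithmetic):

# a timestamp of 0 minus a one-hour timezone offset, reinterpreted as an
# unsigned 32-bit value, lands close to the end of 32-bit Unix time in 2106
printf '%u\n' $(( (0 - 3600) & 0xFFFFFFFF ))   # 4294963696
date -u -d @4294963696                         # Sun Feb  7 05:28:16 UTC 2106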

Hi. Here's how I managed to solve this.

First of all you need to find all the files with destroyed timestamps:

find yournextcloudfilelocation/ ! -newermt 1970-01-02 > damaged.txt

Then use this script to touch the files and set a new date. You can set whatever date you want; just change '15 jan'.

#!/bin/bash

# list of damaged files produced by the find command above
file="damaged.txt"

# IFS= and -r keep whitespace and backslashes in filenames intact
while IFS= read -r line; do
    touch -d '15 jan' "$line"
done < "$file"

Then, if you use Docker:

sudo docker exec --user www-data <your_container_id> php occ files:scan --all

If you don't use Docker, then:

sudo -u www-data php occ files:scan --all

After that you need to resync all your clients from scratch.
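A variant without the intermediate file, which also survives unusual filenames (assuming GNU find and xargs; the data path is a placeholder):

# -print0/-0 keep filenames with spaces or newlines intact;
# -r skips the touch entirely when nothing matches
find /path/to/nextcloud/data ! -newermt 1970-01-02 -type f -print0 \
	| xargs -0 -r touch -d '15 jan'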


Saw the same problem today and identified it the same way.
Touched the files, followed by a scan of the user, and it fixed the problem.
I'm going to do this on the 3'000 other files tomorrow with the same script as above.
We pushed 3.4.1 to each desktop, so we should not face the problem anymore. But… we have no control over the users who manually installed 3.4.0 without knowing it was a problem, and they may continue to break those files. At least this time we'll know where they are and who is behind them, and we'll ask them to upgrade to > 3.4.0.

Thanks for posting this - I am here as a helpless user who does not know how to use Linux but can follow detailed instructions.

After two months, I am still having huge difficulties with residual files that have had their timestamps destroyed, and thus I am unable to sync my desktop clients. It's a mess, and I am probably making things worse while trying to get my regular work done and also looking for fixes. I have emergency backups all over the place, but as time goes on things diverge, and it's very hard to manage.

I installed version 3.4.2 for Windows on my two desktop and one laptop computers.

Given that I have no support from my workplace, who installed the server software (apparently I am one of very few people who installed the client at all, and even fewer complain about this issue, if I'm not the only one), what do you more savvy folks suggest I do?

P.S. I had a sympathetic person from the university IT say that they couldn't offer support, but if there are detailed instructions for a fix that needs to be done on the server end to find the files with invalid times (not something I can do myself), I could pass them on and hope they apply it. Thank you!

There have been a few attempts to fix the files from different perspectives - see the previous posts and the linked GitHub issues… Having access to the client only makes it much harder. From the beginning, the only doable solution was to restore a backup. There will be no way to restore the timestamps using the client side only.

I would proceed as follows:

  1. backup the current state
  2. remove all the files (and sync to the server)
  3. restore your last known-good file state
  4. move/review files from 1. which are newer than the backup in 3.

Being limited to client-only access, this is maybe the best option to fix your data. If you don't care about the real timestamps, just search for the files with an invalid creation date and reset them to now, as in the sketch below…
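For that last option, a minimal client-side sketch (assuming the default sync folder location; adjust the path):

# reset every file with a pre-1971 mtime in the local sync folder to "now";
# the client will then upload them as fresh changes
find "$HOME/Nextcloud" ! -newermt 1971-01-01 -type f -print0 | xargs -0 -r touch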


Thank you very much! This worked very well for me.

After a month I still saw some files with epoch time 1970. Fortunately I had an old backup and was able to restore them all, too.

After that I ran the script from nextcloud-gmbh/mtime_fixer_tool_kit (tool kit to fix the mtime issue on the server state, on github.com):

Usage: ./list_problematic_files_on_db.sh <mysql|pgsql> <db_host> <db_user> <db_pwd> <db_name>
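For example, with purely hypothetical credentials:

./list_problematic_files_on_db.sh mysql localhost nextcloud secret nextcloud_db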

… I got only one file which seems to be "bad" on the DB side:

local::/ncpath/badfile

Does anyone know how to remove it from the database?

Once you have changed the file on the file system, occ files:scan --all is expected to fix/update all database records.
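If it is just this one entry, touching the file and rescanning only the affected path may be quicker than a full scan (the path and username below are placeholders based on the anonymized output above):

# fix the mtime on disk first, then rescan just that subtree
sudo -u www-data touch /ncpath/badfile
sudo -u www-data php occ files:scan --path="youruser/files/badfile"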