Desktop client 3.4.0 destroys local time stamp and keeps uploading data to server

I absolutely agree and I'm aware of the possible steps to repro the issue… one could start with a virtual machine and an NC instance without production files, check if the problem exists, go a step further to a fake user on your production instance with your production client, and finally end with your production user and production data:

Each additional step you add to improve security and avoid data loss costs you time and resources, and if the problem strikes at the very last stage, when you trust the update and test on your production data, you end up in the same situation - hours of recovery work - in addition to all the testing before…

Chances are you find the issue running all the dry tests with a fake user/fake data before you touch the production data - but the question is who is willing to spend days of their spare time (and at the same time is well trained enough to understand and document the tests in a good way)…

And Nextcloud could run all these dry-run tests on their own (hopefully they do)… Nextcloud already has to run these tests for their paying customers… I feel it is a little unfair that they offload this time-consuming, dirty work to the community without providing good support and recovery options…

In the other discussion, about another legal entity, Jos and others always talked about money and resources - this is exactly the right discussion - I'm glad to give away a portion of my time and expertise for the community, but I'm not willing to work as a full-time test engineer without a reasonable return…

@jospoortvliet I'm ready to talk about a deal - I spend my time on comprehensive testing in exchange for "credits" with which I can choose features/bugs/improvements I would like to push from my side. We can definitely negotiate a good quota of my time vs. NC time…

You asked how to avoid spending 3 h on restoring data. And just taking a fake user on your productive setup (and perhaps your local system) - does it take 30 min? Not hours. You are not supposed to do full testing and you don't need to set up x virtual machines. You just want a fair chance of seeing how your productive user might react.

What about the people spending their time helping other users, working on the documentation, translation etc.?

For certain bugs there is a monetary compensation if you report them via hackerone.

I do it as well!

If I were looking for money I would not spend my time here…

No, it is not true. The condition triggering the bug must be somewhat obscure and might only happen with a production user and a reasonable amount of data - otherwise it would be a shame that such a serious bug passed QA.

A little testing under lab conditions doesn't help in this situation - a test with a fake user/data that doesn't hit the problem shows absolutely nothing: the bug might still exist, it is simply not triggered in this specific setup… Only if you are lucky and the problem occurs in your lab setup do you gain anything from the test… otherwise it's a complete waste of time. To become confident the problem is fixed you must exactly reproduce the setup in which you hit the issue before (and even that doesn't provide a 100% guarantee)…

My main goal is to shift the focus to how we can help people who still experience the issue, or might hit it later (maybe during testing), recover from the problem.

I had no resync effect with the 3.4 version. Is there any short way to recognize if the issue occurs in my environment with 3.4.1?

I mean searching the logs for specific entries, or checking attributes of the files on the server side.
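For example, would something like this on the server be a reasonable check? (Just a rough idea, assuming GNU find and direct shell access to the data directory - the path below is only an example.)

# list files whose modification time is at or before the Unix epoch era (e.g. Jan 1 1970)
find /var/www/nextcloud/data -type f ! -newermt "1971-01-01" -printf '%TY-%Tm-%Td %p\n'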

It doesn't happen to everyone, clearly. I don't really know why - I'm not really technical enough and I don't know exactly what the cause of the problem was. The issues we've seen on our server (0-byte files and wrong modification dates, like, 50 years in the past) are not so hard to find, though.

Another update which should fix the problems with weird dates:

🪟 Windows: https://cloud.nextcloud.com/s/b6wdJktaP9PtnHN
:penguin: Linux: https://cloud.nextcloud.com/s/WH6LmjDoxaYW6by
:apple: Mac OS X: Nextcloud


Hello
the method I have used is far from perfect.
I did query the database to get all files with invalid dates.

mysql <database name>
select path from oc_filecache where mtime <= 0;

From this I generated a script to set valid dates (not taken from a backup, because I wanted to be quick):

touch -c <filename reported by the query>

The hard thing is that the path returned by the first query is missing the path to the storage, and I do not have an easy way to solve that.
At the end I just trigger a scan of the files to update the database again:

sudo -u www-data php <path to nextcloud>/occ files:scan

Thank you @mgallien for this starting point. As far as I understand, you reset the invalid time stamp of the file to now… I was looking for a more complex approach, like restoring versions and recovering the original file date.

I will try to script something over the weekend - could you help me and explain where in the DB the information about file versions is stored and how the original file and its versions are linked together? Are there any docs about the database schema available? I didn't find anything…

Regarding the storage path: in oc_filecache there is a column storage which holds the numeric_id from the oc_storages table - both pieces can be combined to build the full path.
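Something along these lines should list the storage identifier next to each damaged file (untested beyond my instance; for local home storages the id column typically looks like home::<username>):

# resolve the numeric storage id of each file with invalid mtime
select s.id as storage_id, f.path, f.mtime
from oc_filecache f
join oc_storages s on s.numeric_id = f.storage
where f.path like 'files/%' and f.mtime <= 0;

For a home::<username> storage the full filesystem path should then be <data directory>/<username>/<path>; other storage types (local::…, object storage, group folders) would need different handling.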

I didn't have serious disruption on 3.4.0, but I did have several folders resync when they were already up to date. This has not recurred on 3.4.1 RC1, for either of the two builds posted. All run on Win10 Enterprise 2004.

I did not have any issues on 3.4.0 with server 21.0.7 without virtual files, and I do not have any with … Nextcloud-bugfix-3.4.1RC1-build-8524-unbranded.AppImage


Can confirm this was affecting me on server 22.2.0.

I tried hard to find a universal solution to link the files damaged by the sync with valid versions and allow subsequent recovery… At the moment my SQL skills only allow this summary of intermediate steps, which hopefully helps others. For some really strange reason there is no clear relation (in terms of a unique ID) between the files and their versions… Both are listed in the oc_filecache table… the only difference is:

  • real files have the prefix files/
  • while versions have the files_versions/ prefix and a .v<unixepoch> suffix in the path column

So in my eyes the only way to detect that a file and its version belong together is to strip the files/ prefix and search for records with the files_versions/ prefix and the same value at the end (after stripping the suffix that starts with .v).

Here I explain the procedure with one file I found with the search term @mgallien provided:


# relevant data from oc_filecache for files with invalid mtime
select storage,fileid,trim(LEADING 'files/' FROM path) as path,size,mtime,from_unixtime(mtime) as mtime from oc_filecache WHERE path like 'files/%' and mtime=0 and fileid=308330 \G;

This query shows one file with the id 308330 - skip the and fileid=308330 condition to see all files with invalid mtime, and skip \G to see the results as a table… Some hints:

  • trim(LEADING 'files/' FROM path)
    shows the path of the original file without the files/ prefix
  • from_unixtime(mtime) as mtime
    converts the Unix epoch to a human-readable time
  • path like 'files/%' and mtime=0
    lists regular files with an invalid change time
  • and fileid=308330
    limits results to one specific file
  • \G
    makes results appear as a list rather than a table
# file version for the specific file path collected from the above query
select storage,fileid,SUBSTRING_INDEX(trim(LEADING 'files_versions/' FROM path),'.v',1) as original_path,path,size,mtime as change_mtime,from_unixtime(mtime) as change_time,from_unixtime(SUBSTRING_INDEX(name,'.v',-1)) as original_mtime FROM oc_filecache WHERE path like CONCAT('files_versions/','Documents/PowerShell/PowerShell_Advanced_Kurs_2019/...eRequired/--Switch.txt','.v%') \G;
  • SUBSTRING_INDEX(trim(LEADING 'files_versions/' FROM path),'.v',1) as original_path
    shows the bare path without prefix and suffix (exactly the same as path in the above query)
  • from_unixtime(SUBSTRING_INDEX(name,'.v',-1)) as original_mtime
    the suffix of the path value shows when this file version was created (the mtime of the original file)
  • WHERE path like CONCAT('files_versions/','Documents/PowerShell/PowerShell_Advanced_Kurs_2019/...eRequired/--Switch.txt','.v%')
    filters the table for the filename from the above query but prepends the files_versions/ prefix and the .v… suffix - in my case there is only one version, but depending on how long desktop client 3.4.0 was syncing you might have multiple. I used CONCAT so you can directly feed the path from the above query as the middle parameter
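In principle both lookups can probably be combined into a single self-join of oc_filecache, something along these lines (untested beyond this one file, same column meanings as above):

# combined query: each damaged file together with its version entries
select f.fileid, f.path, v.path as version_path,
       from_unixtime(SUBSTRING_INDEX(v.path,'.v',-1)) as original_mtime
from oc_filecache f
join oc_filecache v
  on v.storage = f.storage
 and v.path like CONCAT('files_versions/', trim(LEADING 'files/' FROM f.path), '.v%')
where f.path like 'files/%' and f.mtime = 0;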

At the moment my SQL skills are not enough to construct a recovery action from these findings which is valid for different architectures, but with this starting point it's not hard to script something, using your preferred scripting/programming tool, which

  • builds the list of affected files using the first query
  • repeats the second query using the file paths from the first result as input and
    – collects a list of existing file versions
  • depending on your approach and skills
    – either extracts the correct modification dates from the versions and changes the creation/modification dates on the files (see the sketch below)
    – or moves the version you like to the original location and recovers by running occ files:scan
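For illustration, here is a rough and untested sketch of the first approach (pulling the original mtime out of the version suffix and applying it with touch). It assumes MariaDB/MySQL, a database called nextcloud with the default oc_ table prefix, plain local home:: storages and a data directory at /var/www/nextcloud/data - adjust the names to your setup and try it on a test user or a copy first:

#!/bin/bash
# Untested sketch: restore file mtimes from the newest version entry in oc_filecache.
# Assumed names: database "nextcloud", table prefix "oc_", local home:: storages,
# data directory /var/www/nextcloud/data.
DATADIR=/var/www/nextcloud/data
DB=nextcloud
LIST=/tmp/damaged_files.tsv

# one line per damaged file: <username> <path below files/> <epoch of the newest version>
mysql -N -B "$DB" > "$LIST" <<'SQL'
SELECT REPLACE(s.id, 'home::', ''),
       TRIM(LEADING 'files/' FROM f.path),
       MAX(CAST(SUBSTRING_INDEX(v.path, '.v', -1) AS UNSIGNED))
FROM oc_filecache f
JOIN oc_storages  s ON s.numeric_id = f.storage
JOIN oc_filecache v ON v.storage = f.storage
 AND v.path LIKE CONCAT('files_versions/', TRIM(LEADING 'files/' FROM f.path), '.v%')
WHERE f.path LIKE 'files/%' AND f.mtime <= 0 AND s.id LIKE 'home::%'
GROUP BY s.id, f.path;
SQL

# apply the recovered mtime to the file on disk (touch -c never creates missing files);
# file names containing tabs or newlines would need extra care here
while IFS=$'\t' read -r user relpath epoch; do
    target="$DATADIR/$user/files/$relpath"
    echo "restoring mtime of $target to $(date -d "@$epoch")"
    touch -c -d "@$epoch" "$target"
done < "$LIST"

# let Nextcloud pick up the corrected dates
sudo -u www-data php /var/www/nextcloud/occ files:scan --all

The second approach (moving a version back into place) would additionally mean copying the matching files_versions/….v<epoch> file over the damaged one before running the scan.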

I'm not sure if I should report results from 3.4.1 RC here or in the GitHub issue? (https://github.com/nextcloud/desktop/issues/4016)

I've been using the 3.4.1 RC for a few days, and I hit a problem today. I see a lot of these error messages:

XXXXXX\file.xlsx has invalid modified time reported by server. Do not save it.

Testing-related problems are best reported directly to the bug tracker.

Installed 3.4.1 from GitHub and ran into the issue. Unfortunately, the files were deleted from Nextcloud. No data loss, since a backup was available.


Hi, macOS user here (12.1) with Nextcloud 3.4.1

I get errors like:

xxx has invalid modified time reported by server. Do not save it.

On the server, all the affected files are shown as created Jan 1 1970

Me too…
Got the Nextcloud client 3.4.1 and that is what I get:

@jospoortvliet it's a shame that 20 days after the issue was identified there is no guidance from Nextcloud GmbH on how to recover from it… It might be no small issue to restore a private instance, but what about a small business with 10 users, 2-3 of them affected by the issue? How should they respond to the problem?

I feel really bad that customers are left alone with the problem… and the only official person participating in this thread is a marketing guy… Kudos to @jospoortvliet, and a really bad mood towards the tech personnel who prefer to stay hidden behind the "forum firewall".


For a possible fix take a look at the following wiki page:


Hi Mathieu,

I tried your shell script and I got a lot of errors from the mysql query, because some directories have a single quote in their name.

Moreover, the script only browses some files of the first user; it doesn't go to the second one nor to the groupfolders.

Output error:
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'écran/iOS-14.2-wallpaper-LAke-The-Cliff-Light-Mode.jpg'' at line 2
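My guess is that the script pastes the file names into the SQL statement without escaping them, so a single quote in a directory name ends the string literal early. Doubling the single quotes before building the statement should avoid that - roughly like this (a hypothetical example, I have not looked at the script internals; the name and database name are made up):

# hypothetical directory name containing a single quote
name="Fonds d'exemple/wallpaper.jpg"
# double every single quote so the name survives inside a SQL string literal
escaped=$(printf '%s' "$name" | sed "s/'/''/g")
mysql -N -B nextcloud -e "select mtime from oc_filecache where path like 'files/${escaped}%';"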

I am also getting this "…has invalid time reported by server. Do not save it" warning for multiple files, every time I sync. That is based off a clean install of 3.4.1 (done yesterday), on a clean install of Windows 11 (done the day before).

The frustrating thing is that Nextcloud is giving me no information or guidance on what to do next. How can I save it or not save it? The error shows, but I can't interact with it any further.
