Document xxx has not been reset, as it has unsaved changes

Nextcloud version (eg, 20.0.5): 23.0.10
Operating system and version (eg, Ubuntu 20.04): 20.04
Apache or nginx version (eg, Apache 2.4.25): 2.4.54
PHP version (eg, 7.4): 7.4.32

The issue you are facing:

Is this the first time you’ve seen this error? (Y/N): N

Steps to replicate it:

  1. Go to Settings → Logging and see multiple entries like the one below.
  2. Check the logs and see that this has already been going on for many days:
grep "10534193" /var/nextcloud/data/nextcloud.* | wc -l
1614
# Check archived logs
zgrep "10534193" /var/nextcloud/data/nextcloud.*.gz | wc -l
4316

The output of your Nextcloud log in Admin > Logging:

{"reqId":"9ZC9hCyF9UW4k7W6vuK3","level":1,"time":"2022-10-19T09:20:02+00:00","remoteAddr":"","user":"--","app":"text","method":"","url":"--","message":"Document 10534193 has not been reset, as it has unsaved changes","userAgent":"--","version":"23.0.10.1","data":{"app":"text"},"id":"634fc878a27bc"}

How can I find out which document this is?
How can I “save” the changes to get rid of the error?
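The number in the log message is the file ID, so one way to answer the first question is to look it up in the file cache directly. A minimal sketch, assuming a MySQL database with the default `oc_` table prefix:

```
-- 10534193 is the document ID from the log entry above
SELECT fileid, path, name
FROM oc_filecache
WHERE fileid = 10534193;
```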

The error is still there. Any thoughts?

I have the same issue, and since I found no solution, I am also posting here.

I found the file by digging into the database and searching for its ID using adminer.php, a free tool for browsing MySQL. The SQL command is:
```
SELECT *
FROM oc_activity
WHERE (activity_id LIKE '%2089078%' OR timestamp LIKE '%2089078%' OR priority LIKE '%2089078%' OR type LIKE '%2089078%' OR user LIKE '%2089078%' OR affecteduser LIKE '%2089078%' OR app LIKE '%2089078%' OR subject LIKE '%2089078%' OR subjectparams LIKE '%2089078%' OR message LIKE '%2089078%' OR messageparams LIKE '%2089078%' OR file LIKE '%2089078%' OR link LIKE '%2089078%' OR object_type LIKE '%2089078%' OR object_id LIKE '%2089078%')
LIMIT 50;
```
You can also find it by searching on `object_id` alone.

Here 2089078 was my file ID, equal to `object_id`.
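For reference, the two lookups can also be combined into one query that shows the current path (if the file still exists in the file cache). A sketch, again assuming the default `oc_` prefix:

```
SELECT a.object_id, a.subject, f.path
FROM oc_activity a
LEFT JOIN oc_filecache f ON f.fileid = a.object_id
WHERE a.object_id = 2089078
LIMIT 50;
```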

Though I found the file (in my case a text file, gallery.cnf, in my root / folder), editing it created more errors.

I then renamed it to .txt, which resulted in this error:

```
Info    files_versions    2023-01-06T20:30:12+0100
Expire: /gallery.txt.v1673032424

Info    files_versions    2023-01-06T20:30:12+0100
Mark to expire /gallery.txt next version should be 1673032376 or smaller. (prevTimestamp: 1673032436; step: 60
```

But the error about the document not being reset remains.

I downloaded the file and deleted it from the cloud, which seemed to solve the issue, but when I upload it to the cloud root / folder again, the log entries return.

I have also tried creating a new file, which in my assumption should get a new file_id, but after renaming it to the old file name gallery.cnf the error returned as well. :face_with_spiral_eyes:
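To check whether the re-created file really got a new file_id, a lookup by name should show it (a sketch, assuming the default `oc_` prefix):

```
-- A freshly created file should show up here with a new, higher fileid
SELECT fileid, path
FROM oc_filecache
WHERE name = 'gallery.cnf';
```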

So my only solution (for now) is to not use this file anymore; I am not really using it anyway, and for me it is not worth investigating further.

I hope someone can help you further with this case.

Keep in mind that when working with the files, the server needs time to react. It is best to either run cron manually or wait for it to run before reloading and inspecting the logs.
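Running cron manually looks like this on a typical installation under /var/www/nextcloud (adjust the path and web server user to your setup):

```
sudo -u www-data php /var/www/nextcloud/cron.php
```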

Remember: if you care about your data, you should keep a good backup that has been tested.

Well, the above did not remove the log entries, and as I am impatient I went ahead and removed all references to the no-longer-existing file from the following MySQL tables (see the sketch after this list):

oc_filecache
oc_files_antivirus
oc_text_sessions
oc_text_steps
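The deletes looked roughly like the sketch below. The column names are assumptions (`fileid` for the file tables, `document_id` for the text app tables), so check them against your own schema, and take a backup before deleting anything:

```
-- 2089078 was my file ID; adjust to yours
DELETE FROM oc_filecache       WHERE fileid = 2089078;
DELETE FROM oc_files_antivirus WHERE fileid = 2089078;
DELETE FROM oc_text_sessions   WHERE document_id = 2089078;
DELETE FROM oc_text_steps      WHERE document_id = 2089078;
```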

I have not seen the log entry occur in the last 30 minutes (6 cron jobs).

This is of course not the way it should be solved, and I have no idea how to reproduce such a file.

Or, thanks!
I used your command and found the corresponding file in the Notes folder.

```
SELECT * FROM `oc_activity` WHERE object_id LIKE '%10534193%' LIMIT 0,25;
```

I thought the problem was a comma in the file name, so I renamed the file from blabla,.md to blabla.md, but this didn't help.
I deleted the file (it was moved to the trashbin) and restored it; this did not help either.
Only complete removal (moving it to the trashbin) works… I don't know why. If you remove it with NC tools via the trashbin, it seems you do not need to update any tables at all.

UPDATE:
I was wrong. Removal and even trashbin cleanup will not solve the issue with the log entries; it seems I do need to clean up the DB manually, as you mentioned.