New Client with Virtual Filesystem cannot sync huge folders

I have set up a new computer to test the virtual file system; migrating my existing NC installation on my desktop was causing other problems, so I wanted to test it from scratch.
On my server I have a folder “Photos” that holds about 300 GB and almost 60,000 files, sorted by year.
When I connect the new computer to my NC server, it starts scanning all the folders. The Photos folder can never be scanned completely: every time, a new error message pops up, sometimes for a file from 2011, sometimes for a file from 2017, never the same file.

Then it starts scanning again from scratch, so I am stuck in a loop.

I use NC 21.0.1 and Windows 10 Pro x64 with Desktop Client 3.2.0.
Are there any known limitations of the new virtual filesystem feature?

Hi, this sounds like a bug. Please open a new issue for it in the desktop client repository after making sure that no such bug report already exists: GitHub - nextcloud/desktop: 💻 Desktop sync client for Nextcloud
Either way, please post the link to the issue covering this bug here afterwards.

Seems to be a known bug:

I just ran the tests on three computers with the 3.2.0 client installed, both as the 32-bit and as the 64-bit version:
The synchronization never finishes when Virtual Drive Support is enabled. When I disable it, the client goes through all the folders and finally reports: Synchronized.

So it must be a bug in either the VFS feature or the 3.2 client.

My server error logfile shows:
[Mon Apr 19 07:19:21.377363 2021] [proxy_fcgi:error] [pid 12740:tid 140270704207616] (70007)The timeout specified has expired: [client 192.168.10.125:64823] AH01075: Error dispatching request to : (polling)
[Mon Apr 19 07:19:23.402128 2021] [proxy_fcgi:error] [pid 12740:tid 140270720993024] (70007)The timeout specified has expired: [client 192.168.10.125:64816] AH01075: Error dispatching request to : (polling)
[Mon Apr 19 07:19:24.374095 2021] [proxy_fcgi:error] [pid 12740:tid 140270695814912] (70007)The timeout specified has expired: [client 192.168.10.125:64825] AH01075: Error dispatching request to : (polling)
[Mon Apr 19 07:19:29.714338 2021] [proxy_fcgi:error] [pid 12740:tid 140270729385728] (70007)The timeout specified has expired: [client 192.168.10.125:64827] AH01075: Error dispatching request to : (polling)

Is there any known bug where a folder with 60,000 files in several subfolders causes Nextcloud to abort the sync? The client seems to be running into a timeout, because in the client logfile I can find:

2021-04-19 14:44:40:382 [ debug nextcloud.sync.database.sql ]	[ OCC::SqlQuery::exec ]:	SQL exec "SELECT lastTryEtag, lastTryModtime, retrycount, errorstring, lastTryTime, ignoreDuration, renameTarget, errorCategory, requestId FROM blacklist WHERE path=?1 COLLATE NOCASE"
2021-04-19 14:44:40:382 [ info sync.discovery ]:	STARTING "FOTOS/2019/2019_08_09_Silas_Jonas" OCC::ProcessDirectoryJob::NormalQuery "FOTOS/2019/2019_08_09_Silas_Jonas" OCC::ProcessDirectoryJob::ParentDontExist
2021-04-19 14:44:40:382 [ info nextcloud.sync.accessmanager ]:	6 "PROPFIND" "https://servername/nextcloud/remote.php/dav/files/testuser/FOTOS/2019/2019_08_09_Silas_Jonas" has X-Request-ID "9a4f07cb-7f19-43df-9b09-784741fca4da"
2021-04-19 14:44:40:382 [ debug nextcloud.sync.cookiejar ]	[ OCC::CookieJar::cookiesForUrl ]:	QUrl("https://servername/nextcloud/remote.php/dav/files/testuser/FOTOS/2019/2019_08_09_Silas_Jonas") requests: (QNetworkCookie("nc_sameSiteCookielax=true; secure; HttpOnly; expires=Fri, 31-Dec-2100 23:59:59 GMT; domain=servername; path=/nextcloud"), QNetworkCookie("nc_sameSiteCookiestrict=true; secure; HttpOnly; expires=Fri, 31-Dec-2100 23:59:59 GMT; domain=servername; path=/nextcloud"), QNetworkCookie("oc_sessionPassphrase=K%2BWJoJAlcDrs1EUnolQCSTHYaSjx33OPBH9vC7YjFaGaVXSmSsq5gCnHbxdNQZYl9dCdjx5Qei6IVRM5i11ff7Okkb2QirI4TS1XXtjq%2Fz8QZeiF79fodEbMCyji9YHw; secure; HttpOnly; domain=servername; path=/nextcloud"), QNetworkCookie("ocut4lyy62j6=uu3tcdulfj1l82smb5k8qa0fgp; secure; HttpOnly; domain=servername; path=/nextcloud"))
2021-04-19 14:44:40:382 [ info nextcloud.sync.networkjob ]:	OCC::LsColJob created for "https://servername/nextcloud" + "/FOTOS/2019/2019_08_09_Silas_Jonas" "OCC::DiscoverySingleDirectoryJob"
2021-04-19 14:44:40:616 [ warning nextcloud.sync.networkjob ]:	Network job timeout QUrl("https://servername/nextcloud/remote.php/dav/files/testuser/FOTOS/100MEDIA")
2021-04-19 14:44:40:616 [ info nextcloud.sync.credentials.webflow ]:	request finished
2021-04-19 14:44:40:616 [ warning nextcloud.sync.networkjob ]:	QNetworkReply::OperationCanceledError "Connection timed out" QVariant(Invalid)
2021-04-19 14:44:40:616 [ warning nextcloud.sync.credentials.webflow ]:	QNetworkReply::OperationCanceledError
2021-04-19 14:44:40:616 [ warning nextcloud.sync.credentials.webflow ]:	"Operation canceled"

I changed the following parameters in the files below, which sometimes seems to help a little. Could someone please test them too?

Add the following lines to the [General] section of %appdata%\Nextcloud\nextcloud.cfg:

chunkSize=268435456
timeout=600
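
For reference: chunkSize is given in bytes (268435456 bytes = 256 MiB) and timeout in seconds, so this sets a 10-minute client-side network timeout matching the Apache and PHP timeouts below.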

On the Nextcloud server, modify the following parameters:

/etc/php/7.4/fpm/php.ini
post_max_size = 40M
upload_max_filesize = 40M
max_execution_time = 300
max_input_time = 600

/etc/php/7.4/apache2/php.ini
post_max_size = 40M
upload_max_filesize = 40M
max_execution_time = 300
max_input_time = 600

/etc/apache2/apache2.conf
Timeout 600
ProxyTimeout 600

Restart the Apache web server and the PHP service (a sketch of the commands follows below). Then restart the Nextcloud Desktop Client and check whether the timeout is gone.
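
A minimal sketch of the restart, assuming a Debian/Ubuntu server with systemd and PHP-FPM 7.4 (matching the /etc/php/7.4 paths above); the service names are assumptions, so adjust them to your distribution:

sudo systemctl restart php7.4-fpm   # pick up the php.ini changes
sudo systemctl restart apache2      # pick up Timeout/ProxyTimeout from apache2.conf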

In my case the timeout is not there anymore, but the client is now stuck at “Reconciling changes” for hours:

(screenshot)

12 hours later it is stuck at:

(screenshot)

In the local logfile I did not get any error message.

Had the same problem yesterday with a 21.0.1 installation and a 3.2.2 client: in total, 60,000 files in 20 main folders (photos sorted by year) cannot be synchronized. The workaround was to select only three or four main folders, do the local sync, and once that was done add the next folders. When selecting the main folder with all files and subfolders, the client aborts. So it could be a problem with an internal buffer overrun or similar; the number of files seems to be too large, no matter whether I use VFS or not.

Because it seems that no one knows the problem or how to test it, here is a Windows cmd script you can use to create huge folder trees easily. Just create a folder D:\TEMP and place a small file named bild.jpg in it. Then specify how many main folders should be created below D:\TEMP and how many subfolders each of them should have. Finally, specify how many copies of the picture every subfolder should get. Within a few minutes you can create a tree with 100,000 items and lots of folders that can be used for testing.

@echo off
set main_folders=20
set sub_folders=50
set count_files=100
set template_file=bild.jpg
set root_folder=D:\TEMP

REM Create the main folders below the root folder
FOR /L %%i IN (1,1,%main_folders%) DO (
    cd /d %root_folder%
    mkdir Mainfolder_%%i
    cd Mainfolder_%%i

    REM Create the subfolders in the current main folder
    FOR /L %%k IN (1,1,%sub_folders%) DO (
        mkdir Subfolder_%%k
        cd /d Subfolder_%%k

        REM Fill the current subfolder with copies of the template file
        FOR /L %%l IN (1,1,%count_files%) DO (
            copy /Y %root_folder%\%template_file% .\%%l_%template_file% >NUL
        )
        echo Creating files in Mainfolder_%%i\Subfolder_%%k done
        cd /d %root_folder%\Mainfolder_%%i
    )
    cd /d %root_folder%
)
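
With the defaults above, the script creates 20 × 50 = 1,000 subfolders holding 100,000 copies of bild.jpg, which is roughly the scale of the photo libraries described earlier in this thread.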

With that script I am able to reproduce several errors during upload and download.
