Client sync issues after mass copy on server disk

Hi there,

I have to sync 650 GB of files between my workstation and my NAS server. The network speed makes this operation terribly long, so I followed a how-to page for mass copying files directly onto the Nextcloud server disk:

It explained how to copy files directly from a disk into the /<nextcloud-repo>/<user>/files/ directory and then run the command line sudo -u www-data php console.php files:scan --all
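For reference, the rescan step from such guides typically looks like this on the server. This is only a sketch: the install path /var/www/nextcloud and the www-data web-server user are assumptions that depend on your distribution, and newer Nextcloud releases use `occ` instead of `console.php`.

```shell
# Sketch only: path and user are typical Debian/Ubuntu defaults,
# adjust both to match your own install.
cd /var/www/nextcloud

# Rescan a single user's files (faster than --all on a large data set):
sudo -u www-data php console.php files:scan <user>

# Or rescan every user:
sudo -u www-data php console.php files:scan --all
```

Running the scan as the web server user matters: if the files in the cache table end up owned by root, the web interface and clients may not be able to read them.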

Now I’m facing a lot of synchronization issues between the original disk and the server (11.0.2 (stable)) in the Windows 10 client app (Version 2.2.4 (build 2)). Almost all my files are marked as:

Error: Downloaded file is empty, although server sent xMB as size

Can you tell me if I made a mistake in my copy operation?
Thanks a lot for your help.

@sebsan_999 the tutorial shows how to transfer mass data to a Nextcloud server from an external HDD containing the data, connected to the Nextcloud server via USB, using rsync at the Ubuntu level. After the transfer, you need to mount the folder you transferred the data into as external local storage in Nextcloud. When this is done, force the rescan. Did you do this?

To sync the files onto the PC, the user must install the Nextcloud desktop app and use selective sync to download the data from the server.

In fact, I directly connected the server disk to my workstation via USB and copied only the missing files into the Nextcloud <user>/files/ directory. Then I put the disk back in my server, ran fsck, and ran the PHP console scan command.

Yes, but I have to make a first mass transfer to get my data onto the server… and keep it in sync with the original on my workstation. Am I missing something?

@sebsan_999 you need to transfer the data to the server, not the workstation. When you have finished (it takes about 2 hours, including the scan), use the Nextcloud desktop app on the workstation to do the synchronisation between the data on the workstation and the server.

@sebsan_999 It might also be that you did not transfer the files onto an NTFS partition on the server…

In my case, the data comes from my workstation and I need to keep it fully in sync (650 GB).
It would take several days (and more, if I trust the transfer time estimated by the app itself) to re-transfer all of it back to the workstation. I wanted to avoid the client app uploading 650 GB to the server over the client protocol, or the other way around.

No, I didn’t.

@sebsan_999 ideally you should copy all 650 GB of data from your workstation onto an external HDD. On your server, create an NTFS partition (assuming that you use Windows) to hold the data you will transfer. Copy the data onto the server using the tutorial, then let the workstation and server sync between themselves. They will build an index of all files, see what differs, and transfer only that.
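For what it’s worth, the copy-then-rescan flow the tutorial describes can be sketched like this. All paths here are placeholders (the data directory location depends on your install), and the ownership step is important because the scan skips files the web server user cannot read.

```shell
# Hypothetical paths: /mnt/usbdisk is the mounted external HDD,
# /var/www/nextcloud/data/<user>/files is the target user's data directory.
rsync -avh --progress /mnt/usbdisk/ /var/www/nextcloud/data/<user>/files/

# The web server user must own the files, otherwise the scan skips them:
sudo chown -R www-data:www-data /var/www/nextcloud/data/<user>/files

# Register the new files in the Nextcloud database:
cd /var/www/nextcloud
sudo -u www-data php console.php files:scan <user>
```

rsync’s `-a` flag preserves modification times, which also helps the desktop client match files on both sides instead of flagging them as changed.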

I cannot create an NTFS partition; my server is a Linux server on an ARM platform.
But this is what I did… I copied all the workstation files onto the server disk. I didn’t use an external disk, but I don’t see the difference. The files are placed where they have to be.
The sync between workstation and server doesn’t seem to work correctly.
A date/time issue, maybe?

Seems to be the same issue as:

I made a first sync test with a few directories.
I had already seen this message on a file I transferred via Samba. The only way out I found was to move the file on the server via the web interface, into a directory that was not synced with the client…

@sebsan_999 I think that this issue relates to the NTFS file structure. From the browser, are you able to view the files on the server and open them online? If not, can you do a force scan so that the Nextcloud database (MySQL or MariaDB) is populated on the server?

Do you have smbclient installed?

Workstation troubleshooting

Have you tried uninstalling the desktop app, rebooting (to clear the registry), and reinstalling the desktop app? You may keep your existing file structure as is.

Yes, I have complete access to my files from the web interface.

No.

I’ll try soon and I’ll keep you posted…

BTW: Thanks a lot for your patience @fab :wink:

@sebsan_999 you’re welcome. Always poke me by using @fab, otherwise I won’t receive notifications of your posts.

smbclient will help a lot.

Once you uninstall the desktop client, reboot the server as well. It helps too.

@fab is it normal that there is only a 32-bit version of the Windows client app?
I can see a 64-bit one for OS X and Linux… Could that be a problem?

@sebsan_999 not really; at least I don’t see any issue with the 32-bit version running on 64-bit Windows. All I can say is that the devs at Nextcloud should work on the desktop app, as the available one was ported from ownCloud and at times it gives me problems too.

In your case, you may have an issue where the date of the files on the server is newer, hence a version mismatch might have happened.

I have used the flow from the tutorial on three new Nextcloud installs without an issue, probably because the workstations never contained the data in the first place and the users used selective sync to download only the needed data. Having said this, I have always used NTFS partitions and mounted them at boot via /etc/fstab. Hardware-wise, I must be honest, I have had bad experiences with ARM devices: I was trying to convert a WD My Cloud into a Nextcloud Box but have failed in all my attempts so far. Hence I purely use real servers (Dell PowerEdge 1950s, 2950s and R710s).
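A boot-time mount like that is a single /etc/fstab line. This is only an illustrative fragment: the UUID and mount point are made up, and 33 is the numeric uid/gid of www-data on Debian/Ubuntu (find the real UUID with `blkid`).

```shell
# Hypothetical /etc/fstab entry mounting an NTFS data disk at boot.
# UUID and mount point are placeholders; 33 = www-data on Debian/Ubuntu.
UUID=0123-ABCD  /mnt/ncdata  ntfs-3g  defaults,uid=33,gid=33  0  0
```

The uid/gid options matter on NTFS because it has no native Unix ownership; without them the files would belong to root and Nextcloud’s scan would not be able to read them.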

If possible try to uninstall the desktop app, reboot and reinstall.

I do this to have full control of an Ubuntu 16.04 LTS server, as BusyBox Linux is annoyingly limited.

@fab
I re-installed the client app and the sync is on its way… We will see tomorrow, I think.
I am using a Hardkernel Odroid-XU ARM mini machine with a port of Ubuntu 14.04 32-bit and a 2 TB hard drive.
I’m pretty satisfied; I have access to almost everything on Linux, with good performance, for just 7-14 watts of power. :slight_smile:
But processing 650 GB is still a long job for it and its 100 Mbit Ethernet module.
At this moment, everything looks good… wait and see.

@sebsan_999 the Odroid sounds interesting; I was tempted to test the Pi 3 too. I was working to flash the WD My Cloud 2 TB device with Ubuntu 16.04 Core but had too many issues with armhf, U-Boot and uImage, which I had no time to focus on and solve. Honestly, I have no experience with U-Boot and uImage, so this was a hurdle too.

I wish to overcome this, as the WD My Cloud is very appropriate for home use, and watts matter to me too, at least for personal use.

Obviously, this is not the case for office use. SoCs cannot compete with the likes of dual quad-core Xeon processors and 64 GB or more of physical RAM, not to mention swap and bandwidth, where all NICs can be bonded, etc. Running costs can be curbed with benchmarking and optimization. I have 1950s running at just 2c per hour at circa 200 W with no artificial enclosure cooling, hence it’s pretty cheap, and I am working to curb this further using pm-utils and server availability policies.

This article makes a good read about my environmental responsibilities :wink:

@fab Sure… it’s not the same world.
For personal use, power cost matters more than processing capacity.
And there, ARM is king.
Take a look at the Odroid-XU4 board. Just impressive: about 2x more powerful than a Pi 3.
I built my NAS myself, DIY-style, with an old Odroid-XU, and I could compare it with a Pi 2. Nothing compares.
Even better than commercial NASes for the cost. And an Ubuntu distro.
But you have to deal with building a case and resolving the odd bug.
Nothing’s perfect.

BTW, the sync is still running… there are conflicts, but nothing abnormal for the moment.
Something weird: I still have 3 files with the message
Error: Downloaded file is empty, although server sent xMB as size
but these files are correct on both sides. Even weirder, one of them is the same file that failed this way in my first test, and during that first test I hadn’t used the mass copy yet; it was transferred by the client app the “classic” way…

@sebsan_999 try to rename the three files; after the sync, rename them back.
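That workaround can be scripted. Below is a toy sketch in a throwaway directory with made-up file names, just to show the rename/rename-back pattern: run the first loop, let the desktop client finish a sync pass, then run the second.

```shell
# Toy demo in a throwaway directory; file names are placeholders,
# not the poster's actual stuck files.
mkdir -p /tmp/resync-demo
cd /tmp/resync-demo
touch photo1.jpg report.pdf notes.txt

# Step 1: rename the stuck files so the client treats them as new uploads.
for f in photo1.jpg report.pdf notes.txt; do
  mv "$f" "$f.resync"
done

# ... let the desktop client finish a sync pass here ...

# Step 2: rename them back to their original names.
for f in *.resync; do
  mv "$f" "${f%.resync}"
done
```

Renaming forces the client to drop its cached metadata for those entries and transfer them again, which is why this often clears the "downloaded file is empty" state.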


@sebsan_999 wow, these SoCs keep progressing. There were issues with Snappy Ubuntu, if I remember correctly; have these been ironed out?