Here is a worst-case example. One of my users logs in to their account in a public place, perhaps at a library, and forgets to log out. A malicious person then deletes all of their files and also removes them from the “Deleted files” folder. At home, the user’s computer is running, and before the user can intervene, it syncs, removing the local copies as well.
My solution was to mount my data directory on my Debian desktop at home using sshfs. Then I could run rsync -a /source/ /target/ in a cron job to back up all user files regularly. I do not want to run this backup job on the server itself because of limited disk space. I do not have port forwarding enabled for SSH, so it is reachable only on my LAN. I can mount the directory, but my SSH user of course lacks the permissions needed to read the data files. I do not want to change the directory permissions on the server. On the server, I tried adding my SSH user to the root group, but in Ubuntu Snappy Core /etc/group is mounted read-only.
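For reference, the setup described above can be sketched as a mount plus a cron entry. The host name and paths here are placeholders, not the actual ones, and this assumes the permissions problem is solved:

```shell
# Mount the server's data directory over sshfs (read-only is enough for a
# backup). Host and paths are hypothetical.
sshfs -o ro,reconnect admin@nc-server:/path/to/nextcloud/data /mnt/ncdata

# Crontab entry: copy everything to a local backup disk every night at 03:00.
# No --delete option, so files removed on the server stay in the backup.
0 3 * * *  rsync -a /mnt/ncdata/ /backup/ncdata/
```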
Another possibility: is there any way to add an additional “deletion” layer? For example, files stay in “Deleted files” for thirty days, but after they are removed from there they actually remain on the server for fifteen more days.
The other possibility would be to mount each user’s files via WebDAV and then run rsync, but that is certainly not ideal.
I appreciate any ideas or input you all may have for a data backup solution for an Ubuntu Snappy Core server install.
I want to express how great it is to have my data on my computer (not on someone else’s server). I very much appreciate this project.
I use rsnapshot on a remote machine. It connects via SSH and then backs up all files with rsync (and keeps several snapshot versions). There are other backup scripts that couple rsync with ssh/scp; that is perhaps a bit more reliable than sshfs.
Do you by chance use this utility with Ubuntu Snappy Core? I am basically unable to grant myself the proper SSH permissions to mount and back up the data. Thanks
Although rsync is not available on Ubuntu Snappy Core, cp of course is. So I plan to set up a systemd service to back up user data regularly using cp -au /source/ /target/. The -u (“update”) flag skips files that are not newer than the copy already in the target, so it does not copy all of the data over every time.
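As a small sanity check of what -u does, here is a throwaway demo in temporary directories (not the actual service paths):

```shell
set -e
demo=$(mktemp -d)
mkdir "$demo/source" "$demo/target"
echo "v1" > "$demo/source/file.txt"

# First run: target is empty, so everything is copied (-a preserves mtimes).
cp -au "$demo/source/." "$demo/target/"

# Make the target copy newer than the source copy.
echo "newer" > "$demo/target/file.txt"

# Second run: -u only copies when the source is newer, so the file is
# skipped and the target keeps its newer content.
cp -au "$demo/source/." "$demo/target/"
cat "$demo/target/file.txt"   # prints "newer"
```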
This is not a “solution,” I think, but rather a partly functional, expensive workaround. What happens if your user creates a mission-critical file and “loses” it in the way you describe in between two such snapshots?
If your users cannot be educated to act responsibly and you need a technical solution for that, you should probably, as a very first step, clearly define the risks to avoid and then look for a technical solution to handle them. If a “regular” backup (usually once or twice a day) is not enough, you could try to operate at the filesystem level; e.g. take a snapshot (with btrfs) every second, or use a filesystem that does not “forget” deleted data, like NILFS. Of course, this creates extremely high system and filesystem load, and there is still the question of what happens to files that are “opened” over the network and not completely written to disk (important in NC) when the snapshot is taken.
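The btrfs route mentioned above could, at its simplest, look like a frequent cron-driven snapshot; the paths are hypothetical, and truly per-second snapshots would need a loop in a service rather than cron:

```shell
# Cron entry: take a read-only btrfs snapshot of the data subvolume every
# minute, named after the current Unix timestamp.
# Note: cron requires % to be escaped as \%.
* * * * *  btrfs subvolume snapshot -r /srv/ncdata "/srv/snapshots/$(date +\%s)"
```

Old snapshots would still need to be pruned regularly, or disk space runs out fast.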
Maybe some education and regular backups could do the job? rsnapshot is a very good tool that can do hourly backups with its standard config; if you modify it, you could probably go down to “minutely.” (If you back up to your home machine, the interval is limited by the time one snapshot takes to finish: if one snapshot (= rsync over ssh) takes 5 minutes and you try to take one every minute, rsnapshot will fail. In the end, your number of snapshots is limited only by your network speed and disk space.)
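An hourly rsnapshot setup could, for example, look like this excerpt of /etc/rsnapshot.conf plus two cron entries (fields in the config must be TAB-separated; host and paths are placeholders):

```shell
# /etc/rsnapshot.conf excerpt -- keep 24 hourly and 7 daily snapshots
snapshot_root	/backup/snapshots/
retain	hourly	24
retain	daily	7
# pull the data over ssh from the (hypothetical) server
backup	root@nc-server:/path/to/data/	nextcloud/

# crontab entries driving the rotation
0 * * * *	/usr/bin/rsnapshot hourly
30 3 * * *	/usr/bin/rsnapshot daily
```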
I think a normal backup as I had mentioned is good enough for now, maybe hourly or every few hours. This is very small scale (helping out a couple of friends). Unfortunately, because I am using Ubuntu Snappy Core, I do not believe rsnapshot is an option: there is no snap package for it. There would also be a permissions issue; you cannot create a root user, so running ssh/rsnapshot as root (for root-owned directories) is not possible. I may switch to a 64-bit (traditional/normal) server OS for my Raspberry Pi 3 when one I want to use becomes available (currently only openSUSE has 64-bit for the RPi, and there is a 2 GB file size limit on the 32-bit RPi OSs). Then I can run a more suitable backup option as you describe. Fortunately, I do have knowledgeable users; I just know that we all can make mistakes sometimes, especially when tired or distracted. I appreciate the suggestions.
Maybe I misunderstood the snap concept; if so, please enlighten me:
What I did was an apt-get install of rsync and rsnapshot (and more). It seems to work so far.
(Nextcloud box on RPi2)
Ubuntu Snappy Core does not use .deb packages or apt/apt-get, only snap packages; that is why I am limited. Here is a good overview: https://askubuntu.com/questions/605066/what-is-snappy-ubuntu-core/605087#answer-605087. With that said, I have had zero issues since setting up my system in early August. The server has worked perfectly.
Then it looks like my NC box is not the Snappy Core you refer to. I can definitely run apt-get; I installed according to the cookbook.