I might not have been clear: I meant “on the same 'nix server”, not in the same file store as a Nextcloud server. Nextcloud (or any general file-synchronization agent) should NEVER be used to synchronize git repos, because it does not synchronize all client repos simultaneously.
For example, if I commit to a local repo in my ~/Nextcloud directory, another developer could be committing to their local repo at about the same time, and there will be a race between our Nextcloud clients to see whose commit gets synchronized to the Nextcloud server first.
The best outcome is that Nextcloud notifies the loser that there has been a sync collision (a file was modified locally at the same time it was modified on the server), and that the collision needs to be corrected manually. The worst outcome is that the loser loses their commit entirely (their local repo is overwritten by the winning user's copy on the server), or the server repo is corrupted with some files from the first user and some from the second. Uggg! This is NOT what you want to have happen.
Only a git push command should be used to update an upstream server. Git locks each ref it is updating, so a push either completes as an atomic action or is rejected outright. This is the only way to preserve repo integrity!
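To see the difference, here is a tiny local demo of the push-to-a-bare-repo workflow. No ssh or server is involved, and all names are throwaway, so it is safe to run in a scratch directory:

```shell
cd "$(mktemp -d)"                     # scratch directory
git init --bare server.git            # stand-in for the upstream repo
git clone "$PWD/server.git" work      # (git warns the repo is empty; that's fine)
cd work
git -c user.name=Demo -c user.email=demo@example.com \
    commit --allow-empty -m "first commit"
git push origin HEAD                  # atomic ref update on the "server"
git -C ../server.git log --oneline    # the commit is now upstream
```

The real setup below is exactly this, with the bare repo living on a remote machine reached over ssh instead of a local path.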
Here is my setup; note that it requires ssh login access to the server.
1. Create a login user; the git repos will be stored in their home directory. You can put more than one git repo in the user directory, but all the repos will have common access (i.e. access is granted to the user account, not to the individual repos in the directory).
2. Create a public/private ssh key pair to access the git repos with (more correctly, to access the user account that holds the repos), and add the public key to the /home/username/.ssh/authorized_keys file.
You can use a single key pair and give the private key to each real user who needs access, or better, have each real user generate their own key pair and add all their public keys to the authorized_keys file (this way you can revoke a single user's access simply by removing their public key from the file).
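For step 2, each developer can generate a dedicated key pair on their workstation. The filename, comment, and host alias below are just examples, not required names:

```shell
# Generate a key pair; you'll be prompted for an (optional) passphrase.
ssh-keygen -t ed25519 -f ~/.ssh/gitserver_key -C "alice@workstation"

# Send ~/.ssh/gitserver_key.pub to whoever administers the server; they
# append it to /home/username/.ssh/authorized_keys.

# Optional: an alias in ~/.ssh/config so git picks up the right key:
#   Host gitserver
#       HostName git.example.com
#       User username
#       IdentityFile ~/.ssh/gitserver_key
```

With the alias in place, clone URLs shorten to the form `gitserver:repo_name.git`.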
3. Create a bare repo in the user directory, e.g. `sudo -u username git init --bare repo_name.git`
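The server side of steps 1-3 might look roughly like this. The commands are Debian/Ubuntu style, and "gituser", "dev_key.pub", and "project.git" are placeholder names; adjust for your distro and setup:

```shell
# 1. Dedicated account: no password login, ssh keys only.
sudo adduser --disabled-password --gecos "" gituser

# 2. Install a developer's public key (dev_key.pub is whatever they sent you),
#    with the permissions sshd insists on.
sudo install -d -m 700 -o gituser -g gituser /home/gituser/.ssh
cat dev_key.pub | sudo tee -a /home/gituser/.ssh/authorized_keys >/dev/null
sudo chown gituser:gituser /home/gituser/.ssh/authorized_keys
sudo chmod 600 /home/gituser/.ssh/authorized_keys

# 3. The bare repo itself, owned by the git account.
sudo -u gituser git init --bare /home/gituser/project.git
```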
4. Now you have two choices, depending on whether you have started writing code yet.
4a. If you have not started development yet, commit a README.md file from the server so the repo has at least one commit. Then from your dev workstation, clone the repo using the ssh protocol. After that you write code, commit locally, and push your commits to the upstream origin repo as you would with any other cloned project.
4b. If you have already started writing code (i.e. you already have a local repo with commits), add an `origin` (or `upstream`) ssh-protocol remote to your local repo and push it to the empty repo on the server.
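Either way, the client side is plain git over ssh. The hostname, account, and repo names below are placeholders matching the steps above:

```shell
# 4a: starting fresh: clone the (nearly) empty repo and work normally.
git clone ssh://username@git.example.com/home/username/repo_name.git

# 4b: you already have commits: point your existing repo at the server.
cd existing_project
git remote add origin ssh://username@git.example.com/home/username/repo_name.git
git push -u origin HEAD

# The scp-style form also works; its path is relative to the user's home:
#   git clone username@git.example.com:repo_name.git
```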
This approach is easy to set up and has worked well for me. If I needed something more complex, I would probably use GitHub - unless I really wanted full control, in which case I would host GitLab on the server.
I thought I had written a blog post I could refer you to, but it seems not. There is a lot of information on the net (google something like “use ssh to host a remote git repo”), but feel free to DM me if you are having trouble.