Recommended cloning method for Nextcloud Server Environment Setup

Hi,
Like many people, I am trying to set up a non-Docker NC server environment through Git, and I saw that the documentation says git clone .. can be used to get the source files.
However, I would like to know if, instead of cloning all branches (quite big, about 4 GB), I can use git clone -b to get a specific version's source files.

Thanks

This is a pure Git question, but I will try to answer as best I can.

In general, with git clone -b stable26 .. all that changes is that the stable26 branch gets checked out instead of the upstream HEAD (which is master). You still have the 4 GB download.
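
For illustration, a minimal sketch (assuming the upstream repository at https://github.com/nextcloud/server): -b only changes which branch ends up checked out, not how much is transferred.

    # checks out stable26 directly, but still transfers the complete history
    git clone -b stable26 https://github.com/nextcloud/server.git
    cd server
    du -sh .git    # the object store is still roughly the full ~4 GB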

In general, there are three options here (each one is sketched as a command right after this list):

  1. Just do a git clone
  2. Do a git clone --depth 1
  3. Do a git clone --filter=blob:none (if your git is recent enough; see also my note on fetching below)
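
To make the list concrete, here is a rough sketch of the three commands, again assuming the upstream repository at https://github.com/nextcloud/server; add -b stable26 to any of them if you want a specific version branch:

    # 1. full clone: complete history, largest download
    git clone https://github.com/nextcloud/server.git

    # 2. shallow clone: only the latest commit, smallest download
    git clone --depth 1 https://github.com/nextcloud/server.git

    # 3. partial (blobless) clone: full history, file contents fetched on demand
    git clone --filter=blob:none https://github.com/nextcloud/server.git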

The first option clones the complete repository with its full history and takes gigabytes of storage, as you know by now.

Option 2 throws away the history. Your local repository then looks as if only a single commit had ever been made, containing the current state of the repo. This at least partially blocks you from fetching or merging other branches, because you threw away information (the history). Of all the options, this one involves the least data transfer.

For recent versions of git, the manual suggests avoiding this option except for rare cases, namely when you are not reusing the repo. An example would be a CI/CD pipeline where you clone the server just once and throw the clone away after the tests have run; no commits or other git actions happen there. In this CI/CD case, a depth of 1 (or 2 in some cases) might be a good idea.
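
For illustration, a throwaway CI checkout might look like this (just a sketch; the branch and directory names are only examples):

    # one branch, one commit: enough to run the tests once
    git clone --depth 1 -b stable26 https://github.com/nextcloud/server.git server-ci
    cd server-ci
    # ... run the tests, then simply delete the directory again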

The last option, 3, was introduced to git later. It downloads the history, but without the actual file contents. So git knows the complete history and the names of all files; as soon as you need to access one of the files, it is downloaded (and stored locally as well). If your clone is rather old and you need to update it, you can add the same filter to the fetch command.
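
As a sketch, updating such a clone with the same filter could look like this (the --filter option on git fetch also needs a reasonably recent git):

    # inside an existing blobless clone: update and keep the same partial-clone filter
    git fetch --filter=blob:none origin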

Since git knows the whole history, you can use the repository normally. From time to time (when you go back in history), data needs to be fetched from the server.
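
For example (again only a sketch, with an older branch name as an example): browsing the history works from the local metadata alone, while checking out older content triggers on-demand downloads.

    # works from the locally available commit metadata alone
    git log --oneline origin/stable25

    # needs the old file contents, so the missing blobs are fetched on demand
    git checkout stable25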

The storage savings are significant, while the clone will be slower overall than a shallow clone, because the complete history needs to be visited and packed. I did some testing in the past that might at least give you an idea of the proportions: Enhance the unshallow functionality · Issue #130 · juliushaertl/nextcloud-docker-dev · GitHub

I hope this helps with your question. If not, feel free to ask!
