Where is the db created?

Hi! I’ve downloaded the Nextcloud server code from GitHub but can’t really figure out how the database and its tables are created. Could someone point me in the right direction?

Maybe you will like these installation guides:

Apache2, MariaDB, Nextcloud on Debian 11
Apache2, MariaDB, Nextcloud on Ubuntu 22.04 LTS

There you will also find info on installing and doing the first configuration of MariaDB. The Nextcloud installer then uses the configured database user / password to create the tables, among other things.

Have you installed Apache2, PHP and MariaDB?

Hi! Sorry, I was probably unclear about what I’m trying to do. I’m debugging a potential issue in Nextcloud, so I would like to review how the tables are created during the installation process.

I’ve tried grepping for “create table” and searching various install files, but I am unable to locate any schema describing the tables that are created on install. I have manually connected to my db and looked at the tables, so I can see the potential issue, but I would like to dig a bit deeper before making some sort of report or pull request.

What is the name of the table where you suspect the problem is located?

Most tables are created by apps’ migrations. You will not find any CREATE TABLE statements; they are abstracted away by Doctrine DBAL, which is used by the migrations of the core or of the app responsible for the table.
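To illustrate why grepping for “CREATE TABLE” finds nothing: a migration declares columns through a schema-builder API, and the DBAL layer only generates the concrete SQL at run time for the target database. Below is a minimal sketch of that pattern in Python; the class and function names are hypothetical, and Nextcloud’s real migrations are PHP classes using Doctrine DBAL’s schema objects.

```python
# Minimal sketch of a schema-abstraction layer, in the spirit of
# Doctrine DBAL as used by Nextcloud migrations. All names here are
# hypothetical illustrations, not Nextcloud's actual API.

class Table:
    def __init__(self, name):
        self.name = name
        self.columns = []

    def add_column(self, name, col_type, length=None, notnull=True):
        # The migration only records an abstract column definition.
        self.columns.append((name, col_type, length, notnull))

def to_mysql_ddl(table):
    """The literal CREATE TABLE text is generated only at this point,
    per database platform -- it never appears in the migration source."""
    cols = []
    for name, col_type, length, notnull in table.columns:
        sql_type = f"VARCHAR({length})" if col_type == "string" else "BIGINT"
        cols.append(f"`{name}` {sql_type}" + (" NOT NULL" if notnull else ""))
    return f"CREATE TABLE `{table.name}` (" + ", ".join(cols) + ")"

# A migration declares the schema abstractly:
t = Table("oc_files_trash")
t.add_column("id", "string", length=250)
t.add_column("location", "string", length=512)
print(to_mysql_ddl(t))
```

The point is that the column sizes you see in `describe` come from such abstract declarations in a migration class, which is where you would look to change them.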

I have a use case where I sync a file tree with rather long paths (both directory names and filenames are long). I was getting errors as I was deleting these synced files. Looking closer at the problem, I noticed that the filename and path columns of the oc_files_trash table were way too small for me:

MariaDB [nextcloud]> describe oc_files_trash;
| Field     | Type         | Null | Key | Default | Extra          |
| auto_id   | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| id        | varchar(250) | NO   | MUL |         |                |
| user      | varchar(64)  | NO   | MUL |         |                |
| timestamp | varchar(12)  | NO   | MUL |         |                |
| location  | varchar(512) | NO   |     |         |                |
| type      | varchar(4)   | YES  |     | NULL    |                |
| mime      | varchar(255) | YES  |     | NULL    |                |
7 rows in set (0.003 sec)

It seems the id column holds the filename (250 characters) and the location column holds the path (512 characters). My instance kept failing the delete sync since filenames and/or paths were too long to be stored in this table. When I turned off the app “Deleted files” in the Apps menu, the delete-sync problems stopped, presumably because files no longer ended up in oc_files_trash as part of the deletion process.

The oc_filecache table (which I assume holds the index of all synced files) has a path column which holds 4,000 characters, big enough to allow my long paths; hence file syncing succeeded when information was written to this table.

I think the id and location limits in the oc_files_trash table need to be increased in order to accommodate longer paths/filenames. The exact column sizes might be a bit tricky to get right, though, since both tables store paths/filenames relative to the install dir plus the user home dir, so it might not be a simple matter of extending the column sizes to 4096 (Linux’s defined PATH_MAX): the underlying file-access path could, in some situations, exceed the allowed path length. However, I’m not sure, since I haven’t studied the code. Depending on how this is written, the column length might need to be offset against the install base path length and also take the username length into account somehow.
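The failure mode described above can be reproduced on paper with the column sizes from the `describe` output. This hypothetical helper is just arithmetic on string lengths, not Nextcloud code, and it treats varchar lengths as character counts:

```python
# Column sizes taken from the `describe oc_files_trash` output above.
ID_MAX = 250        # varchar(250) -- holds the filename
LOCATION_MAX = 512  # varchar(512) -- holds the path

def fits_trash_table(filename: str, location: str) -> bool:
    """Hypothetical check: True if both values fit in their columns."""
    return len(filename) <= ID_MAX and len(location) <= LOCATION_MAX

# A deeply nested tree easily exceeds 512 characters, even though the
# oc_filecache path column (varchar 4000) would still accept the path:
long_location = "/".join(["directory-with-a-rather-long-name"] * 20)
print(len(long_location))                            # → 679
print(fits_trash_table("report.txt", long_location)) # → False
```

So any tree roughly 16+ levels deep with ~33-character directory names already overflows the location column, which matches the delete-sync failures described above.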

It might also be beneficial to investigate other Unix flavors’ PATH_MAX and have the db installer use the allowed maximum values for OSes other than Linux. I also think it would be beneficial to align the maximum path lengths between the two tables discussed here, oc_files_trash and oc_filecache.
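For reference, on Unix-like systems these limits can be queried at run time rather than hard-coded per OS. A small Python sketch (not Nextcloud code; `os.pathconf` is unavailable on Windows, and the values depend on the filesystem being queried):

```python
import os

# PATH_MAX / NAME_MAX are OS- and filesystem-dependent; on Linux,
# PATH_MAX is typically 4096 and NAME_MAX is typically 255.
try:
    path_max = os.pathconf("/", "PC_PATH_MAX")  # max length of a full path
    name_max = os.pathconf("/", "PC_NAME_MAX")  # max length of one component
    print(f"PATH_MAX={path_max}, NAME_MAX={name_max}")
except (OSError, ValueError, AttributeError):
    # os.pathconf does not exist on Windows
    print("pathconf not supported on this platform")
```

Note that NAME_MAX (one path component) maps more closely to a filename column, while PATH_MAX maps to a path column like location.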

Hello. Just a quick answer: the DB table is created here:

@protohuf, did you manage to find the appropriate information in the meantime? If you need more concrete information/hints from me, please describe your problem more explicitly.

Just my 50 ct: keep in mind that the server can also run on a Windows OS, and most Windows file systems are much more restricted with regard to file name length. I have already run into this issue with a git repository. I do not know what the motivations for these limits were.

Apart from that, I suggest that you raise this on the issue tracker, as this is an implementation detail as far as I can see.

Hi! Sorry for the late reply. Yes, thanks for the reply! This is exactly what I needed!
