Warning regarding BigInt conversion needed after 17.0.1 to 17.0.2 upgrade

I just upgraded (successfully!) from 17.0.1 to 17.0.2. Now I get the BigInt warning below. It looks like I have to run “occ db:convert-filecache-bigint” to convert the database. The instructions seem to refer to this being relevant to upgrades from Nextcloud 12 to 13. I began using Nextcloud at v16, so I am a little confused about why this would be necessary now.

Some columns in the database are missing a conversion to big int. Due to the fact that changing column types on big tables could take some time they were not changed automatically. By running ‘occ db:convert-filecache-bigint’ those pending changes could be applied manually. This operation needs to be made while the instance is offline. For further details read the documentation page about this.

  • mounts.storage_id
  • mounts.root_id
  • mounts.mount_id

Run occ db:convert-filecache-bigint :wink:

why this would be necessary now?


You have to log in via SSH and run this command:

sudo -u www-data php occ db:convert-filecache-bigint

Then the message should be cleared. Note that you have to take the Nextcloud instance offline while you run this command.
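
If it helps, here is a minimal sketch of the full offline sequence, assuming a typical setup where the web server runs as www-data and occ lives in the Nextcloud root directory; maintenance mode keeps clients out while the columns are converted:

    # put the instance into maintenance mode so nothing writes to the database
    sudo -u www-data php occ maintenance:mode --on
    # convert the pending columns to BIGINT (answer "y" at the prompt)
    sudo -u www-data php occ db:convert-filecache-bigint
    # bring the instance back online
    sudo -u www-data php occ maintenance:mode --off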

JanDragon

Thanks. Yes, as I stated in my OP, I know that I need to run that command, which I have now run successfully. I guess my question is why, since most of the references to needing this are for much older versions (12 to 13), not for a minor version upgrade from 17.0.1 to 17.0.2. What changed in the latest upgrade, or was this something that should have happened previously but was missed for some reason?

From time to time other columns are changed to BigInt.

OK, makes sense. It would probably be good to proactively warn people in the upgrade documentation that this will likely be necessary, instead of them discovering it after the fact. That would make it feel like a routine action rather than a potential error…

Better still, why not make it an optional step as part of a standard upgrade?

That’s explained in the docs. If a table contains a large amount of data, changing the column type will take a while. If that times out, or anything else goes wrong, it could damage the database. Run it from the CLI if possible. It’s optional for most people; it’s required for big instances.

The installer could check whether there is a table with a lot of data. That’s what computers are good for, isn’t it?

As an open source project, Nextcloud accepts pull requests. Feel free to submit your patch with those “run expensive migrations on smaller instances automatically” changes. Thanks :+1:

Sorry, it appears that my babelfish is not well. It can’t make any sense of “those run expensive migrations on smaller instances automatically changes”

Hi,

I am using the Nextcloud Docker image.
I fixed it by running:

docker exec --user www-data nextcloud-container-name php occ db:convert-filecache-bigint
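
If you also want to take the instance offline for the conversion, as the warning suggests, the same docker exec pattern works for maintenance mode; nextcloud-container-name is a placeholder for your actual container name:

    # enable maintenance mode, convert, then disable it again
    docker exec --user www-data nextcloud-container-name php occ maintenance:mode --on
    docker exec --user www-data nextcloud-container-name php occ db:convert-filecache-bigint
    docker exec --user www-data nextcloud-container-name php occ maintenance:mode --off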


I rephrased “The installer could check whether there is a table with a lot of data. That’s what computers are good for, isn’t it?” to “run expensive migrations on smaller instances automatically”.

That’s actually what you are suggesting: check whether table X has a certain amount of data, compare it with the server configuration (e.g. script execution time), and decide whether to run the migration or show a warning. We suggest shutting down the webserver while running the migration, which is probably a technical challenge if you’re using the web-based updater.

Nextcloud is open source. Feel free to dig into the code and submit a patch. I’m happy to look into it.
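
A rough sketch of the kind of size check being discussed, purely illustrative: it assumes MySQL/MariaDB, the default oc_ table prefix, a database named nextcloud, credentials available via ~/.my.cnf, and an arbitrary threshold of one million rows:

    # count the filecache rows and report whether the conversion is likely quick
    ROWS=$(mysql -N -B -e "SELECT COUNT(*) FROM oc_filecache;" nextcloud)
    if [ "$ROWS" -lt 1000000 ]; then
        echo "Small filecache ($ROWS rows): the conversion should finish in seconds."
    else
        echo "Large filecache ($ROWS rows): plan a maintenance window before converting."
    fi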

To be fair, while it does show the warning until you run the occ command, the instance also does continue to work.

What bothers me is that, in my case, the command warned me that it could take hours, and then completed in a fraction of a second. Really?

    $ sudo -u www-data php occ db:convert-filecache-bigint
    Following columns will be updated:

    * mounts.storage_id
    * mounts.root_id
    * mounts.mount_id

    This can take up to hours, depending on the number of files in your instance!
    Continue with the conversion (y/n)? [n] y
    $ sudo -u www-data php occ files:scan --all
    Starting scan for user 1 out of  ......
    +---------+--------+--------------+
    | Folders | Files  | Elapsed time |
    +---------+--------+--------------+
    | 6443    | 142811 | 00:01:12     |
    +---------+--------+--------------+
    $

I guess 1.2TB of data is not a lot?

I guess 1.2TB of data is not a lot?

No.

What bothers me is that, in my case, the command warned me that it could take hours, and then completed in a fraction of a second.

It’s hard to predict the execution time. Feel free to do some research on this topic and submit a patch :wink:


Hi

I am having the same issue and am running an NCP Docker image. Where exactly did you run this command?

Thanks

Hi,

I have never looked at NCP, but if it’s just Nextcloud for ARM, simply run this on your Pi’s host OS, as root or as any other user that has permission to access the Docker socket.

kind regards

P.S. You have to replace nextcloud-container-name with your actual container name xD
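
If you are not sure what your container is called, listing the running containers shows it; this is plain Docker, nothing Nextcloud-specific, and the NAMES column holds the value to substitute:

    # list running containers with their names, images and status
    docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"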

Hello,
If anyone knows how to do this with an SQL command in phpMyAdmin, I am interested.

Thanks in advance :kissing_heart::hugs:
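
For what it’s worth, the occ command essentially widens those columns to BIGINT, so in principle the same change can be made with plain SQL, e.g. pasted into phpMyAdmin’s SQL tab. This is an untested sketch only, assuming MySQL/MariaDB, the default oc_ table prefix and a database named nextcloud; check the existing definition first so that nullability and defaults stay identical, back up the database, and put the instance into maintenance mode before touching the schema:

    # inspect the current definition so that only the column type changes
    mysql nextcloud -e "SHOW CREATE TABLE oc_mounts\G"
    # example for one column - mirror your existing definition, changing only
    # the integer type to BIGINT (repeat for root_id and mount_id)
    mysql nextcloud -e "ALTER TABLE oc_mounts MODIFY storage_id BIGINT NOT NULL;"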

Hello,
I run Nextcloud in a shared web hosting environment and have no SSH access to the server.
Is there a way to trigger the conversion from PHP or SQL?

edit:
I just found the occweb app, but it does not seem to be compatible with 17.0.2; it shows a warning.
Are there other ways to get the columns converted?

edit2:
I was brave and used occweb :wink:
Worked like a charm. Warnings are gone.

Best,
Lars


Hey Guys,

Where do you run this command? I am using Unraid, and when I ran it there nothing happened. I also tried to run it in the app console and got this:
sudo: unknown user: www-data
sudo: unable to initialize policy plugin
Any help would be amazing.

Thank you guys.
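
In case it helps anyone who lands here: that error suggests the command was run inside a container console that has neither sudo nor a www-data user. If the container is based on the official nextcloud image, the docker exec form shown earlier in the thread should work from the Unraid host terminal; other images (the linuxserver one, for example) may run occ as a different user and from a different path, so check that image’s documentation.

    # run from the Unraid host terminal, not the container's own console;
    # "nextcloud" is a placeholder - substitute your container's name from docker ps
    docker exec --user www-data nextcloud php occ db:convert-filecache-bigint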