Warning regarding needed bigint conversion after 17.0.1 to 17.0.2 upgrade

Better still, why not make it an optional step as part of a standard upgrade?

That’s explained in the docs. If a table contains a large amount of data, changing the column type will take a while. If the operation times out, or anything else goes wrong, it could damage the database. Run it from the CLI if possible. It’s optional for most people and required for big instances.
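For reference, the CLI flow looks roughly like this — a sketch assuming a typical installation under /var/www/nextcloud with www-data as the web server user; adjust both for your setup:

```shell
# Assumed path; adjust for your installation.
cd /var/www/nextcloud

# Stop serving requests while the columns are altered.
sudo -u www-data php occ maintenance:mode --on

# Convert the filecache-related columns to bigint.
sudo -u www-data php occ db:convert-filecache-bigint

# Bring the instance back online.
sudo -u www-data php occ maintenance:mode --off
```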

The installer could check whether there is a table with a lot of data. That’s what computers are good for, isn’t it?

As an open-source project, Nextcloud accepts pull requests. Feel free to submit your patch with those “run expensive migrations on smaller instances automatically” changes. Thanks :+1:

Sorry, it appears that my babelfish is not well. It can’t make any sense of “those run expensive migrations on smaller instances automatically changes”

Hi,

I am using the nextcloud docker image.
I fixed it by running:

    docker exec --user www-data nextcloud-container-name php occ db:convert-filecache-bigint


I rephrased “The installer could check whether there is a table with a lot of data. That’s what computers are good for, isn’t it?” to “run expensive migrations on smaller instances automatically”.

That’s actually what you are suggesting: check whether table X has a certain amount of data, compare that with the server configuration (e.g. the script execution time), and decide whether to run the migration or show a warning. We suggest shutting down the webserver while running the migration, which is probably a technical challenge if you’re using the web-based updater.
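A minimal sketch of such a size check, assuming the default `oc_` table prefix and a made-up conversion rate — the real rate depends entirely on the hardware and database engine, so every number here is illustrative:

```shell
#!/bin/sh
# Hypothetical heuristic: estimate conversion time from the filecache row count.
# ROWS would normally come from the database, e.g.:
#   mysql -N -e "SELECT COUNT(*) FROM oc_filecache" nextcloud
ROWS=120000000
RATE=50000          # assumed rows converted per second; purely illustrative
LIMIT=30            # typical max_execution_time for web requests, in seconds

EST=$((ROWS / RATE))
if [ "$EST" -lt "$LIMIT" ]; then
    echo "small instance: run the migration automatically"
else
    echo "estimated ${EST}s: show the warning and require a manual CLI run"
fi
```

With the numbers above the estimate is 2400 seconds, so this instance would get the warning rather than an automatic migration.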

Nextcloud is open source. Feel free to dig into the code and submit a patch. I’m happy to look into it.

To be fair, while it does show the error until you run the occ command, the instance does continue to work.

What bothers me is that, in my case, the command warned me that it could take hours, and then completed in a fraction of a second. Really?

    $ sudo -u www-data php occ db:convert-filecache-bigint
    Following columns will be updated:

    * mounts.storage_id
    * mounts.root_id
    * mounts.mount_id

    This can take up to hours, depending on the number of files in your instance!
    Continue with the conversion (y/n)? [n] y
    $ sudo -u www-data php occ files:scan --all
    Starting scan for user 1 out of  ......
    +---------+--------+--------------+
    | Folders | Files  | Elapsed time |
    +---------+--------+--------------+
    | 6443    | 142811 | 00:01:12     |
    +---------+--------+--------------+
    $

I guess 1.2TB of data is not a lot?

I guess 1.2TB of data is not a lot?

No.

What bothers me is that, in my case, the command warned me that it could take hours, and then completed in a fraction of a second.

It’s hard to predict the execution time. Feel free to do some research on this topic and submit a patch :wink:


Hi

I am also having the same issue and running an NCP docker image. Where exactly did you run this command?

Thanks

Hi,

I have never looked at NCP, but if it’s just Nextcloud for ARM, run this on your Pi’s host OS, as root or any other user that has permission to access the docker socket.

kind regards

P.S.: you have to replace nextcloud-container-name with your actual container name xD

Hello,
If anyone knows how to do this with an SQL command in phpMyAdmin, I am interested.

Thanks in advance :kissing_heart::hugs:

Hello,
I run nextcloud in a shared webhosting environment and have no ssh access to the server.
Is there a way to trigger the conversion from php or SQL?

edit:
I just found the occweb app, but it seems not to be compatible with 17.0.2; it shows a warning.
Are there other ways to get the columns converted?

edit2:
I was brave and used occweb :wink:
Worked like a charm. Warnings are gone.

Best,
Lars


Hey Guys,

Where do you run this command? I am using Unraid, and I ran it on the Unraid host, but nothing happened. I also tried to run it in the app console, and I get this:

    sudo: unknown user: www-data
    sudo: unable to initialize policy plugin

Any help would be amazing.

Thank you guys.

Why can’t there be an update without having to do some extra stuff? :frowning:
I run my NCs on shared hosting; there is no chance of running those commands.
Is there any other way of fixing it?


I’m seeing the same warning on my instance. I’m running NextcloudPi on a Raspberry Pi. When I enter ‘sudo -u www-data php occ db:convert-filecache-bigint’ I get ‘Could not open input file: occ’.
What is the right command for NextcloudPi? And is it enough to put the instance in maintenance mode?
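“Could not open input file: occ” usually just means occ is not in the directory you are running the command from. On NextcloudPi the web root is typically /var/www/nextcloud (an assumption; verify on your system), so something like:

```shell
# Run occ from the directory that actually contains it
# (/var/www/nextcloud is the usual NextcloudPi location; verify on your system).
cd /var/www/nextcloud
sudo -u www-data php occ db:convert-filecache-bigint

# Alternatively, give the full path to occ:
sudo -u www-data php /var/www/nextcloud/occ db:convert-filecache-bigint
```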

And how am I going to run this command on CentOS 7? Can someone please help me with a compatible command for CentOS 7?

Thanks!

I have the same question. Do you have a solution?

See JanDragon’s post above. That works on CentOS 7.

As I had the same problem (no SSH access), I used the option to create a cron job in my hoster’s backend and called the script from there, in my case:

    /usr/bin/php /usr/www/users/xxx/cloud.xxx.com/occ db:convert-filecache-bigint --no-interaction

The --no-interaction flag is necessary; otherwise the script would “ask” to continue.

It’s probably also possible to create a temporary PHP script to execute “occ” from a system() call, maybe like this:

    <?php
    // Adjust the path to match your own occ location.
    $cmd = "/usr/bin/php /usr/www/users/xxx/cloud.xxx.com/occ db:convert-filecache-bigint --no-interaction";
    system($cmd, $return_value);
    ($return_value == 0) or die("returned an error: $cmd");
    ?>