NextcloudPi Update from v1.12.0 to v1.12.7 doesn't work

Trying to update NextcloudPi from v1.12.0 to v1.12.7 doesn't work.
Below is the output from the update attempt - has anyone run into this problem already?

Thanks for the help!

sudo ncp-update

```
Downloading updates
Performing updates
Running nc-autoupdate-nc
automatic Nextcloud updates enabled
Config value squareSizes for app previewgenerator set to 32
Config value widthSizes for app previewgenerator set to 128 256 512
Config value heightSizes for app previewgenerator set to 128 256
System config value jpeg_quality set to string 60
Running unattended-upgrades
Unattended upgrades active: yes (autoreboot true)
--2019-05-26 10:05:15--  (link to packages.sury.org deleted)
Resolving packages.sury.org (packages.sury.org)… 104.31.95.169, 104.31.94.169, 2606:4700:30::681f:5ea9, …
Connecting to packages.sury.org (packages.sury.org)|104.31.95.169|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 1769 (1.7K) [application/octet-stream]
Saving to: ‘/etc/apt/trusted.gpg.d/php.gpg’

/etc/apt/trusted.gp 100%[===================>]   1.73K  --.-KB/s    in 0s

2019-05-26 10:05:16 (4.96 MB/s) - ‘/etc/apt/trusted.gpg.d/php.gpg’ saved [1769/1769]

Running nc-scan-auto
/usr/local/bin/ncp/CONFIG/nc-scan-auto.sh: line 24: npc/files/Documents: division by 0 (error token is "files/Documents")
```

System information

NextCloudPi diagnostics

NextCloudPi version  v1.12.0
NextCloudPi image    NextCloudPi_01-14-19
distribution         Raspbian GNU/Linux 9 \n \l
automount            yes
USB devices          sda
datadir              /media/USBdrive/ncdata
data in SD           no
data filesystem      btrfs
data disk usage      92G/932G
rootfs usage         2.2G/30G
swapfile             /var/swap
dbdir                /var/lib/mysql
Nextcloud check      ok
Nextcloud version    15.0.6.1
HTTPD service        up
PHP service          up
MariaDB service      up
Redis service        up
Postfix service      up
internet check       ok
port check 80        open
port check 443       open
IP                   ***REMOVED SENSITIVE VALUE***
gateway              ***REMOVED SENSITIVE VALUE***
interface            eth0
certificates         ***REMOVED SENSITIVE VALUE***
NAT loopback         no
uptime               4min

Nextcloud configuration

{
    "system": {
        "passwordsalt": "***REMOVED SENSITIVE VALUE***",
        "secret": "***REMOVED SENSITIVE VALUE***",
        "trusted_domains": {
            "0": "localhost",
            "5": "nextcloudpi.local",
            "7": "nextcloudpi",
            "8": "nextcloudpi.lan",
            "1": "***",
            "4": "noip.ddns",
            "11": "****",
            "20": "****",
            "2": "noip.ddns",
            "3": "noip.ddns",
            "21": "***"
        },
        "datadirectory": "***REMOVED SENSITIVE VALUE***",
        "dbtype": "mysql",
        "version": "15.0.6.1",
        "overwrite.cli.url": "https:\/\/noip.ddns\/",
        "dbname": "***REMOVED SENSITIVE VALUE***",
        "dbhost": "***REMOVED SENSITIVE VALUE***",
        "dbport": "",
        "dbtableprefix": "oc_",
        "mysql.utf8mb4": true,
        "dbuser": "***REMOVED SENSITIVE VALUE***",
        "dbpassword": "***REMOVED SENSITIVE VALUE***",
        "installed": true,
        "instanceid": "***REMOVED SENSITIVE VALUE***",
        "memcache.local": "\\OC\\Memcache\\Redis",
        "memcache.locking": "\\OC\\Memcache\\Redis",
        "redis": {
            "host": "***REMOVED SENSITIVE VALUE***",
            "port": 0,
            "timeout": 0,
            "password": "***REMOVED SENSITIVE VALUE***"
        },
        "tempdirectory": "\/media\/USBdrive\/ncdata\/tmp",
        "mail_smtpmode": "sendmail",
        "mail_smtpauthtype": "LOGIN",
        "mail_from_address": "***REMOVED SENSITIVE VALUE***",
        "mail_domain": "***REMOVED SENSITIVE VALUE***",
        "overwriteprotocol": "https",
        "maintenance": false,
        "logfile": "\/media\/USBdrive\/ncdata\/nextcloud.log",
        "loglevel": "2",
        "log_type": "file",
        "htaccess.RewriteBase": "\/",
        "jpeg_quality": "60"
    }
}

HTTPd logs

[Sun May 26 06:25:02.760915 2019] [ssl:warn] [pid 968:tid 1996349680] AH01909: localhost:4443:0 server certificate does NOT include an ID which matches the server name
[Sun May 26 06:25:03.000512 2019] [mpm_event:notice] [pid 968:tid 1996349680] AH00489: Apache/2.4.25 (Raspbian) OpenSSL/1.0.2r configured -- resuming normal operations
[Sun May 26 06:25:03.000630 2019] [core:notice] [pid 968:tid 1996349680] AH00094: Command line: '/usr/sbin/apache2'
[Sun May 26 09:09:45.600708 2019] [proxy_fcgi:error] [pid 27935:tid 1792201776] [client 192.168.178.20:50263] AH01071: Got error 'PHP message: PHP Notice:  Undefined index: app in /var/www/ncp-web/index.php on line 238\nPHP message: PHP Notice:  Undefined index: app in /var/www/ncp-web/index.php on line 244\n'
[Sun May 26 09:16:49.491704 2019] [proxy_fcgi:error] [pid 27934:tid 1741837360] [client 192.168.178.20:50292] AH01071: Got error 'PHP message: PHP Notice:  Undefined index: app in /var/www/ncp-web/index.php on line 238\nPHP message: PHP Notice:  Undefined index: app in /var/www/ncp-web/index.php on line 244\n'
[Sun May 26 09:21:35.511009 2019] [proxy_fcgi:error] [pid 27934:tid 1725035568] [client 192.168.178.20:50381] AH01071: Got error 'PHP message: PHP Notice:  Undefined index: app in /var/www/ncp-web/index.php on line 238\nPHP message: PHP Notice:  Undefined index: app in /var/www/ncp-web/index.php on line 244\n'
[Sun May 26 09:51:52.265987 2019] [proxy_fcgi:error] [pid 27935:tid 1741837360] [client 192.168.178.20:50658] AH01071: Got error 'PHP message: PHP Notice:  Undefined index: app in /var/www/ncp-web/index.php on line 238\nPHP message: PHP Notice:  Undefined index: app in /var/www/ncp-web/index.php on line 244\n'
[Sun May 26 09:53:26.835799 2019] [mpm_event:notice] [pid 968:tid 1996349680] AH00493: SIGUSR1 received.  Doing graceful restart
[Sun May 26 09:53:26.888521 2019] [ssl:warn] [pid 968:tid 1996349680] AH01909: localhost:4443:0 server certificate does NOT include an ID which matches the server name
[Sun May 26 09:53:27.000465 2019] [mpm_event:notice] [pid 968:tid 1996349680] AH00489: Apache/2.4.25 (Raspbian) OpenSSL/1.0.2r configured -- resuming normal operations
[Sun May 26 09:53:27.000582 2019] [core:notice] [pid 968:tid 1996349680] AH00094: Command line: '/usr/sbin/apache2'
[Sun May 26 09:57:34.621550 2019] [proxy_fcgi:error] [pid 1879:tid 1683031088] [client 192.168.178.20:50847] AH01067: Failed to read FastCGI header
[Sun May 26 09:57:34.895552 2019] [mpm_event:notice] [pid 968:tid 1996349680] AH00491: caught SIGTERM, shutting down
[Sun May 26 09:57:47.988887 2019] [ssl:warn] [pid 644:tid 1995436272] AH01909: localhost:4443:0 server certificate does NOT include an ID which matches the server name
[Sun May 26 09:57:49.046789 2019] [ssl:warn] [pid 1118:tid 1995436272] AH01909: localhost:4443:0 server certificate does NOT include an ID which matches the server name
[Sun May 26 09:57:50.003659 2019] [mpm_event:notice] [pid 1118:tid 1995436272] AH00489: Apache/2.4.25 (Raspbian) OpenSSL/1.0.2r configured -- resuming normal operations
[Sun May 26 09:57:50.003844 2019] [core:notice] [pid 1118:tid 1995436272] AH00094: Command line: '/usr/sbin/apache2'

Database logs

2019-05-26  9:57:37 1969828672 [Note] /usr/sbin/mysqld: Shutdown complete

2019-05-26  9:58:38 1989164848 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2019-05-26  9:58:38 1989164848 [Note] InnoDB: The InnoDB memory heap is disabled
2019-05-26  9:58:38 1989164848 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2019-05-26  9:58:38 1989164848 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2019-05-26  9:58:38 1989164848 [Note] InnoDB: Compressed tables use zlib 1.2.8
2019-05-26  9:58:38 1989164848 [Note] InnoDB: Using Linux native AIO
2019-05-26  9:58:38 1989164848 [Note] InnoDB: Using generic crc32 instructions
2019-05-26  9:58:38 1989164848 [Note] InnoDB: Initializing buffer pool, size = 370.0M
2019-05-26  9:58:38 1989164848 [Note] InnoDB: Completed initialization of buffer pool
2019-05-26  9:58:38 1989164848 [Note] InnoDB: Highest supported file format is Barracuda.
2019-05-26  9:58:38 1989164848 [Note] InnoDB: 128 rollback segment(s) are active.
2019-05-26  9:58:38 1989164848 [Note] InnoDB: Waiting for purge to start
2019-05-26  9:58:38 1989164848 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.41-84.1 started; log sequence number 614761519
2019-05-26  9:58:38 1111487296 [Note] InnoDB: Dumping buffer pool(s) not yet started
2019-05-26  9:58:39 1989164848 [Note] Plugin 'FEEDBACK' is disabled.
2019-05-26  9:58:39 1989164848 [Note] Server socket created on IP: '127.0.0.1'.
2019-05-26  9:58:39 1989164848 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.1.37-MariaDB-0+deb9u1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Raspbian 9.0

Nextcloud logs

{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:28+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::step: Repair step: Move .step file of updater to backup location","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:28+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::step: Repair step: Fix potential broken mount points","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:28+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::info: Repair info: No mounts updated","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:28+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::step: Repair step: Repair invalid paths in file cache","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:28+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::step: Repair step: Add log rotate job","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:28+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::step: Repair step: Clear frontend caches","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:28+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::info: Repair info: Image cache cleared","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:30+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::info: Repair info: SCSS cache cleared","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:31+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::info: Repair info: JS cache cleared","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:31+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::step: Repair step: Clear every generated avatar on major updates","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:31+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::step: Repair step: Add preview background cleanup job","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:31+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::step: Repair step: Queue a one-time job to cleanup old backups of the updater","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:31+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::step: Repair step: Repair pending cron jobs","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:31+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::info: Repair info: No need to repair pending cron jobs.","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:31+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Repair::step: Repair step: Extract the vcard uid and store it in the db","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:31+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Updater::startCheckCodeIntegrity: Starting code integrity check...","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:55+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Updater::finishedCheckCodeIntegrity: Finished code integrity check","userAgent":"--","version":"15.0.5.3"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:55+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Updater::updateEnd: Update successful","userAgent":"--","version":"15.0.6.1"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:55+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Updater::maintenanceDisabled: Turned off maintenance mode","userAgent":"--","version":"15.0.6.1"}
{"reqId":"fZxdRjkOsuHgqwRFXrGR","level":1,"time":"2019-05-26T08:14:55+00:00","remoteAddr":"","user":"--","app":"updater","method":"","url":"--","message":"\\OC\\Updater::resetLogLevel: Reset log level to Warning(2)","userAgent":"--","version":"15.0.6.1"}

I think you put something invalid in “SCANINTERVAL” in ncp-web -> nc-scan-auto.
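For anyone else hitting this: bash arithmetic treats an unset variable name as 0, so when SCANINTERVAL contains a path like `npc/files/Documents` instead of a number, an expression such as `$(( 60 / SCANINTERVAL ))` expands to `60 / npc/files/Documents` and the slash becomes a division by the (unset, hence zero) variable `npc` - exactly the "division by 0 (error token is ...)" message above. A minimal sketch of guarding against this (the function name and the exact expression are illustrative assumptions, not taken from nc-scan-auto.sh):

```shell
# Validate the interval before it ever reaches bash arithmetic.
# An unset name inside $(( )) silently evaluates to 0, which is how a
# path value turns into a division-by-zero crash.
scans_per_hour() {
  case "$1" in
    ''|0|*[!0-9]*)
      echo "invalid SCANINTERVAL: $1" >&2
      return 1 ;;
  esac
  echo $(( 60 / $1 ))
}

scans_per_hour 15                     # prints 4
scans_per_hour "npc/files/Documents"  # rejected with an error, no crash
```

Checking the value with a `case` pattern instead of letting `$(( ))` expand it is what turns an opaque arithmetic crash into a readable error message.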


Instead of an interval, the path had been inserted there - I don't know how that happened.
Thanks for the fast reply!