[Solved] NC 18.0.1 - default timeout limit for app download still too tight / override is being ignored!

Support intro

Sorry to hear you’re facing problems :slightly_frowning_face:

help.nextcloud.com is for home/non-enterprise users. If you’re running a business, paid support can be accessed via portal.nextcloud.com where we can ensure your business keeps running smoothly.

In order to help you as quickly as possible, before clicking Create Topic please provide as much of the below as you can. Feel free to use a pastebin service for logs, otherwise either indent short log examples with four spaces:

example

Or for longer, use three backticks above and below the code snippet:

longer
example
here

Some or all of the below information will be requested if it isn’t supplied; for fastest response please provide as much as you can :heart:

Nextcloud version (eg, 12.0.2): 18.0.1
Operating system and version (eg, Ubuntu 17.04): Ubuntu 18.04.4
Apache or nginx version (eg, Apache 2.4.25): Apache 2.4.29
PHP version (eg, 7.1): 7.3

The issue you are facing:
When downloading the Community Document Server app under NC 18.0.0, the default timeout limit was too low (30 s). I raised the timeout in Client.php to 300 s (RequestOptions::TIMEOUT => 300,) and that worked fine for me.

Now with NC 18.0.1 the default timeout limit has apparently been raised. Unfortunately, it is still too low for the average consumer’s internet connection. NC again returns this error message:

cURL error 28: Operation timed out after 120000 milliseconds with 143464900 out of 315849196 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)

So I changed the timeout limit in Client.php to 300 s again, but this time the change seems to be ignored by NC 18.0.1: it still sticks to the 120 s limit and returns the error message shown above.
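Incidentally, the numbers in the error message let you estimate the timeout you actually need: about 143 MB arrived in 120 s (roughly 1.2 MB/s), so the full ~316 MB archive would take around 265 s on that link, which is why 300 s is just barely enough. You can measure the effective download speed yourself with curl; this is a sketch, and APP_URL is a placeholder for the real release tarball URL from the error message or app store:

```shell
# Measure how fast the app archive downloads from this server.
# APP_URL is a placeholder - substitute the real release URL from the cURL error.
APP_URL="https://example.com/app-release.tar.gz"
curl -L -s -o /dev/null -w 'got %{size_download} bytes in %{time_total}s\n' "$APP_URL"
```

Divide the archive size by the measured rate to get a sensible timeout value, then add some headroom.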

Is this the first time you’ve seen this error? (Y/N): N

Steps to replicate it:

  1. see above

The output of your Nextcloud log in Admin > Logging:

PASTE HERE

The output of your config.php file in /path/to/nextcloud (make sure you remove any identifiable information!):

PASTE HERE

The output of your Apache/nginx/system log in /var/log/____:

PASTE HERE

What is the solution to this issue? The topic is marked as “solved” but there is no solution visible.


My solution was to carry the server to a place with higher bandwidth. While this issue is now solved for me personally I guess it’s still open for other people.

That is not much of a solution though, our server has 10 Gb/s unmetered fiber connection, yet it still gives this error.

It’s a workaround, but for a real fix we would need to change the code. Check this with the developers on GitHub.

I was able to increase RequestOptions::TIMEOUT to 300 seconds after 90 and 120 failed. It worked, and I am now creating and editing documents in the browser. Debian and NC 18.

Hi everyone,

I came across the same issue on NC 19 while installing Collabora Online - Built-in CODE Server. The problem is curl’s timeout setting when installing apps, but this timeout is not defined in Client.php (RequestOptions::TIMEOUT is not the variable to modify).

I found my solution by changing the timeout that curl is invoked with in [path to nextcloud]/lib/private/Installer.php. In that file you can modify the timeout; I set it to 300 (seconds).

// Download the release
$tempFile = $this->tempManager->getTemporaryFile('.tar.gz');
$timeout = $this->isCLI ? 0 : 120; // change 120 to e.g. 300 for slow connections
$client = $this->clientService->newClient();
$client->get($app['releases'][0]['download'], ['save_to' => $tempFile, 'timeout' => $timeout]);

Hope it helps.
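Worth noting from the snippet above: the timeout only applies when the installer is not running from the command line ($this->isCLI ? 0 : 120 means no timeout at all in CLI mode). So installing the app via occ should sidestep the limit without patching any file. A sketch, assuming a standard setup where the web server user is www-data and the built-in CODE server’s app id is richdocumentscode (check the exact id in your app store URL before running this):

```shell
# occ runs PHP in CLI mode, where Installer.php sets the timeout to 0 (unlimited),
# so large app downloads are not cut off after 120 s.
cd /var/www/nextcloud
sudo -u www-data php occ app:install richdocumentscode
```

This also survives upgrades, since nothing in the Nextcloud source is modified.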


As of today on NC 19 this is still an issue, for the exact same scenario of downloading the Collabora Online CODE app. Sad that I have clicked to download several times, and probably fetched many GB in several attempts, only to see them all fail after 120 seconds :frowning:

The issue is not that the receiving end has too low bandwidth, but that the sending side is maxing out.

So no, this issue can hardly be regarded as solved.


Not solved. Still a problem in 20.0.4.

Try changing the timeout limit to 1200 seconds in /var/www/nextcloud/lib/private/Installer.php:

$timeout = $this->isCLI ? 0 : 1200;

Of course it needs to be reapplied after every upgrade, because the file is overwritten during the upgrade.
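To make reapplying the change after each upgrade less tedious, the edit can be scripted. A sketch, assuming the install lives in /var/www/nextcloud and that the line in your version still reads exactly as quoted above:

```shell
# Patch Installer.php to raise the non-CLI app-download timeout from 120 s to 1200 s.
INSTALLER=/var/www/nextcloud/lib/private/Installer.php
sudo sed -i 's/isCLI ? 0 : 120;/isCLI ? 0 : 1200;/' "$INSTALLER"
grep 'isCLI' "$INSTALLER"   # verify: the line should now end in "? 0 : 1200;"
```

If the grep still shows 120, the source line has changed in your release and the sed pattern needs adjusting.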


Maybe the problem is not the timeout itself. Just an idea, because I had similar problems and also tried to solve them by adjusting the timeouts.
If your ISP is DTAG/Telekom, the issue might be described here: https://telekomhilft.telekom.de/t5/Telefonie-Internet/Amazon-AWS-S3-Github-downloads-sehr-langsam-nicht-nutzbar/td-p/4910937/page/120
A short summary: peering/routing to GitHub/AWS, where the apps are hosted, seems to be “quirky”.

I know this is an old topic, but I encountered the same issue today. I can confirm that this PHP source code change, adding another ‘0’ to the ‘1200’ second timeout limit, helped in my case.

Please do restart the server or the web service (Apache or nginx) for the timeout limit change to take effect.

Finally, someone who tells us which file to edit! In my case, no reboot was required.