Limit download speed

I’m not able to limit the download speed on shared links with nginx.

I added limit_rate 900k; in my server block, but I’m still able to download at 60 MB/s. I restarted nginx, of course.
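This is roughly the shape of what I have (the server name is a placeholder and everything else is trimmed):

```nginx
server {
    listen 443 ssl;
    server_name cloud.example.com;  # placeholder

    # limit_rate caps the response body transfer rate, per connection
    limit_rate 900k;

    # ... rest of the Nextcloud configuration ...
}
```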

Any solution?


Does the limitation work outside Nextcloud, e.g. in a plain folder?

Yes, it works outside Nextcloud. I created a test folder and added this config to my server block (below the Nextcloud config), and I was limited to 50 KB/s.

location /test/ {
    limit_rate 50k;
}

Not sure why it’s not working for Nextcloud downloads. Maybe I’m adding it in the wrong location?

Can you post the config?

I added the rule at the end. (Download links contain /download/ in the URL)

upstream php-handler {
    server unix:/run/php/php8.1-fpm.sock;
    #server unix:/var/run/php/php7.4-fpm.sock;
}

# Set the `immutable` cache control options only for assets with a cache busting `v` argument
map $arg_v $asset_immutable {
    "" "";
    default "immutable";
}

server {
    listen 80;
    listen [::]:80;

    # Prevent nginx HTTP Server Detection
    server_tokens off;

    # Enforce HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443      ssl;
    listen [::]:443 ssl;

    # Path to the root of your installation
    root /home/web/public;
    error_log /home/log/error.log error;

    # Use Mozilla's guidelines for SSL/TLS settings
    ssl_certificate /etc/letsencrypt/live/; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/; # managed by Certbot

    # Prevent nginx HTTP Server Detection
    server_tokens off;

    # HSTS settings
    # WARNING: Only add the preload option once you have read about
    # the consequences. This option will add the domain to a hardcoded
    # list that is shipped in all major browsers, and getting removed
    # from this list could take several months.
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains;" always;

    # set max upload size and increase upload timeout:
    client_max_body_size 3072M;
    client_body_timeout 3600s;
    fastcgi_buffers 64 4K;

    # Enable gzip but do not remove ETag headers
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml text/javascript application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/ application/wasm application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    # Pagespeed is not supported by Nextcloud, so if your server is built
    # with the `ngx_pagespeed` module, uncomment this line to disable it.
    #pagespeed off;

    # These settings allow you to optimize the HTTP/2 bandwidth;
    # see the linked article for tuning hints.
    client_body_buffer_size 512k;
    http2_body_preread_size 1048576;

    # HTTP response headers borrowed from Nextcloud `.htaccess`
    add_header Referrer-Policy                   "no-referrer"       always;
    add_header X-Content-Type-Options            "nosniff"           always;
    add_header X-Download-Options                "noopen"            always;
    add_header X-Frame-Options                   "SAMEORIGIN"        always;
    add_header X-Permitted-Cross-Domain-Policies "none"              always;
    add_header X-Robots-Tag                      "noindex, nofollow" always;
    add_header X-XSS-Protection                  "1; mode=block"     always;

    # Remove X-Powered-By, which is an information leak
    fastcgi_hide_header X-Powered-By;

    # Add .mjs as a file extension for javascript
    # Either include it in the default mime.types list
    # or you can include that list explicitly and add the file extension
    # only for Nextcloud like below:
    include mime.types;
    types {
        text/javascript js mjs;
    }

    # Specify how to handle directories -- specifying `/index.php$request_uri`
    # here as the fallback means that Nginx always exhibits the desired behaviour
    # when a client requests a path that corresponds to a directory that exists
    # on the server. In particular, if that directory contains an index.php file,
    # that file is correctly served; if it doesn't, then the request is passed to
    # the front-end controller. This consistent behaviour means that we don't need
    # to specify custom rules for certain paths (e.g. images and other assets,
    # `/updater`, `/ocm-provider`, `/ocs-provider`), and thus
    # `try_files $uri $uri/ /index.php$request_uri`
    # always provides the desired behaviour.
    index index.php index.html /index.php$request_uri;

    # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
    location = / {
        if ( $http_user_agent ~ ^DavClnt ) {
            return 302 /remote.php/webdav/$is_args$args;
        }
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Make a regex exception for `/.well-known` so that clients can still
    # access it despite the existence of the regex rule
    # `location ~ /(\.|autotest|...)` which would otherwise handle requests
    # for `/.well-known`.
    location ^~ /.well-known {
        # The rules in this block are an adaptation of the rules
        # in `.htaccess` that concern `/.well-known`.

        location = /.well-known/carddav { return 301 /remote.php/dav/; }
        location = /.well-known/caldav  { return 301 /remote.php/dav/; }

        location /.well-known/acme-challenge    { try_files $uri $uri/ =404; }
        location /.well-known/pki-validation    { try_files $uri $uri/ =404; }

        # Let Nextcloud's API for `/.well-known` URIs handle all other
        # requests by passing them to the front-end controller.
        return 301 /index.php$request_uri;
    }

    # Rules borrowed from `.htaccess` to hide certain paths from clients
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/)  { return 404; }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console)                { return 404; }

    # Ensure this block, which passes PHP files to the PHP process, is above the blocks
    # which handle static assets (as seen below). If this block is not declared first,
    # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
    # to the URI, resulting in a HTTP 500 error response.
    location ~ \.php(?:$|/) {
        # Required for legacy support
        rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri;

        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        set $path_info $fastcgi_path_info;

        try_files $fastcgi_script_name =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;

        fastcgi_param modHeadersAvailable true;         # Avoid sending the security headers twice
        fastcgi_param front_controller_active true;     # Enable pretty urls
        fastcgi_pass php-handler;

        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;

        fastcgi_max_temp_file_size 0;
    }

    # Serve static files
    location ~ \.(?:css|js|mjs|svg|gif|png|jpg|ico|wasm|tflite|map)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463, $asset_immutable";
        access_log off;     # Optional: Don't log access to assets

        location ~ \.wasm$ {
            default_type application/wasm;
        }
    }

    location ~ \.woff2?$ {
        try_files $uri /index.php$request_uri;
        expires 7d;         # Cache-Control policy borrowed from `.htaccess`
        access_log off;     # Optional: Don't log access to assets
    }

    # Rule borrowed from `.htaccess`
    location /remote {
        return 301 /remote.php$request_uri;
    }

    location / {
        try_files $uri $uri/ /index.php$request_uri;
        if ($bad_referer) {
            return 444; # empty response
        }
    }

    location /download/ {
        limit_rate 50k;
    }
}


For me the download links start with:


Or download a file and check the logfile for the exact URL (if you have something with /download, there is perhaps a redirect, and the redirect target is not limited).

The download links don’t start with /remote.php/ on publicly shared links, only in the Files app.

I tried with /remote.php/ but NC completely ignores my nginx rule.

Sorry, I missed the part about the shared link. This is what it looks like in my setup, for example:

GET /s/3qHR4LTJiSdEAG4/download/ HTTP/2

Is the bandwidth limit working for you? For me, NC completely ignores it no matter where I add it.

I successfully restricted Nextcloud’s bandwidth in my Docker setup on Windows. The Nextcloud container is exposed on port 8081, and I used the NetLimiter software to limit the bandwidth of that port; nginx proxies the local port 8081, and it is still accessed externally through port 443. If your host is Linux, perhaps you can try using tc to restrict the ports that Nextcloud exposes on the internal network.

Thank you, but I wish I could restrict it only using Nginx.

A Nextcloud download link from a public share looks like this:

…so I’d say your config is probably not working because there are other things in the URL in front of /download/.

I’m not very good with regex and I don’t use NGINX, so I’m not 100% sure it will work, but the following should match everything under /s/:

location ^~ /s/ {
    limit_rate 50k;
}

Or, if you want to be more specific, you could try the following regex location, which should match /s/<every character that isn’t a slash>/download/:

location ~ ^/s/[^/]+/download/ {
    limit_rate 50k;
}

Thank you, but it doesn’t seem to be working. I believe we may be putting the rate limiting in the wrong location.

To test the location directive/regex, you could replace limit_rate 50k with deny all. If access gets denied after that, you know the location directive is matching; if access is still granted, it isn’t.
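For example, using the same hypothetical /s/ prefix:

```nginx
location ^~ /s/ {
    deny all;  # temporary test: if downloads still succeed, this location never matches
}
```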

As for the limit_rate directive itself, I don’t know enough about NGINX in general, or about this directive in particular, to be able to help you with that. Maybe it has to be enabled somewhere in the NGINX config?

deny all doesn’t work either, which confirms that I’m placing the configuration in the wrong location. I think it has to do with how Nextcloud processes requests; that would explain why the rate limit works outside of Nextcloud.
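One possible explanation, for what it’s worth: in the standard Nextcloud config, almost every request is internally redirected by try_files to /index.php$request_uri, so the response ends up being served from the location ~ \.php(?:$|/) block rather than from the location that matched the original URI. If that is what’s happening, the limit would have to be applied inside the PHP block. An untested sketch using a map on $request_uri (note that limit_rate accepts variables only since nginx 1.17.0):

```nginx
# In the http {} context: pick a rate based on the original request URI.
# /s/<token>/download is the public-share download path; 0 means unlimited.
map $request_uri $share_download_rate {
    default              0;
    ~^/s/[^/]+/download  50k;
}

# Then, inside the existing `location ~ \.php(?:$|/)` block:
#     limit_rate $share_download_rate;
```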

I’m not an expert either. Hopefully, someone who has successfully implemented rate limiting can help.