HTTP/3 (QUIC) for Nextcloud and Apache — Complete Guide

This article is also available as a GitHub Gist for easy bookmarking and reference.

How to enable HTTP/3 on your Nextcloud server: the easy way (Nginx-only), the hybrid way (Nginx + Apache), and the alternatives.

While this guide uses Nextcloud as the reference application, the architecture and configuration apply equally to any web application running behind Nginx or Apache.


Chapter 1 — Concept and Architecture

1.1 Why HTTP/3 Matters

HTTP/3 replaces TCP with the QUIC protocol (UDP-based). The key benefits are not about raw speed on good networks — they are about connection resilience on imperfect ones:

  • Connection Migration — When a mobile device switches from WiFi to cellular, the IP address changes. TCP connections die and must be rebuilt. QUIC identifies connections by a Connection ID, not by IP+port, so the switch is seamless.
  • No Head-of-Line Blocking — In HTTP/2 over TCP, a single lost packet blocks ALL streams on that connection. In HTTP/3, only the affected stream is blocked; all others continue.
  • Faster Connection Setup — QUIC combines the transport and TLS handshake into a single round-trip. With 0-RTT for returning clients, data can be sent immediately.
  • Better Mobile Experience — On lossy cellular networks with 2–5% packet loss, HTTP/3 can reduce page load times by 10–30%.
  • QPACK Header Compression — HTTP/3 uses QPACK, a header compression scheme designed for out-of-order stream delivery. It replaces HTTP/2’s HPACK, which required strict ordering and could not work with QUIC’s independent streams.

On a local network with low latency and no packet loss, you will not notice a difference. The benefits manifest on the last mile — WiFi, cellular, congested networks, and when users switch between them.
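The handshake saving from the list above is easy to quantify. A back-of-the-envelope sketch, assuming a 150 ms cellular round-trip time (TLS 1.3 over TCP spends one round-trip on the TCP handshake and one on the TLS handshake before the first request can be sent; QUIC folds both into a single round-trip):

```shell
rtt_ms=150                      # assumed cellular round-trip time
tcp_tls13_ms=$((2 * rtt_ms))    # 1 RTT TCP handshake + 1 RTT TLS 1.3 handshake
quic_ms=$((1 * rtt_ms))         # QUIC: transport + TLS combined into 1 RTT
quic_0rtt_ms=0                  # returning client with 0-RTT

echo "TCP+TLS1.3: ${tcp_tls13_ms} ms, QUIC: ${quic_ms} ms, QUIC 0-RTT: ${quic_0rtt_ms} ms"
# TCP+TLS1.3: 300 ms, QUIC: 150 ms, QUIC 0-RTT: 0 ms
```

Run the same arithmetic with a 1 ms LAN round-trip and the gap shrinks to a single millisecond, which is exactly why the difference is invisible on a local network.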

1.2 Three Paths to HTTP/3

Not every web server handles HTTP/3 equally. Here is the landscape as of 2026:

| Web Server | HTTP/3 Support | Cost | Apache Config Compatible |
|---|---|---|---|
| Nginx ≥ 1.25.0 | Native (built-in) | Free | No |
| Caddy ≥ 2.x | Native (built-in) | Free | No |
| LiteSpeed Enterprise | Native (since 2019) | Proprietary, paid license | Yes (full .htaccess + directives) |
| OpenLiteSpeed | Native (built-in) | Free, but crippled feature set | Partial (rewrite rules only) |
| Apache httpd | Not supported | Free | n/a (it is Apache) |

This guide covers three scenarios:

  1. Nginx-only (Section 2) — You run Nginx as your primary web server. HTTP/3 is just a few lines of config. This is the simplest path and the recommended approach for new setups.
  2. Hybrid: Nginx + Apache (Section 3) — You run Apache and want to keep it. Nginx is added as a QUIC-only UDP proxy alongside Apache. This is the focus of this guide.
  3. Alternatives (Section 4) — A note on LiteSpeed (proprietary, paid).

1.3 Why It Is Legitimate to Keep Using Apache

Before diving into the hybrid approach, let’s address the elephant in the room: “Why not just switch to Nginx?”

There are valid reasons to stay on Apache:

  • Mature ecosystem — ModSecurity with OWASP CRS, mod_rewrite, mod_proxy, mod_lua, mod_xsendfile, mod_evasive, mod_security2 — decades of battle-tested modules.
  • Per-directory configuration — .htaccess files allow applications to ship their own server rules. Nextcloud, WordPress, and many PHP applications rely on this.
  • Complex existing setups — If your server runs multiple applications with years of accumulated Apache configuration (VirtualHosts, Lua hooks, admin panels, custom modules), migrating to Nginx means rewriting everything from scratch.
  • Organizational knowledge — Your team knows Apache. Retraining and rewriting documentation has a real cost.
  • It works — Apache is not slow. It handles HTTP/1.1 and HTTP/2 perfectly well. The only thing it cannot do is HTTP/3.

The hybrid approach described in this guide lets you add HTTP/3 without touching any of that. Apache stays your primary server; Nginx handles only QUIC.


Chapter 2 — Nginx-Only: HTTP/3 the Simple Way

If you already run Nginx (≥ 1.25.0) as your web server, enabling HTTP/3 is straightforward. You only need four changes to the official Nextcloud Nginx configuration:

2.1 Changes to the Standard Nextcloud Nginx Config

Starting from the official Nextcloud configuration template, make these modifications in the SSL server block:

a) Replace the listen directives

The official config still shows the old syntax:

    listen 443 ssl http2;
    listen [::]:443 ssl http2;

Replace with (for Nginx ≥ 1.25.1):

    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;

    # Add QUIC/HTTP3 listeners
    listen 443 quic reuseport;
    listen [::]:443 quic reuseport;
    http3 on;

b) Add the Alt-Svc header

In the server block, alongside the other add_header directives:

    # Advertise HTTP/3 availability to browsers
    add_header Alt-Svc 'h3=":443"; ma=86400' always;

Important: Nginx does not inherit add_header directives into location blocks that define their own add_header. The official Nextcloud config has a location block for static assets (.css, .js, .svg, etc.) with its own add_header lines. You must also add the Alt-Svc header inside that block, otherwise static assets served from cache will not advertise HTTP/3:

    location ~ \.(?:css|js|mjs|svg|gif|ico|jpg|png|webp|wasm|tflite|map|ogg|flac|mp4|webm)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control                     "public, max-age=15778463$asset_immutable";
        add_header Referrer-Policy                   "no-referrer"       always;
        add_header X-Content-Type-Options            "nosniff"           always;
        add_header X-Frame-Options                   "SAMEORIGIN"        always;
        add_header X-Permitted-Cross-Domain-Policies "none"              always;
        add_header X-Robots-Tag                      "noindex, nofollow" always;
        add_header Alt-Svc 'h3=":443"; ma=86400' always;   # <-- add this
        access_log off;
    }

c) Firewall and Router

Ensure UDP port 443 is open in your firewall (if any) and forwarded in your router — in addition to the existing TCP/443 forwarding.

d) Apply the QUIC performance tuning

See Section 3.9 for the full set of tuning directives (http3_stream_buffer_size, quic_gso, ssl_early_data, kernel UDP buffers). These apply equally to Nginx-only and hybrid setups.

That’s it. Reload Nginx, and your Nextcloud instance serves HTTP/3 natively.

2.2 Note on the Official Nextcloud Nginx Config

The official Nextcloud Nginx configuration is community-maintained and a solid starting point. A few things to be aware of:

  • The listen directive syntax still shows the pre-1.25.1 style (listen 443 ssl http2). With Nginx ≥ 1.25.1, use the separated http2 on; directive as shown above.
  • The config does not include HTTP/3 directives — you add them yourself as described above.
  • Always verify against the latest version of the documentation, as the config evolves with each Nextcloud release.

Chapter 3 — Hybrid: Adding HTTP/3 to Apache via Nginx

3.1 The Problem: Apache and HTTP/3

Apache httpd has no native HTTP/3 support and no published roadmap to add it. This is not a missing module — it is an architectural limitation. Apache’s entire I/O model is built around TCP. HTTP/2 was possible (via mod_http2 by Stefan Eissing) because it still runs over TCP. HTTP/3 requires UDP (QUIC), and retrofitting Apache’s internals for UDP would require a fundamental rewrite.

There is no mod_http3 or mod_quic for Apache httpd. Various blog posts and AI-generated content claim otherwise — the referenced GitHub URLs return 404. See References for a detailed debunking.

3.2 The Hybrid Architecture

Keep Apache as your primary server for all TCP traffic, and add Nginx only for UDP/QUIC traffic. Both listen on port 443 but on different protocols — TCP and UDP do not conflict on the same port.

| Path | Protocol | Server | Port |
|---|---|---|---|
| Direct HTTPS | TCP | Apache | 443 |
| HTTP/3 via QUIC | UDP | Nginx → Apache | 443 → 8080 |
| HTTP redirect | TCP | Apache | 80 |

Key properties:

  • Apache serves TCP/80 and TCP/443 — unchanged
  • Nginx serves UDP/443 only — nothing else
  • Nginx forwards QUIC requests to Apache on 127.0.0.1:8080 (plain HTTP, no double TLS)
  • Nginx handles the full QUIC stack: TLS 1.3, QPACK header compression, stream multiplexing — Apache receives plain HTTP/1.1 on localhost
  • Apache’s entire configuration remains untouched
  • Cookies, sessions, and application state work transparently

3.3 Architecture Diagrams

High-Level Traffic Flow

flowchart TD
    Client([Client / Browser])

    Client -->|"① TCP/443 (HTTP/1.1, HTTP/2)"| Apache["Apache 2.4\nTCP/80 + TCP/443\n(primary server)"]
    Client -->|"② UDP/443 (HTTP/3 / QUIC)"| Nginx["Nginx ≥ 1.25.0\nUDP/443 only\n(QUIC proxy)"]
    Nginx -->|"③ plain HTTP/1.1\n127.0.0.1:8080"| Apache

    Apache --> App["Application\n(Nextcloud, WordPress, etc.)"]

    style Nginx fill:#4a9,stroke:#333,color:#fff
    style Apache fill:#e74,stroke:#333,color:#fff
    style Client fill:#58f,stroke:#333,color:#fff
    style App fill:#f90,stroke:#333,color:#fff

Connection Lifecycle

sequenceDiagram
    participant B as Browser
    participant A as Apache (TCP/443)
    participant N as Nginx (UDP/443)

    B->>A: First visit - HTTPS request via TCP
    A->>B: Response with Alt-Svc header
    Note over B: Browser learns that h3 is available

    B->>N: Next visit - QUIC/HTTP3 request via UDP
    N->>A: Forward via HTTP/1.1 to 127.0.0.1 port 8080
    A->>N: Response via plain HTTP
    N->>B: Response via QUIC/HTTP3

    Note over B: If UDP is blocked
    B->>A: Automatic fallback to TCP/443 via HTTP/2

Port Allocation

flowchart LR
    subgraph "Port 443"
        direction TB
        TCP["TCP/443\n→ Apache"]
        UDP["UDP/443\n→ Nginx"]
    end
    subgraph "Port 80"
        TCP80["TCP/80\n→ Apache\n(redirect to HTTPS)"]
    end
    subgraph "Port 8080 (loopback only)"
        INT["TCP/8080\n→ Apache internal\n(QUIC backend)"]
    end

    UDP -.->|"proxy_pass"| INT

    style TCP fill:#e74,stroke:#333,color:#fff
    style UDP fill:#4a9,stroke:#333,color:#fff
    style TCP80 fill:#e74,stroke:#333,color:#fff
    style INT fill:#fa0,stroke:#333,color:#fff

3.4 Prerequisites

  • A Linux server running Apache 2.4.x with working HTTPS (Let’s Encrypt or other valid TLS certificate). Self-signed certificates will not work with HTTP/3 in browsers.
  • Root or sudo access.
  • Nginx ≥ 1.25.0 — the version that introduced stable QUIC/HTTP/3 support. The http_v3_module must be compiled in.
  • UDP port 443 forwarded in your router/firewall.

3.5 Install Nginx with HTTP/3 Support

Minimum Version Requirement

HTTP/3 (QUIC) support requires Nginx ≥ 1.25.0. The http_v3_module is included by default in mainline packages from nginx.org since that version. Verify after installation:

nginx -V 2>&1 | grep -o 'http_v3_module'
# Expected output: http_v3_module

Important: Distribution-default Nginx packages are often too old. Debian 11 ships 1.18, Debian 12 ships 1.22, Ubuntu 22.04 ships 1.18 — none support HTTP/3. Use the official nginx.org mainline packages.

Debian and Ubuntu (apt)

sudo apt install -y curl gnupg2 ca-certificates lsb-release

curl https://nginx.org/keys/nginx_signing.key \
  | gpg --dearmor \
  | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg > /dev/null

# Automatically detects Debian vs Ubuntu and the correct codename.
# Ubuntu codenames: jammy (22.04), noble (24.04)
# Debian codenames: bullseye (11), bookworm (12), trixie (13)
sudo tee /etc/apt/sources.list.d/nginx.sources << EOF
Types: deb
URIs: http://nginx.org/packages/mainline/$(. /etc/os-release && echo "$ID")
Suites: $(lsb_release -cs)
Components: nginx
Signed-By: /usr/share/keyrings/nginx-archive-keyring.gpg
EOF

sudo apt update && sudo apt install nginx

Other Distributions

Nginx.org provides official mainline packages for RHEL/CentOS, Fedora, SUSE/OpenSUSE, Alpine, and Amazon Linux. Consult the official installation guide for your package manager:

nginx: Linux packages

The critical requirement is always the same: Nginx ≥ 1.25.0 with http_v3_module.

Post-Installation Setup (Hybrid Only)

For the hybrid setup, prevent Nginx from claiming TCP ports that Apache uses:

sudo systemctl stop nginx
sudo systemctl disable nginx
sudo rm -f /etc/nginx/conf.d/default.conf

Set Nginx to run as the same user as Apache so it can read TLS certificates. In /etc/nginx/nginx.conf:

user  www-data;    # Debian/Ubuntu (match your Apache user)
# user  apache;    # RHEL/CentOS

3.6 Configure Apache — Internal Backend

a) Add internal listen port

In your Apache ports configuration (/etc/apache2/ports.conf on Debian/Ubuntu, /etc/httpd/conf/httpd.conf on RHEL/CentOS):

# Internal HTTP port for Nginx QUIC proxy (loopback only)
Listen 127.0.0.1:8080

b) Create the internal VirtualHost

Create /etc/apache2/sites-available/000-default-intern-quic.conf:

# Internal backend for Nginx QUIC/HTTP3 proxy.
# Only accessible from loopback — never exposed to the internet.
<VirtualHost 127.0.0.1:8080>
    ServerName example.com
    DocumentRoot /var/www/nextcloud
    ServerAdmin admin@example.com

    # Replace 127.0.0.1 with the real client IP from Nginx.
    # NOTE: Use RemoteIPInternalProxy (not RemoteIPTrustedProxy)
    #       to fully replace the source IP in logs and $_SERVER.
    RemoteIPHeader X-Forwarded-For
    RemoteIPInternalProxy 127.0.0.1

    # Tell Apache and PHP that the original request was HTTPS
    SetEnvIf X-Forwarded-Proto "https" HTTPS=on
    RequestHeader set X-Forwarded-Proto "https"

    CustomLog ${APACHE_LOG_DIR}/access.log combined

    <IfModule mod_proxy.c>
        ProxyPreserveHost On
        ProxyTimeout 300
    </IfModule>

    <IfModule mod_rewrite.c>
        RewriteEngine On
    </IfModule>

    # -------------------------------------------------------
    # CRITICAL: Include the SAME application configuration
    # as your main SSL VirtualHost. This ensures identical
    # behavior for requests arriving via QUIC.
    #
    # Copy the Include lines from your existing SSL VHost:
    #   Include /etc/apache2/includes/nextcloud.conf
    #   Include /etc/apache2/includes/nextcloud_client-push.conf
    # -------------------------------------------------------
</VirtualHost>

The Includes are critical. This VirtualHost must mirror your main SSL VHost exactly. Only omit SSL-specific directives (SSLEngine, SSLCertificateFile, etc.).

Note on Nextcloud configuration structure: The standard Nextcloud documentation creates a dedicated VirtualHost file that is enabled as a site via a2ensite. The Nextcloud-specific directives (directory context) live inside that VirtualHost, making them hard to reuse. If instead you keep your Nextcloud configuration in separate includable files (e.g. /etc/apache2/includes/nextcloud.conf) and load them via Include from within your VirtualHost, duplicating the config for the internal QUIC backend becomes trivial — just add the same Include lines. This is the approach assumed in this guide.

A further optimization is to set AllowOverride None and include Nextcloud’s .htaccess rules statically in the directory context. This eliminates the per-request .htaccess lookup overhead. The trade-off is that Apache must be restarted after Nextcloud updates — a minor inconvenience. This approach also benefits the hybrid setup, as the same static includes work identically in both VirtualHosts.
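A sketch of what that can look like, assuming the contents of Nextcloud's .htaccess have been copied into a hypothetical /etc/apache2/includes/nextcloud-htaccess.conf (re-copy that file after every Nextcloud update, then restart Apache):

```apache
<Directory /var/www/nextcloud>
    Options +FollowSymlinks
    AllowOverride None
    Require all granted

    # Static copy of Nextcloud's .htaccess rules; refresh after each update
    Include /etc/apache2/includes/nextcloud-htaccess.conf
</Directory>
```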

c) Enable modules and site

On Debian/Ubuntu:

sudo a2enmod remoteip headers
sudo a2ensite 000-default-intern-quic

On RHEL/CentOS, ensure mod_remoteip and mod_headers are loaded.
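On RHEL-family systems these two modules are usually enabled out of the box via /etc/httpd/conf.modules.d/00-base.conf; if your build lacks them, the entries look like this (verify afterwards with httpd -M):

```apache
LoadModule remoteip_module modules/mod_remoteip.so
LoadModule headers_module modules/mod_headers.so
```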

3.7 Add Alt-Svc Header to Apache

In your existing SSL VirtualHost (TCP/443), add the Alt-Svc header. This is the only change to your existing Apache configuration:

<IfModule mod_headers.c>
    Header always set Alt-Svc 'h3=":443"; ma=86400'
</IfModule>
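After reloading Apache you can verify the header with curl -skI https://example.com. Here is a self-contained sketch of the check itself, run against captured sample headers rather than a live request (the here-string stands in for real curl output):

```shell
# Stand-in for: curl -skI https://example.com
headers='HTTP/2 200
alt-svc: h3=":443"; ma=86400
content-type: text/html; charset=utf-8'

# Does the server advertise HTTP/3 on UDP/443?
if printf '%s\n' "$headers" | grep -qi '^alt-svc:.*h3=":443"'; then
    echo "HTTP/3 advertised"
else
    echo "no Alt-Svc h3 header found"
fi
# HTTP/3 advertised
```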

3.8 Configure Nginx — QUIC-Only Proxy

Create /etc/nginx/conf.d/quic-proxy.conf:

server {
    # QUIC/HTTP3 only — Apache handles all TCP traffic directly.
    # No "listen 443 ssl" — that would conflict with Apache.
    listen 443 quic reuseport;
    listen [::]:443 quic reuseport;
    http3 on;

    server_name example.com;

    # Use the same TLS certificate as Apache
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # --- QUIC performance tuning (see Section 3.9) ---
    http3_stream_buffer_size 1m;
    quic_gso on;
    ssl_early_data on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;

    # Advertise HTTP/3 availability
    add_header Alt-Svc 'h3=":443"; ma=86400' always;

    location / {
        proxy_pass http://127.0.0.1:8080;

        # Use HTTP/1.1 for backend connections (Nginx defaults to 1.0)
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # Hide Nginx server header in proxied responses
        proxy_hide_header Server;

        # Pass original client info
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;

        # Prevent temporary file buffering for large responses
        proxy_buffers 32 256k;
        proxy_buffer_size 256k;
        proxy_busy_buffers_size 512k;

        # Large uploads and long timeouts for Nextcloud
        client_max_body_size 16G;
        proxy_request_buffering off;
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }
}

Note on WebSocket: If your application uses WebSocket (e.g. Nextcloud notify_push), you do not need a WebSocket block. WebSocket runs over TCP — it goes directly to Apache on TCP/443 and never reaches Nginx.
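One operational note: Nginx loads the Let's Encrypt certificate into memory at startup, so it needs a reload after every renewal, just as Apache does. With a standard certbot installation, a deploy hook handles this automatically (the file name is arbitrary; make it executable with chmod +x):

```sh
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/reload-nginx.sh
# certbot runs every executable in this directory after a successful renewal
systemctl reload nginx
```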

3.9 QUIC Performance Tuning

These settings apply to both Nginx-only and hybrid setups.

Nginx directives

| Directive | Default | Recommended | Purpose |
|---|---|---|---|
| http3_stream_buffer_size | 64k | 1m | Per-stream QUIC buffer. The default causes stalling on file transfers; 1m is a good balance. 100 connections ≈ 100 MB RAM. |
| quic_gso | off | on | Kernel batches UDP packets into one syscall. Less CPU on large transfers. Linux 4.18+. |
| ssl_early_data | off | on | 0-RTT for returning clients. Saves one round-trip. Safe with CSRF-protected apps. |
| ssl_session_cache | none | shared:SSL:10m | Enables session resumption. 10 MB ≈ 40,000 sessions. |
| ssl_session_timeout | 5m | 1d | How long sessions stay valid. |
| proxy_http_version | 1.0 | 1.1 | Enables keepalive to the backend (hybrid only). |
| proxy_buffers | 8 4k | 32 256k | Prevents buffering to disk for large responses (hybrid only). |
Kernel UDP buffers

The Linux defaults (~208 KB) are too small. Create /etc/sysctl.d/99-quic.conf:

# Recommended by Google's QUIC team. Applies to all UDP sockets.
net.core.rmem_max = 2500000
net.core.wmem_max = 2500000

Then apply and verify:

sudo sysctl --system
sysctl net.core.rmem_max net.core.wmem_max

3.10 Configure Nextcloud

Add to config/config.php:

'trusted_proxies' => ['127.0.0.1'],
'forwarded_for_headers' => ['HTTP_X_FORWARDED_FOR'],

3.11 Firewall and Router

Ensure UDP port 443 is open in your firewall (if any) and forwarded in your router — in addition to the existing TCP/443 forwarding. Without this, QUIC traffic will never reach Nginx.
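What this looks like depends on your firewall; two common sketches (assuming ufw on Debian/Ubuntu or firewalld on RHEL/Fedora):

```sh
# ufw
sudo ufw allow 443/udp

# firewalld
sudo firewall-cmd --permanent --add-port=443/udp
sudo firewall-cmd --reload
```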

3.12 Activate

# Apache
sudo apachectl configtest && sudo systemctl restart apache2

# Nginx
sudo nginx -t && sudo systemctl enable nginx && sudo systemctl start nginx

# Verify
ss -tlnp | grep -E '(:443|:8080)'   # Apache on TCP
ss -ulnp | grep 443                  # Nginx on UDP

3.13 Testing

curl with HTTP/3

Standard system curl usually lacks HTTP/3 support. Options:

# Snap (Ubuntu, Debian with snapd)
sudo snap install curl
snap run curl --http3-only -kv -L -o /dev/null https://example.com

# Docker (any distribution)
docker run --rm --net=host ymuski/curl-http3 \
  curl --http3-only -kv -L -o /dev/null https://example.com

Browser test

  1. Open your site in Chrome, Firefox, Edge, or Opera.
  2. DevTools (F12) → Network tab → Protocol column.
  3. First load: h2 (HTTP/2 via TCP). Reload: h3 (HTTP/3 via QUIC).

Verify both logs

# Nginx: HTTP/3.0, real client IP
tail -f /var/log/nginx/access.log

# Apache: HTTP/1.1, real client IP (not 127.0.0.1)
tail -f /var/log/apache2/access.log

3.14 How Cookies and Sessions Work

Cookies are bound to the domain, not the server process. Whether a request arrives via TCP or UDP→TCP, the browser sends the same cookies. Apache processes the same session store regardless of which port the request arrived on. The application cannot tell which path a request took.

3.15 Why Not a Full Reverse Proxy?

A common approach is to place a reverse proxy — Nginx, Caddy, or a GUI tool like Nginx Proxy Manager — in front of Apache for all traffic, handling TLS termination and HTTP/3 at the edge. This works, and it’s how most Caddy-based and Nextcloud All-in-One setups operate. The backend connection to Apache then runs plain HTTP on localhost or the local network, where the extra hop is negligible.

So why not do that here?

The honest answer: it depends on your situation. If you are starting fresh or already run a reverse proxy in front of Apache, adding HTTP/3 there is the simplest path. Caddy does it automatically out of the box. Nginx requires a few lines of config. Nginx Proxy Manager (NPM) does not support HTTP/3 yet — there is a draft PR, but it has not been merged as of March 2026. This is because NPM is based on OpenResty, a Lua-scriptable derivative of Nginx; OpenResty inherits features from Nginx mainline with a delay, which leaves NPM third in line when it comes to adopting new protocols. HTTP/3 support in NPM is expected in the near future.

The hybrid approach exists for a different scenario: you have a mature, complex Apache setup that you do not want to put behind a proxy. Your ModSecurity rules with OWASP CRS reference the direct client IP, your mod_lua hooks run application logic inside Apache, your admin panel binds to Apache’s process — and wrapping all of that behind a reverse proxy means reworking things that currently work fine.

In that case, the hybrid approach gives you HTTP/3 by adding Nginx next to Apache (UDP only) instead of in front of it (all traffic). Apache keeps full control over TCP. Nothing changes for existing clients. The QUIC path is purely additive.

Neither approach is wrong. The full reverse proxy is simpler to reason about. The hybrid approach is less invasive for existing setups.

The Localhost TCP “Non-Problem”

Head-of-Line blocking is reintroduced between Nginx and Apache on the backend connection. But this is TCP on localhost — zero latency, zero packet loss, gigabits of bandwidth. The HOL blocking problem exists on the last mile, not on loopback. QUIC benefits happen where they matter: between client and server edge.


Chapter 4 — A Note on LiteSpeed

For completeness: LiteSpeed Web Server is the only other server with full Apache configuration compatibility (.htaccess, httpd.conf directives) and native HTTP/3 support. It was the first web server to ship production-grade QUIC (2019).

However, LiteSpeed Enterprise is closed-source, proprietary software with a paid license model starting at $10/month. For any serious use beyond a single domain on a 2 GB server, you pay. That makes it a non-starter for anyone committed to running a free and open-source stack.

OpenLiteSpeed exists as a free edition but is severely limited: no Apache directive support (only rewrite rules), requires restarts for .htaccess changes, and lacks key features of the Enterprise version. It is open source in name, but the useful parts are behind the paywall.

This guide uses Nginx (free, open source, available everywhere) and Apache (free, open source, battle-tested for decades). Both are genuine FOSS. The hybrid approach achieves the same end result — HTTP/3 for your clients — without paying a license fee or trusting proprietary code with your traffic.


Troubleshooting

Browser shows h2 but never h3

  • Verify the Alt-Svc header: curl -kI https://example.com | grep -i alt-svc
  • Verify Nginx listens on UDP/443: ss -ulnp | grep 443
  • Verify firewall and router forward UDP/443.
  • Try an incognito/private window.
  • Self-signed certificates prevent HTTP/3 in browsers.

h3 stopped working after Nginx restart

The browser caches a failed QUIC attempt and avoids it temporarily:

  • Chrome: chrome://net-internals/#alt-svc → Clear alternative services
  • Firefox: restart the browser
  • Or wait a few minutes — browsers retry automatically.

Apache log shows 127.0.0.1 instead of real client IP

  • Use RemoteIPInternalProxy, not RemoteIPTrustedProxy.
  • Verify: apachectl -M | grep remoteip

Apache log shows HTTP/1.0 instead of HTTP/1.1

  • Ensure proxy_http_version 1.1; and proxy_set_header Connection ""; are in Nginx’s location block.
  • Fully stop and start Nginx (not reload): sudo systemctl stop nginx && sudo systemctl start nginx

Nginx warns “upstream response is buffered to a temporary file”

Increase proxy buffers:

proxy_buffers 32 256k;
proxy_buffer_size 256k;
proxy_busy_buffers_size 512k;

ModSecurity blocks requests on port 8080

In my testing with OWASP CRS 3.3.2, no ModSecurity exceptions were needed. Nginx sets the correct Host header via proxy_set_header Host $host;, so host-header validation rules (900021, 900022) do not trigger on QUIC-proxied requests. Test without exceptions first — you likely don’t need any.

If your specific CRS configuration does block requests on port 8080, be cautious about blanket exceptions. Bypassing ModSecurity rules on the internal port weakens the protection that is one of the main reasons for keeping Apache in the first place. If you must whitelist, target only the specific rule that triggers and document why:

SecRule SERVER_PORT "@streq 8080" \
    "id:1000070, phase:1, pass, nolog, \
     ctl:ruleRemoveById=900021, \
     msg:'Internal QUIC proxy port - Host header validated by Nginx'"

References

Nginx

Reverse Proxies (discussed in Section 3.15)

Nextcloud

HTTP/3 and QUIC

LiteSpeed (proprietary alternative, mentioned in Chapter 4)

Apache and HTTP/3


This guide was developed and tested in March 2026 with Apache 2.4.66, Nginx 1.29.6 (mainline), Nextcloud 32 and 33. The hybrid architecture has been verified with ModSecurity CRS, Nextcloud notify_push, Collabora Online, and complex Apache configurations.



Cheers for that great writeup!
I was wondering why you don’t put everything behind NGINX, until I saw your section about that :grinning_face_with_smiling_eyes:

Only thing that is IMHO missing is this from the official docs:

> The ngx_http_v3_module module (1.25.0) provides experimental support for HTTP/3.

Hi @saettel.beifuss0,

That was 3 years ago now:

Meanwhile, latest is 1.29.6

And in the part of the code where the HTTP/3 stuff is located, hardly anything changes anymore. That’s a very conservative classification as “experimental.” It sounds more like an indication that Red Bull isn’t really giving you wings. :wink:


ernolf

It could be 10 years ago; that does not automatically mean that much has changed. :wink:


What are the benefits of HTTP/3 when NGINX is only used as a proxy for Apache? Yes NGINX will accept incoming HTTP/3 connections, but in the end all requests are still handled by Apache via HTTP.

Did you do any performance tests to compare using Apache with HTTP/2 as main server compared to NGINX as proxy with HTTP/3 and Apache behind that?

@ernolf Thank you very much for this fantastic guide! I just quickly copied and pasted it into my Apache setup on a test instance, and what can I say, it seems to work. :+1:

You don’t need to translate the entire Nextcloud .htaccess file to NGINX and keep track of its changes. In other words, it is easier than handling everything in NGINX.

That was explained here: HTTP/3 (QUIC) for Nextcloud and Apache — Complete Guide

Comparative benchmarks would definitely still be interesting, though. :slight_smile:

@awelzel probably assumes that the NGINX reverse proxy is not running on the same host as Nextcloud.

I would guess that, from a performance standpoint, it is still the same improvement between the edge and the client. Whether the backend is localhost or a proxy pass in the LAN probably does not matter much.

It depends on the LAN, I guess. In large environments, there can certainly be latency and bandwidth bottlenecks, but yes, of course, normally, the benefits of QUIC on the LAN are probably just as negligible as they are on a loopback interface.

I’m no expert, though, and I can’t really say whether QUIC offers other advantages that only come into play when an application is served directly via a QUIC-compatible web server, or if the application itself somehow supports QUIC natively. I’m fully trusting @ernolf’s assessment on this.

Either way, using the Apache/NGINX mixed configuration on the same server really only makes sense if you want to keep Apache. And there are good reasons for that: firstly, the one I mentioned, but also because it’s still the web server recommended by Nextcloud (even AIO uses it while adding QUIC via Caddy). And the reason for that, in turn, is likely closely related to the reason I mentioned. :wink:


No, I am not talking about having NGINX and Apache on the same host - this can be solved. I was wondering how HTTP/3 increases performance if the backend is still Apache and does not use HTTP/3, just HTTP/1.1 (or HTTP/2 using h2c). The way Apache handles requests will not change.

Sorry, but I cannot find a specific explanation in the guide of why NGINX as a proxy in front of Apache increases performance and stability compared to using Apache alone. Can you quote the relevant part? Thanks.

Hi @awelzel,

HTTP3 isn’t about performance. That was already achieved with HTTP2. HTTP3 is about connection stability. Please read the article again. All those aspects are mentioned.


Yeah but only by not using Apache, and serving everything via NGINX or another QUIC-Compatible web server.

Yeah, but I don’t think that’s the key point here, or at least that’s how I understand it. HTTP/3 only really has advantages under ~~low~~ high-latency conditions, not necessarily on a LAN, and certainly not on a loopback interface where latency is practically zero and bandwidth is several gigabits, depending on CPU power, of course. Again, that’s my takeaway.

Another issue is how efficiently web servers generally handle requests, and it’s quite possible that Apache is generally slower than NGINX even with HTTP/1 and 2 in certain situations. But again, that usually only really comes into play in large environments with tens of thousands of requests, and not so much on our home or SMB servers.

And I’m stretching it a bit here, and I realize this is a very vague and technically inaccurate way of putting it (since I’m no expert). But if, for example, the Photos app has to go through five PHP loops before it can respond to a request, QUIC probably won’t be able to save the day either. :wink:

Actually, it’s the opposite. HTTP/3 shines precisely under high-latency, lossy, and unstable conditions — the kind you encounter on mobile networks, WiFi with packet loss, or when switching between networks (e.g. driving through a tunnel and your phone jumps from WiFi to cellular).

The key advantages kick in where TCP struggles most:

  • Connection Migration — TCP connections die when your IP changes (network switch). QUIC keeps the connection alive because it’s tied to a Connection ID, not to the IP/port tuple.
  • No Head-of-Line Blocking — on a lossy link with 2–5% packet loss, a single lost TCP packet stalls ALL streams in HTTP/2. In HTTP/3, only the affected stream is blocked; everything else continues.
  • Faster handshake — on a high-latency link (say 150ms RTT on cellular), saving one round-trip on the TLS handshake is noticeable. With 0-RTT for returning clients, data flows immediately.

On a low-latency, zero-loss local network, you won’t notice any difference between HTTP/2 and HTTP/3. That’s precisely because the problems HTTP/3 solves don’t exist there.


Of course, I meant high latency.

Note to self: “Lower is better when it comes to latency.” :wink:

But I’m glad I made that mistake if it led to a much better explanation. :slightly_smiling_face: :+1:


Thanks for the clarification. So the idea is to provide HTTP/3 via UDP to improve stability in certain situations — which still works when the backend is Apache and NGINX is used as the proxy. That makes sense.
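To make that concrete, here is a minimal sketch of what such a front end could look like — assuming nginx 1.25+ built with QUIC support, a hypothetical hostname `cloud.example.com`, and Apache listening on `127.0.0.1:8080` (all of these are placeholder values, not taken from the guide):

```nginx
# Sketch: Nginx terminates HTTP/3 (UDP) plus HTTP/1.1 and HTTP/2 (TCP)
# and proxies everything to an Apache backend on localhost.
server {
    listen 443 quic reuseport;   # HTTP/3 over UDP
    listen 443 ssl;              # HTTP/1.1 and HTTP/2 over TCP
    http2 on;

    server_name cloud.example.com;

    ssl_certificate     /etc/ssl/cloud.example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/cloud.example.com/privkey.pem;

    # Advertise HTTP/3 so browsers can upgrade on a subsequent request
    add_header Alt-Svc 'h3=":443"; ma=86400' always;

    location / {
        proxy_pass http://127.0.0.1:8080;   # plain HTTP to Apache is fine here
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The point being: the QUIC machinery lives entirely in Nginx on the client-facing side, while Apache behind it never has to know HTTP/3 exists.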

Head-of-Line Blocking — the core problem with HTTP/2

Think of a single-lane highway. That’s your TCP connection. HTTP/2 sends multiple requests over that one connection simultaneously — multiplexing. Request A (an image), Request B (a CSS file), Request C (a JavaScript file) — all running in parallel over the same TCP connection.

Now a single TCP packet belonging to Request B gets lost. What happens? TCP guarantees byte order. So TCP waits until the lost packet has been retransmitted before handing any further data to the application. This means Request A and Request C, which arrived completely and have nothing to do with the lost packet, are stuck. They’re waiting for a packet that doesn’t even concern them.

That’s Head-of-Line Blocking. At 0% packet loss (LAN), this never happens — which is why you don’t notice any difference there. At 2–5% packet loss (cellular, dodgy WiFi), it happens constantly, and every time it does, the entire page stalls.

HTTP/3 solves this because QUIC treats each request as an independent stream. If a packet from Stream B is lost, only Stream B’s data waits. Streams A and C are delivered to the application immediately. The streams are isolated from each other.

Connection Migration — why TCP dies on network switches

A TCP connection is identified by four values: source IP, source port, destination IP, destination port. Change any one of them, and the connection is invalid.

You’re on a train, your phone is connected to cellular. Your IP is something like 100.72.88.15.
You enter a tunnel, cellular drops, phone switches to the onboard WiFi. New IP: 10.0.1.42.
As far as TCP is concerned, this is a completely new identity. All existing TCP connections are dead. The browser has to:

  1. Establish a new TCP connection (1 round-trip)
  2. Perform a TLS handshake (1–2 round-trips)
  3. Re-establish the HTTP session

At 150ms latency on cellular, that’s 300–450ms of dead air. And if you were in the middle of uploading a file, the upload starts over from scratch.

QUIC identifies connections by a Connection ID — a random number that has nothing to do with the IP address. Your phone changes IP? Doesn’t matter. It sends the next QUIC packet with the same Connection ID from the new IP. The server recognizes the Connection ID and carries on seamlessly. No new handshake, no data loss, no restarted uploads.

Handshake speed — why latency matters

With TCP + TLS 1.3, establishing a connection takes:

  1. TCP handshake — SYN → SYN-ACK → ACK = 1 round-trip
  2. TLS 1.3 handshake — ClientHello → ServerHello + Finished = 1 round-trip
  3. Only now does actual data flow

That’s 2 round-trips before the first byte of payload.

With QUIC:

  1. QUIC + TLS combined — the crypto handshake is built into the transport. ClientHello + crypto + transport parameters in 1 round-trip.
  2. Data flows immediately after that first round-trip.

That’s 1 instead of 2 round-trips. On a LAN with 0.5ms latency, you save 0.5ms — irrelevant. On cellular with 150ms latency, you save 150ms — noticeable.

Then there’s 0-RTT: if the client has connected before, it has a session ticket stored from the server. On the next connection, it sends the ticket along and starts transmitting data in the very first packet, without waiting for a reply. Zero round-trips until payload flows. This is especially relevant for mobile devices that reconnect constantly — your phone running a Nextcloud sync, the Talk app recovering after driving through a tunnel.
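For completeness, 0-RTT is something you opt into on the server side. A minimal nginx sketch (assuming a QUIC-enabled build; directive placement and the backend header name are illustrative, not from the guide):

```nginx
# Sketch: accept TLS 1.3 / QUIC 0-RTT early data on the front end.
# Caveat: 0-RTT requests are replayable, so only enable this if the
# backend treats early requests carefully (e.g. rejects non-idempotent
# ones when the Early-Data header is set).
server {
    listen 443 quic reuseport;
    ssl_early_data on;                            # accept 0-RTT resumption
    proxy_set_header Early-Data $ssl_early_data;  # let the backend detect it
    # ... certificates, server_name, locations as usual
}
```

The replay caveat is why 0-RTT is off by default: an attacker can capture and resend early-data packets, so state-changing requests should never be honored from 0-RTT data alone.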

QPACK — header compression for unordered streams

HTTP/2 uses HPACK for HTTP header compression. HPACK works with a dynamic table: “I’ve sent this header before, I’ll just reference index 42 in the table.” This saves enormous bandwidth because headers like Cookie, User-Agent, and Accept-Encoding are nearly identical on every request.

The problem: HPACK assumes compressed headers arrive in the correct order. Encoder and decoder must have the same state of the dynamic table. If packet 3 arrives before packet 2, the decoder can’t decompress packet 3 because it’s missing the table entry from packet 2.

With HTTP/2 over TCP, this isn’t an issue — TCP guarantees ordering. But QUIC delivers streams independently and out of order. That’s the whole point (no Head-of-Line Blocking). HPACK would break in this environment.

QPACK solves this with a dedicated, unidirectional stream solely for table updates. The decoder knows which table entries it has received so far and can decompress headers as soon as the required entries are available — regardless of the order other streams arrive in.

Why none of this matters on localhost

Between Nginx and Apache on 127.0.0.1:

  • Latency: 0.01ms → handshake savings irrelevant
  • Packet loss: 0% → Head-of-Line Blocking never occurs
  • Network switches: impossible → Connection Migration unnecessary
  • Bandwidth: gigabits → header compression saves nothing measurable

This is why HTTP/1.1 on localhost is just as fast as HTTP/3 would be. The QUIC advantages only exist on the path between client and server — and that’s exactly where Nginx handles them.


ernolf


@ernolf thank you for this very good and technically brilliant guide - like many others authored by you :hugs:

In my eyes there are too many details, so people without a good technical background especially won’t be able to grasp the point. Personally, I would recommend separating the different variants for easier understanding.

And to be honest - I saw your arguments and they are valid - I would avoid the variant with parallel nginx and Apache. Technically valid, but too complicated in real life - with much more complex operations and troubleshooting - so I would rather switch completely to nginx if in doubt.

And just for the sake of completeness, as a traefik user I obviously miss my beloved tool - use traefik 3.x as reverse proxy in front of your Nextcloud (along with other advantages like built-in letsencrypt support, the crowdsec plugin etc.) - HTTP/3 activation is simple, and no longer experimental since v3.

Simply add a few lines to your existing traefik docker compose.yml:

services:
  traefik:
    image: traefik:v3
    command:
      ...
      - "--entrypoints.<name>.http3"
    ports:
      ...
      - "443:443/udp"

No, please don’t!

I mean, ‘HTTP/3 (QUIC) for Nextcloud and Apache’ is literally the title, and that’s what makes this guide special, in my humble opinion, besides all the excellent information about QUIC that it provides, of course.

This is probably the variant that I’m going to implement. Otherwise, I’d have to switch completely to NGINX or another HTTP/3-compatible web server, which wouldn’t make configuration any easier overall. Plus, it would definitely be harder to maintain, since NGINX and other web servers don’t support .htaccess files.

And honestly… Why does every guide have to be beginner-friendly and as simple as possible? The information in this thread is great for anyone who actually wants to learn something that goes a bit beyond the usual run-of-the-mill guides. At least I’ve learned a lot just by reading through it. If someone doesn’t want that, then the guide simply isn’t for them. So what!?

Not everyone uses Docker for everything. There are still users out there like me who run a classic LAMP stack. :wink:

But yeah, I get it—it’s a fairly niche use case, since many people run a separate reverse proxy in front of all their self-hosted services anyway. In that case, they could just use NGINX there, or something like Caddy (or Traefik :wink:), which also supports HTTP/3—problem solved.

That’s my plan for the medium term as well, but it will be part of a bigger change in my infrastructure that I still need to prepare for. In the meantime, though, this guide is perfect for enabling HTTP/3 for my Nextcloud instance without requiring any other changes.

Oh, and by the way: AIO also uses a hybrid approach. If I’m not mistaken, they use Apache and Caddy instead of Apache and NGINX.


I started implementing NGINX as a proxy for everything in front of Apache. That is much easier than keeping Apache for HTTP/2, since then you only have to take care of NGINX when it comes to updating TLS certificates, while Apache only serves plain HTTP. Using a few additional lines and the remoteip module, you still get the client’s IP address behind the proxy.

Relevant part in Apache virtual host setup:

<VirtualHost *:8080>
        # Accept the PROXY protocol from the Nginx front end
        RemoteIPProxyProtocol On
        # Direct local connections are not required to send it
        RemoteIPProxyProtocolExceptions 127.0.0.1
        # Restore the real client IP from the forwarded header
        RemoteIPHeader X-Forwarded-For
In addition, you also need to take care of websocket forwarding in NGINX. I haven’t completed my setup yet, but I think not mixing Apache and NGINX on the frontend side makes things much easier.
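The websocket part can be sketched roughly like this — assuming the WebSocket endpoint is Nextcloud’s notify_push under `/push/` on port 7867 (both are assumptions; adjust the path and upstream to your setup):

```nginx
# Sketch: forwarding WebSocket upgrade requests through Nginx.
# The map turns the client's Upgrade header into the matching
# Connection header value ("upgrade" or "close").
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    # ... listen, server_name, TLS as in the rest of the setup

    location /push/ {
        proxy_pass http://127.0.0.1:7867;    # assumed notify_push upstream
        proxy_http_version 1.1;              # required for the Upgrade handshake
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
```

Without `proxy_http_version 1.1` and the two upgrade headers, the WebSocket handshake fails silently and clients fall back to polling.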
