Nextcloud public upload fails over Cloudflare

Hello all,
I was wondering if anyone ran into this before.
I am currently running the latest Nextcloud AIO on the latest Linux Mint.
Traffic goes through Cloudflare, then to the Linux Mint host, and then to the Docker container via HAProxy.
While logged in, I can upload files of around 600 MB without issue.
If I share a folder as a public upload folder, uploading the same files fails.
If I turn off Cloudflare, it goes through without issue.
I have made sure caching and Zaraz are disabled, to rule those out.
Going directly to the HAProxy IP or to my public IP, it works.
So something is different in how this traffic is handled when logged in versus not logged in.
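
In case it helps, this is roughly how I have been testing the public upload path outside the browser. It is only a sketch, assuming the standard public WebDAV endpoint, with the hostname, share token, and file name replaced by placeholders:

# Rough test of a public-share upload; hostname, token and file are placeholders.
# Assumes the usual Nextcloud public WebDAV endpoint (public.php/webdav) with the
# share token as the username and an empty password.
import requests

HOST = "https://cloud.domain.tld"      # the vhost HAProxy sends to the "cloud" backend
TOKEN = "xxxxxxxxxxxxx"                # token from the public upload link
FILE = "test-600mb.bin"                # about the size that fails through Cloudflare

with open(FILE, "rb") as f:
    r = requests.put(
        f"{HOST}/public.php/webdav/{FILE}",
        data=f,                        # streamed as one single request body, no chunking
        auth=(TOKEN, ""),
    )
print(r.status_code, r.reason)

Through Cloudflare this fails for large files; pointed straight at HAProxy or my public IP it succeeds.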

My HAProxy config is below.
Has anyone else run into this?

global
	log /dev/log	local0
	log /dev/log	local1 notice
	tune.ssl.default-dh-param 2048
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
	stats timeout 30s
	user haproxy
	group haproxy
	daemon
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private
	ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
	ssl-default-bind-options no-sslv3

defaults
	log		global
	mode	http
	option	httplog
	option	dontlognull
	timeout connect 5s
	timeout client	50s
	timeout server	50s

frontend http
	bind :80
	mode http
	option http-keep-alive
	option forwardfor
	timeout client 30s
	redirect scheme https code 301 if !{ ssl_fc }

frontend TLS_passthrough
	bind :443
	mode tcp
	option tcplog
	tcp-request inspect-delay 5s
	tcp-request content accept if { req_ssl_hello_type 1 } or !{ req_ssl_hello_type 1 }
	acl cloudflare src -f /etc/haproxy/CF_ips.lst
	acl http req.ssl_ver gt 0
	use_backend tcp_to_https if { req_ssl_sni -m end .domain.tld } cloudflare
	default_backend openvpn

backend tcp_to_https
	mode tcp
	timeout connect 600s
	timeout server 600s
	server https 127.0.0.1:8443

frontend https
	bind :8443 ssl crt /etc/ssl/domain.tld/domain.tld.pem
	mode http
	option http-keep-alive
	option forwardfor
	timeout client 600s

	acl acl_remote hdr_beg(host) -i remote
	acl acl_plex hdr_beg(host) -i plex
	acl acl_iis hdr_beg(host) -i iis
	acl acl_cloud hdr_beg(host) -i cloud

	use_backend remote if acl_remote
	use_backend plex if acl_plex
	use_backend iis if acl_iis
	use_backend cloud if acl_cloud

backend openvpn
	mode tcp
	timeout connect 30s
	timeout server 30s
	retries 3
	server openvpn 71.230.220.175:42069

backend plex
	mode http
	balance source
	stick-table type ip size 50k expire 30m
	stick on src
	timeout connect 30s
	timeout server 30s
	http-reuse never
	server plex 10.1.11.204:32400

backend remote
	mode http
	balance source
	stick-table type ip size 50k expire 30m
	stick on src
	timeout connect 30s
	timeout server 30s
	http-reuse never
	http-request set-path /guacamole%[path]
	server remote 10.1.11.200:8080


backend iis
	mode http
	balance source
	stick-table type ip size 50k expire 30m
	stick on src
	timeout connect 60s
	timeout server 60s
	http-reuse never
	server iis 10.1.11.189:443 ssl verify none

backend cloud
	mode http
	balance source
	stick-table type ip size 50k expire 30m
	stick on src
	timeout connect 600s
	timeout server 600s
	http-reuse never
	server cloud 127.0.0.1:11000

Any help would be greatly appreciated.

I have also tried downgrading from 28 back to 27, and that did not change anything.
I also made sure Rocket Loader and caching were disabled, which made no difference.
I can upload fine once logged in, but if not logged in it only says it cannot upload.

I think I found the problem.

It is a limitation: chunking doesn't work with public uploads, so the whole file goes up in a single request, which exceeds Cloudflare's per-request upload size limit (100 MB on the free and Pro plans). Logged-in uploads are split into chunks that each stay under that limit, which is why they work.
Just in case anyone else runs into this issue:
you can't use Cloudflare with large public uploads.
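
To make it concrete, here is roughly what the logged-in chunked flow looks like over WebDAV, a sketch based on Nextcloud's chunked-upload API as I understand it; the host, user, password, file, and chunk size are all placeholders:

# Sketch of the chunked upload flow used for logged-in uploads; placeholders throughout.
import os
import uuid
import requests

DAV = "https://cloud.domain.tld/remote.php/dav"
AUTH = ("someuser", "app-password")      # placeholder credentials
FILE = "test-600mb.bin"
CHUNK = 10 * 1024 * 1024                 # 10 MB per chunk, well under Cloudflare's body limit

# create a temporary upload collection
upload_dir = f"{DAV}/uploads/{AUTH[0]}/{uuid.uuid4()}"
requests.request("MKCOL", upload_dir, auth=AUTH)

# each chunk is its own PUT, so every request body stays small
with open(FILE, "rb") as f:
    index = 1
    while chunk := f.read(CHUNK):
        requests.put(f"{upload_dir}/{index:05d}", data=chunk, auth=AUTH)
        index += 1

# ask the server to assemble the chunks into the final file
requests.request(
    "MOVE",
    f"{upload_dir}/.file",
    auth=AUTH,
    headers={"Destination": f"{DAV}/files/{AUTH[0]}/{os.path.basename(FILE)}"},
)

Public link uploads have no equivalent of this, so the whole file goes up in one request and hits the Cloudflare limit.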