I ran OpenSpeedTest from my PC to my Raspberry Pi 4B, both connected via LAN to my WiFi router. The left screenshot shows the speedtest via local_ip:3000, and I’m getting the expected 1 Gbps up/down.
The right screenshot shows the speedtest via https://speed.mydomain.com. I’m confident that the connection from my PC to my home server is routed internally and not through the internet because my lowest ping to the nearest Speedtest server (my own ISP) on speedtest.net is 6ms, and my internet speed is 100 Mbps up/down. So the traffic must be routing internally.
Is there typically such a massive difference between using http://local_ip:3000 and https://speed.mydomain.com?
Additional context: The speedtest server is running via Docker Compose. I’m using Nginx (native, not Docker) to access these services from outside my network.
The port is forwarded from your router to the Pi, right? If so, you could test for the router as the bottleneck by using the router's WAN-side IP address as the target.
This should give you a good data point for comparison. If it's also slow, then you can focus on router performance. Some routers are slow when doing hairpin NAT.
The difference might be HTTP vs HTTPS. On a Pi the extra CPU load to properly encrypt the HTTPS stream is probably significant.
So you have local DNS set up?
If you ping (or dig) speed.mydomain.com, does it resolve to the same address as local_ip?
Considering you are accessing local_ip:3000 directly but the domain on port 443, there is clearly a firewall somewhere redirecting packets, or a reverse proxy in front of the domain but not in front of local_ip:3000. Follow the port chain (forwarding, proxying, etc.). One of those will be the bottleneck; then figure out why.
Edit:
Just because your ISP speed is 100 Mbps and you are seeing 500 Mbps doesn't mean the connection isn't hairpinning through your router via its public IP (as in, the traffic never leaves your network, but still goes through the router). A traceroute might hint at that, if this is what is going on.
Looking at OpenSpeedTest's GitHub page, this immediately sticks out to me:
Warning! If you run it behind a Reverse Proxy, you should increase the post-body content length to 35 megabytes.
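In nginx terms, that maps to the client_max_body_size directive. A minimal sketch of where it would go, with the server name and the container port (3000) assumed from the post rather than copied from their example:

```nginx
# Minimal sketch only; server_name and the upstream port (3000) are
# assumptions based on the post, not a verified config.
server {
    listen 80;
    server_name speed.mydomain.com;

    # The "post-body content length" the OpenSpeedTest readme warns about: 35 MB.
    client_max_body_size 35m;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```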
/edit
Decided to spin up this container and play with it a bit myself.
I just used my standard nginx proxy config, which enables websockets and HTTPS, but I didn't explicitly set client_max_body_size like their example does. I don't really notice a difference in speed switching between the proxy and a direct connection.
So that may be a bit of a red herring.
Also, proxy_buffering might be worth checking; a rough sketch of what I mean is below.
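These are the directives I'd experiment with, shown as a sketch (the upstream address is an assumption, and whether disabling buffering actually changes the numbers here is untested):

```nginx
# Proxy directives worth toggling for a speed test behind nginx.
# The upstream address is an assumption; measure before and after.
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;

    # WebSocket support, as in the standard proxy config mentioned above.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Buffering settings that can affect large download/upload transfers.
    proxy_buffering off;
    proxy_request_buffering off;
}
```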
NAT can be expensive. It's relaying through your gateway.
Is there typically such a massive difference between using http://local_ip:3000 and https://speed.mydomain.com?
Only if they resolve to different addresses.