Hello people, I recently rented a VPS from OVH and I want to start hosting my own PieFed instance and a couple of other services. I'm running Debian 13 with Docker, and I have Nginx Proxy Manager almost set up. I want to set up subdomains so that social.my.domain goes to my PieFed instance, but how do I tell the machine to send PieFed traffic to that subdomain and Joplin traffic (for example) to another subdomain? Can I do that natively with nginx/Docker, or do I have to install another program? Thanks for the advice.

  • just_another_person@lemmy.world · 14 hours ago

    It’s called a Reverse Proxy. The most popular options are going to be Nginx, Caddy, Traefik, Apache (kinda dated, but easy to manage), or HAProxy if you’re just doing containers.

    • cecilkorik@lemmy.ca · edited 13 hours ago

      FWIW I don’t find Apache dated at all. It’s mature software, yes, but it’s also incredibly powerful and flexible, and regularly updated and improved. It’s probably not the fastest by any benchmark, but it was never intended to be (and for self-hosting, it doesn’t need to be). It’s an “everything and the kitchen sink” web server, and I don’t think that’s always the wrong choice. Personally, I find Apache’s little-known and perhaps misleadingly named Managed Domains (mod_md/MDomain) by far the easiest and clearest way to automatically manage and maintain SSL certificates. It’s really nice and worth looking into if you use Apache and are currently using some other solution for certificate renewal.
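
      A minimal sketch of an MDomain setup, in case it helps. The domain names, email, and proxied port here are placeholders, and mod_md, mod_ssl, and mod_proxy would all need to be enabled:

      # mod_md orders and renews ACME/Let's Encrypt certificates for the listed names
      MDomain my.domain social.my.domain
      MDCertificateAgreement accepted
      ServerAdmin admin@my.domain        # used as the ACME contact address

      <VirtualHost *:443>
          ServerName social.my.domain
          SSLEngine on                   # no SSLCertificateFile lines; mod_md supplies the cert
          ProxyPass        / http://127.0.0.1:8030/    # assumed backend port
          ProxyPassReverse / http://127.0.0.1:8030/
      </VirtualHost>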

      • just_another_person@lemmy.world · 12 hours ago

        I’ll be honest with you here, Nginx kind of ate httpd’s lunch 15 years ago, and with good reason.

        It’s not that httpd is “bad”, or not useful, or anything like that. It’s that it’s not as efficient and fast.

        The Apache project DID try to address this a while back, but it was too late. All the better features of nginx just kinda did httpd in, IMO.

        Apache is fine, it’s easy to learn, and there’s a ton of docs around for it, but it has a massively diminished userbase, meaning less up-to-date information for new users to find in forums and the like.

        • Black616Angel@discuss.tchncs.de · 9 hours ago

          Apache has the better open source tooling IMO.

          I use both, but at work I prefer Apache simply for its relative ease of setting up our SSO solution. There is probably a tool for that in nginx as well, but it’s either proprietary or hard to find (and I did try to find it, but setting up and learning Apache and then SSO was actually easier for me).

    • kumi@feddit.online · edited 10 hours ago

      HAProxy if you’re just doing containers

      What makes you say that? In my experience, HAProxy is a very competent, flexible, performant, and scalable general proxy. It was already established when Docker came on the scene. The more container-oriented picks would be Traefik (or Envoy).
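
      For the record, host-based routing in HAProxy is a handful of lines. A sketch, with made-up hostnames, cert directory, and backend ports:

      frontend https-in
          bind :443 ssl crt /etc/haproxy/certs/    # directory of PEM certs
          acl host_piefed req.hdr(host) -i piefed.example.com
          acl host_joplin req.hdr(host) -i joplin.example.com
          use_backend piefed if host_piefed
          use_backend joplin if host_joplin

      backend piefed
          server s1 127.0.0.1:8030     # assumed local service port

      backend joplin
          server s1 127.0.0.1:22300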

      • just_another_person@lemmy.world · 10 hours ago

        HAProxy is not meant for complex routing or handling of endpoints. It’s a simple service for Load Balancing or proxying alone. All the others have better features otherwise.

          • just_another_person@lemmy.world · 9 hours ago

            For starters: Rails, PHP, and passthrough routing stacks like message handlers and anything that expects socket handling. It’s just not built for that, OR session management for such things if whatever it’s talking to isn’t doing so.

            It seems like you think I’m talking smack about HAProxy, but you don’t understand its real origin or strengths and assume it can do anything.

            It can’t. Neither can any of the other services I mentioned.

            Chill out, kid.

            • kumi@feddit.online · edited 8 hours ago

              One related story: I did have the arguable pleasure of operating a stateful Websockets/HTTP2-heavy, horizontally scaled “microservice” API built with Rails and even more Ruby, as well as gRPC services written in other stuff. Pinning of instances based on auth headers and sessions, weighting based on subpaths, stuff like that. It was originally deployed with Traefik. When it went from “beta” stage to having to handle heavier traffic consistently and reliably on the public internet, Traefik did not cut it anymore, and after a few rounds of evaluation we settled on HAProxy, which we never regretted IIRC. My friend’s company had it in front of one of the country’s busiest online services at the time, a pipeline largely built in PHP. Fronted with HAProxy. I have seen similar patterns play out at other times in other places.

              Outside of $work I’ve had them all running side by side or layered (should consolidate some but ain’t nobody got time for that) over 5+ years so I think I have a decent feel for their differences.

              I’m not saying HAProxy is perfect, always the best pick, has the most features, or is without tradeoffs. It does take a lot more upfront learning and tweaking to get what you need from it. But I can’t square your claims with lived experience, especially when you specifically contrast it with Traefik, which I would say is easy to get started with, has popular first-class support for containers, and is loved by small teams - but breaks at scale and when you hit more advanced use-cases.

              Not that anything either of us has mentioned so far is relevant whatsoever for a budding homelabber asking how to do domain-based HTTP routing.

              I think you are just baiting now.

  • voodooattack@lemmy.world · edited 6 hours ago

    If your goal is ease of use and scaling complexity along with your experience, and you’re planning to use Docker like you mentioned, then I recommend Traefik: https://doc.traefik.io/traefik/

    If not, then I recommend Caddy or nginx.

    Edit: ducking autocorrect changed “of” to “if”

    The irony is delicious

  • frongt@lemmy.zip · 13 hours ago

    how do I tell the machine to send piefed traffic to this subdomain and joplin traffic (for example) to another domain

    You don’t send traffic to domains. You point all the domains to one host, and on that host, set up a reverse proxy like nginx, caddy, or traefik, and then configure HTTP routing rules. That proxy can run in docker. I use traefik and it does all the routing automatically once I add labels to my docker-compose file.
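
    A sketch of what those labels look like with Traefik v3; the image names and port are entirely hypothetical (check the PieFed docs for the real values):

    services:
      traefik:
        image: traefik:v3
        command:
          - --providers.docker=true
          - --providers.docker.exposedbydefault=false   # only route containers that opt in
          - --entrypoints.websecure.address=:443
        ports:
          - "443:443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro

      piefed:
        image: piefed/piefed    # hypothetical image name
        labels:
          - traefik.enable=true
          - traefik.http.routers.piefed.rule=Host(`social.my.domain`)
          - traefik.http.routers.piefed.entrypoints=websecure
          - traefik.http.services.piefed.loadbalancer.server.port=8030   # assumed container port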

  • kumi@feddit.online · edited 7 hours ago

    The right nginx config will do this. Since you already have Nginx Proxy Manager, you shouldn’t need to introduce another proxy in the middle just for this.
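
    Under the hood, NPM just generates plain nginx server blocks keyed on server_name. A hand-written equivalent looks roughly like this (the backend port is an assumption and the cert paths depend on your setup):

    server {
        listen 443 ssl;
        server_name social.my.domain;    # requests for this hostname land here
        ssl_certificate     /etc/letsencrypt/live/social.my.domain/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/social.my.domain/privkey.pem;

        location / {
            proxy_pass http://127.0.0.1:8030;    # assumed PieFed backend
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

    A second server block with a different server_name and proxy_pass handles the next subdomain.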

    Most beginners find Caddy a lot easier to learn and configure compared to Nginx, BTW.

    Another thing I rarely see mentioned: since the SNI field (the domain name) is sent unencrypted in the TLS handshake (unless ECH is used, which is still not common), you can proxy and route HTTPS requests based on domain without terminating TLS or involving HTTP at all. sniproxy is a proxy for exactly that, and it’s available in the Debian repos. If all you really need is passing requests through to downstream proxies or to a service that terminates TLS itself, it works nicely.

    https://github.com/ameshkov/sniproxy

  • nutbutter@discuss.tchncs.de · 14 hours ago

    In your DNS settings at your domain provider, add A and AAAA records for all the subdomains you want to use. Then, when someone hits port 443 using one of those domains, your Nginx Proxy Manager will decide which service to show to the client based on the requested domain.
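
    In zone-file terms, every subdomain just points at the same VPS (203.0.113.10 and 2001:db8::10 are documentation placeholders; substitute your real addresses):

    piefed.yourdomain.com.   A      203.0.113.10
    piefed.yourdomain.com.   AAAA   2001:db8::10
    joplin.yourdomain.com.   A      203.0.113.10
    joplin.yourdomain.com.   AAAA   2001:db8::10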

    how do I tell the machine to send piefed traffic to this subdomain

    Configure your Nginx Proxy Manager. It should be using port 80 for HTTP, port 443 for HTTPS, and another port for its WebUI (81 is the default, iirc).

    So, if I type piefed.yourdomain.com in my address bar, DNS tells my browser your IP, my browser hits your VPS on port 443, and Nginx Proxy Manager automatically sees that I’m requesting piefed and serves me piefed.

    For the SSL certificates, you can either generate a separate certificate for every subdomain, or use a wildcard certificate (*.yourdomain.com) that works for all subdomains. Note that Let’s Encrypt only issues wildcard certificates via the DNS-01 challenge, so automated renewal needs a DNS provider with a supported API.

  • Decronym@lemmy.decronym.xyz [bot] · edited 5 hours ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    DNS            Domain Name Service/System
    HTTP           Hypertext Transfer Protocol, the Web
    HTTPS          HTTP over SSL
    IP             Internet Protocol
    NAT            Network Address Translation
    SSL            Secure Sockets Layer, for transparent encryption
    SSO            Single Sign-On
    TLS            Transport Layer Security, supersedes SSL
    VPS            Virtual Private Server (opposed to shared hosting)
    nginx          Popular HTTP server

    10 acronyms in this thread; the most compressed thread commented on today has 15 acronyms.

    [Thread #1001 for this comm, first seen 14th Jan 2026, 02:55] [FAQ] [Full list] [Contact] [Source code]

  • deadcade@lemmy.deadca.de · 14 hours ago

    The job of a reverse proxy like nginx is exactly this: take traffic coming in from one source (usually port 443, HTTPS) and forward it somewhere else based on things like the (sub)domain. An HTTPS reverse proxy often also forwards the traffic as plain HTTP on the local machine, so the software running the service doesn’t have to worry about SSL.
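
    Caddy, mentioned elsewhere in the thread, makes the whole idea visible in a few lines of Caddyfile and fetches certificates automatically (the backend ports below are assumptions):

    social.my.domain {
        reverse_proxy 127.0.0.1:8030
    }

    notes.my.domain {
        reverse_proxy 127.0.0.1:22300
    }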

    Be sure to get yourself a firewall on that machine. VPSes are usually directly connected to the internet without NAT in between. If you don’t have a firewall, all internal services will be reachable from the internet: stuff like databases or the internal ports of the services you host.

    • kossa@feddit.org · 14 hours ago

      all internal services will be accessible

      What? Only when they are configured to listen on outside interfaces. Which, granted, they often are in the default configuration, but when OP uses Docker on that host, chances are kinda slim that they run some rando unconfigured database directly, and it would still be password- or authentication-protected in its default config.

      I mean, it’s never wrong to slap a firewall onto something, I guess. But OTOH, those “all services will be exposed and evil haxxors will take you over” warnings are also a disservice.

      • deadcade@lemmy.deadca.de · 13 hours ago

        I’ve seen many default docker-compose configurations provided by server software that expose the ports of stuff like databases by default (which exposes them on all host interfaces). Even outside Docker, a lot of software has a default configuration of “listen on all interfaces”.

        I’m also not saying “evil haxxors will take you over”. It’s not the end of the world to have a service requiring authentication exposed to the internet, but it’s much better to only expose what should be public.

        • kossa@feddit.org · 13 hours ago

          Yep, fair. Those docker-compose files that just forward ports to the host on all interfaces should burn. At least they should bind them to 127.0.0.1, I agree.
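
          Something like this (postgres is just an example service):

          services:
            db:
              image: postgres:17
              ports:
                - "127.0.0.1:5432:5432"   # reachable from this host only, not the internet
              # even better: drop "ports" entirely and let other containers
              # reach the database over the shared compose network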

          • kumi@feddit.online · edited 7 hours ago

            I’m guilty of a few of these and sorry not sorry but this is not changing.

            Often these are written with local dev and testing in mind, and in any case the expectation is that self-hosters will look through them and probably customize them, and be responsible for their own firewalls and proxies, before deploying them to a public-facing server. Larger deployments sometimes have internal load balancers on separate machines, so even a file reflecting a production deployment might normally expose on 0.0.0.0 or run with network=host.

            Never just run compose files for user services on a machine directly exposed to the internet.

    • a_person@piefed.social (OP) · 14 hours ago

      What service would you recommend for a firewall? The firewall I use on my laptop is ufw; should I use that on the VPS, or is there a different service that works better?

      • K3CAN@lemmy.radio · 5 hours ago

        ufw is just a fancy frontend for iptables, but it hasn’t been updated for nftables yet.

        Firewalld is an option that supports both, and if you happen to be running cockpit as well, the cockpit-firewall plugin provides a simple GUI for the whole thing.

      • kumi@feddit.online · edited 10 hours ago

        Firewalld

        sudo apt-get install firewalld
        sudo systemctl enable --now firewalld    # SSH (port 22) stays allowed; most everything else is blocked by default
        sudo firewall-cmd --get-active-zones     # which zones are bound to which interfaces
        sudo firewall-cmd --info-zone=public     # what the public zone currently allows
        sudo firewall-cmd --zone=public --add-port=1234/tcp    # open a port at runtime
        sudo firewall-cmd --runtime-to-permanent               # persist the runtime changes
        

        There are some decent guides online. Also take a look in /etc/firewalld/firewalld.conf and see if you want to change anything. Pay attention to the part about Docker.

        You need to know about zones, ports, and interfaces for the basics. Services are optional. Policies are more advanced.

      • deadcade@lemmy.deadca.de · 14 hours ago

        UFW works well and is easy to configure. It’s a great option if you don’t need the flexibility (and insane complexity) that manually managing iptables rules offers.
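
        A typical minimal setup looks like this; make sure SSH is allowed before enabling, or you’ll lock yourself out of the VPS:

        sudo ufw default deny incoming
        sudo ufw default allow outgoing
        sudo ufw allow 22/tcp        # SSH
        sudo ufw allow 80,443/tcp    # reverse proxy
        sudo ufw enable
        sudo ufw status verbose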

        • kumi@feddit.online · edited 10 hours ago

          The main problem with UFW, besides being based on legacy iptables (instead of the modern nftables, which is easier to learn and manage), is the config format. Keeping track of your changes over time is hard, and even with tools like Ansible it easily becomes a mess where things fall out of sync with what you expect.

          Unless you have to support some legacy system (or have a weird fetish for it), nobody needs to learn iptables today. On modern Linux systems, the iptables command isn’t backed by its own kernel module anymore; it’s a CLI shim that actually talks to the nftables backend.

          Misconfigured UFW resulting in getting pwned is very common. For example, with default settings Docker publishes container ports by writing its own iptables rules, which are evaluated before UFW’s, so published ports bypass UFW completely for incoming traffic.

          I strongly recommend firewalld, or rawdogging nftables, instead of ufw.

          There used to be limitations with firewalld, but policies maturing and replacing the deprecated “direct” rules, together with other general improvements, have made it a good default choice by now.
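
          For reference, a minimal /etc/nftables.conf for a typical VPS looks something like this (the open ports are examples; apply it with nft -f /etc/nftables.conf and enable the nftables service):

          flush ruleset
          table inet filter {
              chain input {
                  type filter hook input priority 0; policy drop;
                  ct state established,related accept    # replies to outbound traffic
                  iif "lo" accept                        # loopback
                  meta l4proto { icmp, ipv6-icmp } accept
                  tcp dport { 22, 80, 443 } accept       # SSH + reverse proxy
              }
          }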

  • Foofighter@discuss.tchncs.de · 10 hours ago

    I’m not using Docker myself, but NPM and other services in Proxmox containers and VMs. The concept is the same though.

    NPM lets you define a proxy host, which needs to be the subdomain name; that’s how NPM knows how to handle and serve requests to that domain. In your case this would be the full social.my.domain. Additionally, you need to set the local IP/port of the service you’re hosting. You can also use a local hostname, which makes it easier to move services to other IPs, though that probably doesn’t happen often.

    Finally, HTTPS (SSL/TLS) should be configured. This can be tricky if you don’t have specific instructions, but it should not be neglected!