tmtvl 5 days ago

> Network services are defined by the presence, in a directory watched by listen, of executable files whose name is of the form <protocol><port>.

That seems a bit silly to me as it means something like rot1376 is tricky to parse correctly. I'd think it would make sense for the protocol to be separated from the port by a non-numeric character, like a hyphen. That way it would be trivial to parse: just grab all the numeric characters from the end until we find the non-numeric character.

  • msk-lywenn a day ago

    Rot13 is not a protocol. If you want to serve rot13 over tcp on port 13, your executable would be named tcp13. I made the same mistake at first too.

  • PeterWhittaker a day ago

    Honestly, that’s trivial to parse, even if you don’t want to use or don’t have regex available: in C, e.g., just treat the name as a string and walk backwards from the last character until you find a non-digit; adjust for indexing, and you now know where the protocol ends and the port begins.
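
    A minimal sketch of that backwards walk, assuming names like tcp8080 (split_name is just an illustrative helper, not anything from listen itself):

       #include <ctype.h>
       #include <stdio.h>
       #include <stdlib.h>
       #include <string.h>

       /* Split a service file name such as "tcp8080" into protocol and port
          by walking backwards from the last character while it is a digit. */
       static int split_name(const char *name, char *proto, size_t proto_sz,
                             unsigned long *port)
       {
           size_t len = strlen(name);
           size_t i = len;
           while (i > 0 && isdigit((unsigned char)name[i - 1]))
               i--;
           if (i == 0 || i == len || i >= proto_sz)
               return -1;          /* no protocol, no port, or name too long */
           memcpy(proto, name, i);
           proto[i] = '\0';
           *port = strtoul(name + i, NULL, 10);
           return 0;
       }

       int main(void)
       {
           char proto[16];
           unsigned long port;
           if (split_name("tcp8080", proto, sizeof proto, &port) == 0)
               printf("%s %lu\n", proto, port);   /* prints: tcp 8080 */
           return 0;
       }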

    • dpassens a day ago

      I believe GP meant that you can't have protocols containing numbers, using rot13 as the example protocol on port 76. Though, as msk-lywenn said, rot13 isn't a real protocol, and real protocols don't really have numbers in their names.

      • KerrAvon a day ago

        rotthirteen76 would work

        • greyfade 19 hours ago

          rotthirteen would "work", but it's not a protocol.

          Actual protocols include tcp, udp, dccp, sctp, etc.

    • mecsred a day ago

      That explicitly wouldn't work in the example you're responding to

  • phlip9 20 hours ago

    Exactly, it should use a separator. Consider a more realistic example, like http280 or h3443. Totally ambiguous.

  • linschn 19 hours ago

    True, and noted, but realistically, the only implemented protocol now is tcp, and the next one will be udp.

    Historically, Plan 9 had IL as well.

    Which transport protocol has a number in its name?

    If I implement one someday I'll add a separator.

aidenn0 5 days ago

Inetd style services are great, but do have an issue with programs that have a lot of startup overhead (e.g. things written in Python). I always thought the FastCGI approach was a neat one, with a defined protocol for what would otherwise start a new process, and the managing daemon can choose if/when to start and stop the process. It certainly makes the listening daemon significantly more complicated, and the actual launched program slightly more complicated, but IMO there are real benefits there.

  • somat a day ago

    That specific aspect of fastcgi (a persistent service worker) works well, but I have yet to find a compelling argument for fastcgi the protocol. Let me explain.

    cgi as a protocol (really a calling convention for launching processes) makes sense; it fits a specific niche in the ecosystem. fastcgi does not: fastcgi is a different, incompatible http, that is, fastcgi does nothing that http does better. Did we all just collectively lose our ability to think critically? We locked on to the fact that the common usage was an http server launching a cgi process, and when the time came to make that process its own service, said "we need cgi for services", creating fastcgi and forgetting that http already works just fine as a service.

    I am not really a back-end programmer, I am a sys-admin who sometimes makes web-based tooling. It is very possible there is a subtlety to this I missed. But I was a lot happier when I gave up on fastcgi and just made each service an http server with a reverse proxy in front to dispatch the requests.

    • aidenn0 3 hours ago

      I agree completely that writing a FastCGI client is almost as hard as writing an HTTP client, so having a different protocol is gratuitous. See mongrel2 and pushpin as things that can connect to long-running services that speak HTTP over zmq. Neither will manage the worker processes the way Apache could with fastcgi though.

  • graemep a day ago

    The FastCGI approach is not a million miles away from how (at least some) serverless services work AFAIK.

    > do have an issue with programs that have a lot of startup overhead (e.g. things written in Python)

    Unless you have a lot of startups, it's probably not a problem, I would have thought.

    • somat a day ago

      The cgi model (also inetd) starts a new process on every single request.

      If you wrote your service in C like god intended (sarcasm), this is not a problem; unix systems are traditionally, by design, very good at starting processes.

      However, python (my favorite language, for what it's worth) has a lot of baggage it needs to sort out when it starts. So python specifically, and in general any interpreted language that takes more than a few milliseconds to start, begins to suffer under heavy load in the one-process-per-request model.

      Thus the motivation to make it one process for many requests.
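
      (For reference, a toy sketch of what a process in that model does: the supervisor accepts the connection and, as with inetd, hands it over on stdin/stdout.)

         /* Toy inetd-style handler: forked and execed once per request,
            with the accepted socket already wired to stdin/stdout. */
         #include <stdio.h>

         int main(void)
         {
             char line[512];
             if (fgets(line, sizeof line, stdin))   /* read one request line */
                 printf("you said: %s", line);      /* reply over the socket */
             return 0;
         }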

      • rakoo 20 hours ago

        > starts to suffer under heavy loads

        The figures from 1990 are not the same as the figures from 2024. "Heavy" here is so high that it is not a realistic problem for 80% of sites.

      • linschn 19 hours ago

        I would be honestly surprised if any listen server ever experiences heavy loads ;) this is more targeted at smolweb-scale hosts.

        On current hardware it can serve up to a few hundred requests/s without too much trouble.

        There's also the trick of pre-starting a pool of processes beforehand and handing the data to them when it comes. It is not implemented in listen yet, but would not be too hard to do.
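
        Roughly, the classic pre-fork pattern looks like this (a generic sketch, not listen's code; the port, pool size, and reply are made up, and error handling is omitted):

           #include <arpa/inet.h>
           #include <netinet/in.h>
           #include <sys/socket.h>
           #include <unistd.h>

           int main(void)
           {
               int lfd = socket(AF_INET, SOCK_STREAM, 0);
               struct sockaddr_in addr = {0};
               addr.sin_family = AF_INET;
               addr.sin_port = htons(8080);        /* hypothetical port      */
               bind(lfd, (struct sockaddr *)&addr, sizeof addr);
               listen(lfd, 16);
               for (int i = 0; i < 4; i++)         /* pool of 4 workers      */
                   if (fork() == 0)
                       for (;;) {                  /* each worker loops here */
                           int c = accept(lfd, NULL, NULL);
                           write(c, "hi\n", 3);    /* handle the request     */
                           close(c);
                       }
               pause();                            /* parent just waits      */
               return 0;
           }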

        • aidenn0 2 hours ago

          > I would be honestly surprised if any listen server ever experience heavy loads ;) this is more targeted at smolweb-scale hosts.

          This was true of cgi scripts written in perl too, until they made the frontpage of slashdot.

zokier a day ago

the lack of access control in traditional bsd sockets has been a pain point, and the concept of a privileged port range is utterly useless, especially as you generally don't want to run your services as root anyways.

that being said, didn't selinux resolve that problem decades ago?

  • linschn 19 hours ago

    Also, indeed, the lack of access control for ports in the bsd socket API when the file API was RIGHT THERE is driving me crazy.

  • linschn 20 hours ago

    Author here :)

    I'm not aware of how selinux can solve this but I will look into it if only just to mention it as an alternative.

    • zokier 19 hours ago

      the typical way to allow something to bind to specific ports in selinux would be something like

         allow foo_t http_port_t : tcp_socket name_bind ; 
      
      the biggest problems are that you need to a) confine your users b) label everything