
I have a Windows service that I would like to extend by adding a CLI to it, similar to docker or Elastic Agent. My question is about what kind of approach I should use for the *.exe to communicate with the same *.exe running as a service.

  • Would you make the service an ASP.NET Core server and communicate with it over a REST-API or something else?
  • How would you distinguish between the *.exe being run as a service or a CLI? Would you use some CLI parameter for it or something else?
  • Are there other things I need to consider that I might not have in my mind yet?

3 Answers

7

This is a classic separation of concerns. You have one use case for a process that runs as a headless Windows service performing tasks. You have a second use case for a CLI tool parsing command-line flags and issuing commands. There's no reason for those two to share an executable. They're two distinct tools that happen to be part of your overall solution. For instance, if you find a bug in your CLI, there shouldn't be a need to stop and restart the service to roll out a new version of the EXE. Let those two components live separately.

The article that Phill W. links to shows different approaches for inter-process communication, but I think the important part isn't the transport (especially since you don't need high throughput, I'm guessing, for a control interface), but rather to think of the communications between those two components as a formal API, a control-plane API that your service exposes, and then have the CLI be just a client for that API.

This has the advantage of forcing you to think through the requirements of your control API and keeps it decoupled from the transport, so a web-based control UI in the future would simply be another client, just like the CLI client.

So my suggestion is:

  1. Design your control plane API.
  2. Serve your API using your favorite API hosting framework, e.g. ASP.NET Core, which is a great platform that lets you host using various transports (HTTP/S, Named Pipes, raw TCP, gRPC, etc) and gives you good tooling for auth and access control.
  3. Write your CLI as a standalone client calling that API.
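The shape of that split can be sketched in a few lines. This is a hedged illustration only, in Python rather than ASP.NET Core for brevity, and the `/status` endpoint and its payload are hypothetical names, not anything from the question's codebase:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# The "service" side: a minimal HTTP control-plane endpoint.
class ControlPlaneHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":  # hypothetical control-plane route
            body = json.dumps({"state": "running", "jobs": 3}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to port 0 so the OS picks a free port.
server = HTTPServer(("127.0.0.1", 0), ControlPlaneHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "CLI" side is just a thin client of that API.
def cli_status(port):
    with urlopen(f"http://127.0.0.1:{port}/status") as resp:
        return json.loads(resp.read())

status = cli_status(port)
server.shutdown()
```

The point is the boundary, not the framework: the service owns the control-plane API, and the CLI holds no logic beyond calling it and formatting output.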
5

What you're describing sounds an awful lot like most DBMSes.

For example, the MySQL Service process, mysqld, runs independently and does all the heavy lifting. Its Command Line Interface, mysql, provides Users with a way to feed commands (SQL) to that Service and to display the results.

Ditto SQL Server and Management Studio, Oracle Database and SqlDeveloper, and so on.

If the two processes run on the same machine, Our Friends in Redmond have published this handy summary of your options. I'd suggest taking a look at [Named] Pipes but Sockets is probably the Go-To solution, these days.
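As a rough sketch of the sockets option (Python here purely for illustration; the protocol and command name are made up), the Service listens on loopback and the CLI connects, sends a command, and reads the reply:

```python
import socket
import threading

# "Service" side: listen on loopback only; port 0 lets the OS choose.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def handle_one_client():
    conn, _ = server.accept()
    with conn:
        cmd = conn.recv(1024).decode()
        conn.sendall(f"ok: {cmd}".encode())

threading.Thread(target=handle_one_client, daemon=True).start()

# "CLI" side: connect, send a command, read the reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"status")
reply = client.recv(1024).decode()
client.close()
server.close()
```

Binding to 127.0.0.1 rather than 0.0.0.0 keeps the control channel local-only, which sidesteps most of the remote-access concerns below.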

Remember that if you do allow remote access to your Service (i.e. from different servers), you're going to have to beef up security around it, deal with multiple, simultaneous connections and, potentially, things like Denial of Service issues. These are things that a Web Server would come with, built in, but you might have to consider "reinventing" these within your Service.
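One small piece of that "beefed up security," shown as a hedged Python sketch: authenticate control connections with a shared secret, and compare it in constant time so an attacker can't probe the token byte by byte. The token value and loading strategy here are placeholders:

```python
import hmac

# In practice the secret would come from protected configuration,
# not a source-code literal.
EXPECTED_TOKEN = "s3cret-token"  # placeholder value

def is_authorized(presented: str) -> bool:
    # hmac.compare_digest runs in constant time, unlike ==,
    # which can leak how many leading bytes matched.
    return hmac.compare_digest(presented.encode(), EXPECTED_TOKEN.encode())

ok = is_authorized("s3cret-token")
bad = is_authorized("wrong-token")
```

A token check is only one layer, of course; TLS, rate limiting, and connection limits are the sort of things a web server would otherwise give you for free.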

0

Yeah, this seems like an IPC question. The crux of your question is "what is the best way to communicate from the CLI user interface to the actual program?"

Now before I delve into the possibilities I am obligated to ask if it is possible to simply attach a console window directly to said "Windows service". This seems like the simplest and most performant way to go about things. Obviously, I don't know much about your particular use case aside from the fact that you simply want a separate CLI process.

Would you make the service an ASP.NET Core server and communicate with it over a REST-API or something else?

That is certainly a possibility for IPC, and honestly I rather prefer this approach. Docker utilises a very similar approach with its docker API. The benefits of this are obviously extensibility and flexibility. Namely, users of your app can now create their own "CLIs," so to speak, or control your service programmatically. The downside is the overhead of both spinning up an HTTP server and the actual HTTP call overhead. Spinning off of this idea, you can simplify things and use a TCP server running on localhost, which shaves off some communication overhead.
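One thing to note if you drop down to raw TCP: you lose the message boundaries HTTP gives you for free, so you need your own framing. A common choice is a length prefix followed by a JSON payload. This is a hedged Python sketch with made-up message fields:

```python
import json
import struct

# Frame format: 4-byte big-endian payload length, then JSON bytes.
def encode_frame(msg: dict) -> bytes:
    payload = json.dumps(msg).encode()
    return struct.pack(">I", len(payload)) + payload

def decode_frame(data: bytes) -> dict:
    (length,) = struct.unpack(">I", data[:4])
    return json.loads(data[4:4 + length])

frame = encode_frame({"cmd": "restart", "unit": "worker-1"})
msg = decode_frame(frame)
```

With framing in place, the receiver reads exactly 4 bytes, learns the payload size, then reads exactly that many more, which works even when TCP delivers the stream in arbitrary chunks.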

Another, more performant possibility is to use named pipes. The upside is much lower "in-flight" time for comms. The downside is that they are a lot trickier to work with, and can potentially backfire if you opt to use blocking IO.
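In Python, `multiprocessing.connection` abstracts this transport: on Windows an address string like `r'\\.\pipe\myservice'` is served as a named pipe, while a `(host, port)` tuple uses a socket. The sketch below uses a loopback address so it runs anywhere; note that `recv()` blocks, which is exactly the blocking-IO caveat above:

```python
import threading
from multiprocessing.connection import Client, Listener

# On Windows, address could instead be r'\\.\pipe\myservice' for a
# real named pipe; loopback TCP is used here so the demo is portable.
listener = Listener(("127.0.0.1", 0), authkey=b"secret")

def service():
    conn = listener.accept()
    cmd = conn.recv()                      # blocking read
    conn.send({"ok": True, "echo": cmd})   # objects are pickled for us
    conn.close()

threading.Thread(target=service, daemon=True).start()

client = Client(listener.address, authkey=b"secret")
client.send("status")
reply = client.recv()
client.close()
listener.close()
```

The `authkey` handshake gives you a basic shared-secret check on top of whichever transport you pick.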

The final, more obscure possibility is to use shared memory. On Windows this is possible via the File mapping API. I honestly do not recommend doing this unless you want to hide your communication from the OS for whatever reason (a lot of malware uses a similar technique to obfuscate their IPC comms). I left it here as a technical possibility.
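For completeness, here is what the shared-memory route looks like in Python's standard library (a stand-in for the Win32 file-mapping API; block size and contents are arbitrary). One process creates a named block, the other attaches to it by name:

```python
from multiprocessing import shared_memory

# Creator side: allocate a named shared-memory block.
shm = shared_memory.SharedMemory(create=True, size=64)
data = b"status:running"
shm.buf[:len(data)] = data

# Peer side: a second process would attach using the same name.
peer = shared_memory.SharedMemory(name=shm.name)
readback = bytes(peer.buf[:len(data)])
peer.close()

shm.close()
shm.unlink()  # creator releases the block when done
```

Note that shared memory gives you bytes, not messages: you would still need your own synchronisation (events, mutexes) and framing on top, which is part of why it is rarely worth it for a control interface.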

How would you distinguish between the *.exe being run as a service or a CLI?

Not too sure if I follow, because I'm working off the assumption that these are two distinct binaries. This is probably the approach you want to take anyhow if you are going to proceed with this. Much better to decouple the user interface from the actual service.

Are there other things I need to consider that I might not have in my mind yet?

When it comes to developing, I encourage you to first forget about the technical details and draft up a rigorous "API contract", that is, a set of rules defining how the CLI (client) communicates with the service (server). This will help you choose the correct IPC implementation. For example, if you need to send a lot of large binary data across, then TCP might be superior. If there are a lot of parameters that need to be passed around, then HTTP might be better suited. Once that is done, figure out how to serialise/deserialise data and go from there.
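A contract can start very small. This hedged Python sketch (the version field, command names, and shape are all invented for illustration) pins down the message schema before any transport is chosen, and rejects anything outside it:

```python
import json

# Hypothetical contract: every message carries a schema version,
# a command from a fixed set, and a dict of named arguments.
ALLOWED_COMMANDS = {"status", "restart", "reload-config"}

def serialize(cmd: str, **args) -> bytes:
    if cmd not in ALLOWED_COMMANDS:
        raise ValueError(f"unknown command: {cmd}")
    return json.dumps({"v": 1, "cmd": cmd, "args": args}).encode()

def deserialize(raw: bytes) -> dict:
    msg = json.loads(raw)
    if msg.get("v") != 1 or msg.get("cmd") not in ALLOWED_COMMANDS:
        raise ValueError("invalid message")
    return msg

msg = deserialize(serialize("restart", unit="worker-1"))
```

Because both sides validate against the same schema, you can later swap the transport (sockets, pipes, HTTP) without touching the CLI's or the service's command handling.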
