What IPC method should I use between Firefox extension and C# code running on the same machine?

I have a question about how to structure communication between a (new) Firefox extension and existing C# code.

The Firefox extension will use configuration data and will produce other data, so it needs to get the config data from somewhere and save its output somewhere. The data is produced and consumed by existing C# code, so I need to decide how the extension should interact with that code.

Some pertinent factors:

  • It's only running on Windows, in a relatively controlled corporate environment.
  • I have a Windows service running on the machine, built in C#.
  • Storing the data in a local datastore (like sqlite) would be useful for other reasons.
  • The volume of data is low, e.g. 10 KB of uncompressed XML every few minutes, and the exchange isn't very 'chatty'.
  • The data exchange can be asynchronous for the most part if not completely.
  • As with all projects, I have limited resources so want an option that's relatively easy.
  • It doesn't have to be ultra-high performance, but shouldn't add significant overhead.
  • I'm planning on building the extension in JavaScript (although I could be convinced otherwise if really necessary).

Some options I'm considering:

  1. use an XPCOM to .NET/COM bridge
  2. use a sqlite db: the extension would read from and save to it. The C# code would run in the service, populating the db and then processing the data created by the extension.
  3. use TCP sockets to communicate between the extension and the service. Let the service manage a local data store.

My problem with (1) is that I suspect it will be tricky rather than easy, but I could be completely wrong. The main problem I see with (2) is sqlite's locking: only a single process can write at a time, so there'd be some blocking. However, it would generally be nice to have a local datastore, so this is an attractive option if the performance impact isn't too great. I don't know whether (3) would be particularly easy or hard, or what approach to take on the protocol: something custom or HTTP.

Any comments on these ideas or other suggestions?

UPDATE: I was planning on building the extension in JavaScript rather than C++


I would personally use named pipes for the communication instead of sockets. They're very low overhead, and very reliable on Windows.

They're very easy to use from both C++ and C#.

  1. Use the first option if you need any sort of RPC. Otherwise you'll find yourself writing an RPC language, validation, message construction/deconstruction, etc., which is a little overboard for something on a local machine.

  2. Best option if you have a very passive plugin. The database as a third component entirely decouples the two processes, which is great for a lot of things, including, as mentioned above, async operation, testing, ease of implementation, etc. Probably a silly idea if you want to do a lot of message passing.

  3. Probably the best option overall for most things. TCP/IP is nice from the standpoint of sending stuff across the Internet, but you don't really want two different IP addresses or to mess around with setting up web servers and possible port conflicts. Pipes make more sense, or some other similar serial communication model. It decouples well, it can be entirely async (TCP/IP is async, normal HTTP is not, pipes are too), it's very easy to test (assuming you don't have to write any of the protocol yourself, of course), and it couldn't care less about the code base. Which means that tomorrow, if your C# backend turns into, say, a Ruby one or a Python one, the entire thing still 'just works'. It's better than sqlite as well, since you don't have to worry about packaging an entire library and database with your plugin.

The only downsides to the third option are (one) that things will be async but have to stay responsive and active, whereas sqlite lets things be mostly passive and isn't fazed by you shutting your computer down for a week; and (two) it's not amazing for RPC either: if you want that, you again end up inventing your own protocol or dealing with something like SOAP and WSDL.

Well, if you're going to use JavaScript, I don't see a way to use named pipes or other system-dependent communication other than writing a proxy component in C++ that gives you direct access to the OS API. On the other hand, if you plan to use TCP/UDP for IPC it will be much easier for you, because Firefox provides socket services that you can use easily from a JavaScript component.

If blocking is your concern, you can use asynchronous socket communication or the threading services to avoid locking up Firefox's GUI, but be aware that many objects are accessible only from Firefox's main thread.

The option I selected was #2: use a sqlite db. Main advantages being:

  • possible to implement in JavaScript
  • using a sqlite db is useful for other reasons
  • asynchronous communication improves performance: the C# code can cache all the information the Firefox extension requires rather than having to prepare it on demand, and the FF extension can save all data back to the sqlite db rather than needing it handled immediately by the C# code.
  • the separate layer provides a nice testing point, e.g. it's possible to run only the FF code and verify the expected results in sqlite, instead of needing a test harness that operates across both FF and C#.

Clearly some of these are scenario-dependent, so I would definitely not say this is the best general-purpose option for communication between a FF extension and a C# service.

UPDATE: We used just a sqlite db initially, and then wanted some synchronous communication, so we later exposed an HTTP web service from the C# Windows service that is called by the FF extension. This web service is now consumed by both the FF extension and other browser extensions. It's nice that it's a web service, as that makes it easy for different apps in different languages to consume.
