In many applications, the ability to dynamically load and execute third-party code (in the form of plug-ins) is highly desirable. Plug-ins can be used to provide:

  • Alternative implementations of built-in features
  • Completely new features (given some kind of framework on which they can be built)
  • All functionality in the application (e.g. in a test harness, compiler or other modular system)

There are a number of mechanisms within the .NET Framework to facilitate plug-in code:

Reflection enables assemblies to be dynamically loaded (removing compile-time references), and to iterate through the types in an assembly. To build a plug-in system based entirely on reflection, however, would be limiting, unreliable and very slow. The overheads involved in calling methods via reflection are high. Also, in the absence of a compile-time reference to an object, you lack the ability to verify whether the object contains the method/property you’re trying to access.

Interfaces are one of the key primitives in object-oriented programming. They allow you to define the public methods, events and properties of a type without specifying how it should be implemented; or indeed, how it should behave. You can pass around a reference to an object using only its interface type, and still be able to do everything with the object that you could do with a concrete class (save for instantiating it, of course).

An obvious way to implement a plug-in system, therefore, would be to define a set of interfaces that were common to both the application and its plug-ins, implement them in the plug-ins and load them into the application using reflection. The advantage of this approach is that you have a contract at compile-time against which you can guarantee that the methods/properties you’re accessing exist on the object you’ve loaded.

There is quite a significant downside to this approach, however: You are loading third-party, potentially malicious code directly into your application’s memory space. The plug-in code could use reflection to access and manipulate everything in your application, not to mention crash it. It’s not hard to see why this would be a bad idea.

Application domains are an important (though perhaps not widely-understood) part of the .NET Framework. The vast majority of applications will only ever use a single AppDomain but, when utilised, they can be very powerful. Application domains sit at a high level in the runtime, providing a memory space into which the code you reference and execute is loaded. The only way to access managed objects from outside their AppDomain is to serialise them (a process over which you can exercise a lot of control) or to marshal them via remoting. An AppDomain can be secured to prevent another AppDomain from seeing inside it, loading assemblies and reflecting types. It can also raise and handle its own exceptions, keeping it isolated from the main process. This is definitely a solid foundation for a plug-in system.

It makes a lot of sense to load any plug-in code into a separate AppDomain, then ferry objects between the two domains in a controlled, sandboxed fashion. Some general rules to consider when writing plug-in code are:

  • Use binary serialisation (the Serializable attribute or the ISerializable interface) when passing data between two application domains. Only ever pass an instance of a type that is common to both domains; e.g. a standard .NET type or a type defined in an assembly that is referenced by both domains. Any operations performed on serialisable types will run in the AppDomain that calls them (i.e. the main application).
  • Use remoting (handled transparently by MarshalByRefObject) when calling methods or raising events. Operations performed on remotable types will run in the AppDomain in which they are instantiated (i.e. the plug-in domain).
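To illustrate the distinction between the two rules, a plug-in contract might pair a serialisable data type with a remotable service type. This is just a sketch; the class names here are hypothetical:

```csharp
using System;

// Passed by value: a serialised copy is created in the receiving AppDomain,
// and any members you call on it run in the caller's domain.
[Serializable]
public class PluginInfo {
    public string Name { get; set; }
    public Version Version { get; set; }
}

// Passed by reference: the receiver gets a transparent proxy, and every
// call is marshalled back to the AppDomain where the object was created.
public class PluginService : MarshalByRefObject {
    public PluginInfo GetInfo() {
        return new PluginInfo { Name = "Sample", Version = new Version(1, 0) };
    }
}
```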

Why Not WCF?

At this point, you might well ask, “Why not use WCF as the basis for a plug-in system?”. It’s true, some developers do advocate this practice, but I personally do not. WCF imposes a number of show-stopping limitations on plug-in code:

  • WCF handles events poorly, requiring callback interfaces to be manually defined and wired up. Remoting handles events transparently.
  • Operations on WCF services can only exchange serialisable data. You can’t, for example, pass a remotable object to a WCF service to enable two-way communication.
  • WCF is primarily designed to be stateless. Plug-in code is almost always stateful. Although WCF handles sessions and concurrency, these can be difficult to use.
  • WCF uses XML serialisation based on public members of a type. Remoting uses binary serialisation and has the necessary permissions to access private members of a type.
  • WCF is optimised for interprocess and network-based communication, not communication between two application domains within the same process.

And, of course, it’s worthwhile to note that cross-AppDomain marshalling is itself built on .NET Remoting; remoting is under no threat of deprecation for this purpose, as it is a fundamental part of the framework.

More About MarshalByRefObject and Remoting

MarshalByRefObject is essential to any non-trivial cross-AppDomain functionality. It is handled specially by the .NET Framework; all you have to do is inherit from MarshalByRefObject and the framework will generate a transparent proxy for your class, automatically marshalling calls between the application domains for you.

Some important things to remember about writing classes that extend MarshalByRefObject:

  • Any objects you pass to or return from a MarshalByRefObject must be serialisable, or themselves a MarshalByRefObject.
  • You must remember to mark any types you derive from Exception or EventArgs with the [Serializable] attribute.
  • IEnumerable sequences created using the yield statement or LINQ cannot cross an application domain (because the compiler does not mark them as serialisable). Don’t return sequences from a MarshalByRefObject; instead, copy the elements into a collection (or use the ToList() method) and return the collection.
  • If you pass a delegate to a MarshalByRefObject – and the method it points to is in a different AppDomain – you must ensure that the method belongs to a MarshalByRefObject as well. Do not pass delegates to static methods, because they will be called on the wrong AppDomain (since there is no object instance to marshal the call to).
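The point about sequences trips people up often, so here is a sketch of the fix (hypothetical class): a remotable class that would otherwise return a lazy LINQ query should materialise it into a serialisable collection first.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class PluginCatalog : MarshalByRefObject {
    private readonly List<string> names = new List<string> { "Alpha", "Beta", "Gamma" };

    // BAD: the compiler-generated iterator/LINQ sequence is not serialisable,
    // so this would throw when the result crosses the AppDomain boundary:
    // public IEnumerable<string> GetLongNames() {
    //     return names.Where(n => n.Length > 4);
    // }

    // GOOD: List<T> is serialisable, so a copy of the data crosses safely.
    public List<string> GetLongNames() {
        return names.Where(n => n.Length > 4).ToList();
    }
}
```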

There are some other common types that can’t cross application domain boundaries:

  • DataObject (used for drag-and-drop, clipboard and other OLE functionality) is neither a MarshalByRefObject nor serialisable. You can either extract the data and pass it directly to the other AppDomain, or create a wrapper that implements IDataObject and inherits from MarshalByRefObject.
  • Image/Bitmap, although marked as serialisable, may not cross AppDomain boundaries. You should pass a Stream or byte[] containing the image data instead.
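In the Image/Bitmap case, a simple approach is to serialise the encoded bytes of the image and reconstruct it on the other side — a sketch, assuming a reference to System.Drawing:

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

static class ImageTransfer {
    // Encode the image to PNG in memory; byte[] is serialisable and
    // crosses AppDomain boundaries without issue.
    public static byte[] ToBytes(Image image) {
        using (MemoryStream stream = new MemoryStream()) {
            image.Save(stream, ImageFormat.Png);
            return stream.ToArray();
        }
    }

    // Reconstruct the image in the receiving AppDomain. (The stream is
    // deliberately not disposed; GDI+ requires it for the image's lifetime.)
    public static Image FromBytes(byte[] data) {
        return Image.FromStream(new MemoryStream(data));
    }
}
```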

Lifetime Services and ISponsor

To further complicate matters, remoting uses lifetime services (rather than ordinary generational garbage collection) to determine when instances of a MarshalByRefObject should be cleaned up. By default, you have a window of 5 minutes in which to use an object obtained from a foreign AppDomain before the proxy becomes disconnected and an exception is thrown upon access. This is a necessary evil, because the garbage collector can’t count references to a remotable object inside a different application domain; that would break the isolation provided by application domains in the first place.

There are two methods to get around this, however:

  • Override the InitializeLifetimeService() method and return a null reference. This instructs remoting not to clean up instances of your object in another AppDomain. This has the potential to create memory leaks, so you can really only use this technique for singleton classes.
  • Obtain the lifetime service object (ILease) from the MarshalByRefObject using the RemotingServices class and register an ISponsor object to keep the instance alive.
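The first option is a one-line override, suitable only for objects intended to live as long as their AppDomain:

```csharp
using System;

public class PluginLoader : MarshalByRefObject {
    // Returning null gives this object an infinite lease: remoting will
    // never disconnect the proxy. Because the instance can then never be
    // collected while its AppDomain lives, reserve this for singletons.
    public override object InitializeLifetimeService() {
        return null;
    }
}
```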

Sponsorship works by renewing the lease on a MarshalByRefObject: remoting periodically calls the Renewal() method on each registered ISponsor, which returns a TimeSpan indicating how much longer the object is needed. Renewal stops when the sponsor returns TimeSpan.Zero, or when the sponsor is unregistered.
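A minimal sponsor, then, might renew the lease by a fixed amount each time it is asked — a sketch:

```csharp
using System;
using System.Runtime.Remoting.Lifetime;

// Serialisable, so a copy of the sponsor can live alongside the lease
// in the foreign AppDomain.
[Serializable]
public class SimpleSponsor : ISponsor {
    private bool isAlive = true;

    // Call this when the remotable object is no longer needed.
    public void Release() {
        isAlive = false;
    }

    // Invoked periodically by the remoting infrastructure. Returning
    // TimeSpan.Zero lets the lease expire and the proxy disconnect.
    public TimeSpan Renewal(ILease lease) {
        return isAlive ? TimeSpan.FromMinutes(2) : TimeSpan.Zero;
    }
}
```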

// register a sponsor
object lifetimeService = RemotingServices.GetLifetimeService(myMarshalByRefObject);
if (lifetimeService is ILease) {
    ILease lease = (ILease)lifetimeService;
    lease.Register(mySponsor);
}

// unregister a sponsor
object lifetimeService = RemotingServices.GetLifetimeService(myMarshalByRefObject);
if (lifetimeService is ILease) {
    ILease lease = (ILease)lifetimeService;
    lease.Unregister(mySponsor);
}

In practice, what this means is that you should hold a reference to a sponsor for any MarshalByRefObject you obtain from another AppDomain for as long as you need to access the object. When the sponsor object becomes eligible for garbage collection, it will also take out the remotable object which it sponsors. Ideally, implementations of ISponsor should be serialisable.

In my implementation of a plug-in system, I created a convenient generic class, Sponsor&lt;TInterface&gt;, which is simultaneously responsible for registering/unregistering a sponsor, accessing the remotable object itself and providing the renewal logic. You hold a reference to the sponsor object in your class, then call its Dispose() method when the remotable object is no longer needed. My plug-in system centres around the Sponsor class, ensuring that objects from the plug-in AppDomain are always wrapped in a Sponsor instance and never returned directly to user code without one.
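The full class is included in the download, but a minimal sketch of the idea looks something like this (the details of the real implementation may differ):

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Lifetime;

// Wraps a remotable object and keeps its lease alive until disposed.
// Derives from MarshalByRefObject so the foreign lease can call back
// into this sponsor across the AppDomain boundary.
public class Sponsor<TInterface> : MarshalByRefObject, ISponsor, IDisposable
    where TInterface : class {

    public TInterface Instance { get; private set; }

    public Sponsor(TInterface instance) {
        Instance = instance;
        ILease lease = (ILease)RemotingServices.GetLifetimeService(
            (MarshalByRefObject)(object)instance);
        if (lease != null) lease.Register(this);
    }

    // Called periodically by remoting; keep renewing until disposed.
    public TimeSpan Renewal(ILease lease) {
        return (Instance != null) ? lease.InitialLeaseTime : TimeSpan.Zero;
    }

    public void Dispose() {
        if (Instance != null) {
            ILease lease = (ILease)RemotingServices.GetLifetimeService(
                (MarshalByRefObject)(object)Instance);
            if (lease != null) lease.Unregister(this);
            Instance = null;
        }
    }
}
```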

Design for a Plug-In System

As I have alluded to, a plug-in system based on reflection, interfaces, remoting and sponsors is built around two application domains. The main AppDomain uses the PluginHost class to create the plug-in AppDomain and remotely instantiate PluginLoader, the class that loads plug-ins and instantiates remotable objects:

// create another AppDomain for loading the plug-ins
AppDomainSetup setup = new AppDomainSetup();
setup.ApplicationBase = Path.GetDirectoryName(typeof(PluginHost).Assembly.Location);

// plug-ins are isolated on the file system as well as the AppDomain
setup.PrivateBinPath = @"%PATH_TO_BINARIES%\Plugins";

setup.DisallowApplicationBaseProbing = false;
setup.DisallowBindingRedirects = false;

AppDomain domain = AppDomain.CreateDomain("Plugin AppDomain", null, setup);

// instantiate PluginLoader in the other AppDomain
PluginLoader loader = (PluginLoader)domain.CreateInstanceAndUnwrap(
    typeof(PluginLoader).Assembly.FullName,
    typeof(PluginLoader).FullName
);

// since PluginLoader was loaded from another AppDomain, we must sponsor
// it for as long as we need it
Sponsor<PluginLoader> sponsor = new Sponsor<PluginLoader>(loader);

PluginLoader dynamically loads the plug-in assemblies (located in a subdirectory) into the plug-in AppDomain:

foreach (string dllFile in Directory.GetFiles(pluginPath, "*.dll")) {
    Assembly asm = Assembly.LoadFile(dllFile);
    Assemblies.Add(asm);    // retain the assembly for type discovery later
}

PluginLoader keeps a cache of ConstructorInfo objects for each interface implementation it discovers, so it can quickly instantiate objects. It exposes GetImplementations (returns IEnumerable<TInterface>) and GetImplementation (returns the first implementation of TInterface).

private IEnumerable<ConstructorInfo> GetConstructors<TInterface>() {
    if (ConstructorCache.ContainsKey(typeof(TInterface))) {
        return ConstructorCache[typeof(TInterface)];
    }
    else {
        LinkedList<ConstructorInfo> constructors = new LinkedList<ConstructorInfo>();

        // find every concrete class that implements TInterface and has a
        // public parameterless constructor
        foreach (Assembly asm in Assemblies) {
            foreach (Type type in asm.GetTypes()) {
                if (type.IsClass && !type.IsAbstract) {
                    if (type.GetInterfaces().Contains(typeof(TInterface))) {
                        ConstructorInfo constructor = type.GetConstructor(Type.EmptyTypes);
                        if (constructor != null) constructors.AddLast(constructor);
                    }
                }
            }
        }

        ConstructorCache[typeof(TInterface)] = constructors;
        return constructors;
    }
}

private TInterface CreateInstance<TInterface>(ConstructorInfo constructor) {
    return (TInterface)constructor.Invoke(null);
}

public IEnumerable<TInterface> GetImplementations<TInterface>() {
    LinkedList<TInterface> instances = new LinkedList<TInterface>();

    foreach (ConstructorInfo constructor in GetConstructors<TInterface>()) {
        instances.AddLast(CreateInstance<TInterface>(constructor));
    }

    return instances;
}
PluginHost calls the GetImplementation/GetImplementations methods on PluginLoader to return transparent proxies to the remotable objects instantiated from the plug-ins. It wraps them in a Sponsor instance and returns them to the user. PluginHost also handles reloading/unloading of the AppDomain.

Putting It All Together

The general usage pattern for my plug-in system would be:

  1. Create a series of interfaces and place them in a common assembly.
  2. Create one or more plug-in assemblies containing types that implement the interfaces.
  3. Create an application which references only the common assembly.
  4. Instantiate PluginHost, passing the path to load plug-ins from.
  5. Call the LoadPlugins() method and check for success.
  6. Instantiate implementations of the plug-in interfaces using the GetImplementations() or GetImplementation() methods.
  7. Keep a reference to the Sponsor<TInterface> object returned from the above methods until the object is no longer required.
  8. Unload the plug-in AppDomain by calling Dispose() on the PluginHost object.
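The steps above can be sketched as follows. The exact signatures of PluginHost and its methods may differ slightly in the download; IPlugin is a hypothetical interface standing in for your common assembly's contracts:

```csharp
using System;

// defined in the common assembly referenced by both application and plug-ins
public interface IPlugin {
    string Name { get; }
    void Execute();
}

class Program {
    static void Main() {
        // PluginHost creates the plug-in AppDomain and loads assemblies
        // from the specified directory
        using (PluginHost host = new PluginHost(@"C:\MyApp\Plugins")) {
            if (host.LoadPlugins()) {
                // the returned Sponsor keeps the remote proxy alive;
                // hold it until the plug-in is no longer needed
                using (Sponsor<IPlugin> plugin = host.GetImplementation<IPlugin>()) {
                    plugin.Instance.Execute();   // runs in the plug-in AppDomain
                }
            }
        }   // disposing the host unloads the plug-in AppDomain
    }
}
```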

You can see an example of this in the included example project.

Final Words

Loading third-party code in a separate application domain is widely regarded as best practice. Hopefully, this task is greatly simplified through the use of the plug-in system I’ve provided. It is, of course, simply a proof-of-concept implementation. Other things you might want to consider would be:

  • Applying security to the plug-in AppDomain to further sandbox the environment.
  • Filtering the plug-in assemblies loaded; either according to digital signatures, implementation of marker interfaces or particular metadata.
  • Making metadata about the plug-ins available to calling code.
  • Handling exceptions more gracefully.

In any event, I hope it demonstrates the basic idea behind cross-AppDomain programming in .NET.

Download (Visual Studio 2010 solution, zipped)

David Anderson recently responded to a question I asked on Stack Overflow regarding how to access binary resources in a .resx file as a Stream rather than a byte[] (which ResourceManager exposes by default). His answer goes into detail about how ResourceManager always copies the data into memory, and how to use manifest resource streams to overcome this problem:

Getting a stream to a resource without copying the contents into memory on David Anderson’s blog
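For a taste of the technique: a file embedded directly in the assembly (build action “Embedded Resource”, rather than a .resx entry) can be opened as a stream without copying its contents into memory. The resource name below is hypothetical; it is the project's default namespace plus the file's path.

```csharp
using System.IO;
using System.Reflection;

class ResourceDemo {
    static Stream OpenSample() {
        // streams the data directly from the assembly image;
        // returns null if the name doesn't match an embedded resource
        return Assembly.GetExecutingAssembly()
            .GetManifestResourceStream("MyApp.Data.sample.bin");
    }
}
```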


I’m currently doing some work in the area of cross-AppDomain development, where all objects are either marshalled (transparently) using .NET Remoting or serialised. It turns out that, when DataSet objects are serialised, their extended properties are serialised as strings. This means that, when operating on a DataSet which has come from another AppDomain, the complex data type you added as an extended property has unexpectedly become its string representation.

There’s a quick and easy solution to this problem: manually serialising extended properties using the built-in .NET XML serialisation mechanism. However, not all .NET types are fully supported by the XML serialiser, as I discovered while trying to serialise an object with a property of type System.Drawing.Color. Here’s a simple workaround:


The solution is two-fold; firstly, we should exclude the Color property from XML serialisation (using the XmlIgnore attribute), because we know it’s not supported:

[XmlIgnore]
public Color TextColor { get; set; }

Next, we need to provide an additional public read/write property that is XML-serialisable:

public string TextColorHtml {
    get { return ColorTranslator.ToHtml(TextColor); }
    set { TextColor = ColorTranslator.FromHtml(value); }
}
The property needs to be public and provide both a getter and a setter, because the XML serialiser will invoke it and can only access public members; it cannot read private/protected/internal properties or set the values of backing variables directly. I’ve chosen to serialise the HTML representation of the colour, because it is already XML-friendly and translates back and forth easily.

The advantage to using this approach is that you do not have to change the representation used by your class; the Color structure is very useful and very pervasive in .NET. Also, since the additional property has no backing variable (it is entirely procedural), no additional memory is needed to store each instance of the type.
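Putting the two properties together, a round-trip through XmlSerializer looks like this (the containing class name is hypothetical):

```csharp
using System;
using System.Drawing;
using System.IO;
using System.Xml.Serialization;

public class LabelStyle {
    // excluded from XML serialisation; Color is not supported directly
    [XmlIgnore]
    public Color TextColor { get; set; }

    // serialised in place of TextColor; HTML notation round-trips cleanly
    public string TextColorHtml {
        get { return ColorTranslator.ToHtml(TextColor); }
        set { TextColor = ColorTranslator.FromHtml(value); }
    }
}

class Demo {
    static void Main() {
        LabelStyle style = new LabelStyle { TextColor = Color.DarkRed };

        XmlSerializer serializer = new XmlSerializer(typeof(LabelStyle));
        StringWriter writer = new StringWriter();
        serializer.Serialize(writer, style);

        LabelStyle copy = (LabelStyle)serializer.Deserialize(
            new StringReader(writer.ToString()));
        Console.WriteLine(copy.TextColor.ToArgb() == style.TextColor.ToArgb());  // True
    }
}
```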

From time to time, it’s a good idea to go back and re-examine old code. With time comes greater knowledge and new tricks; and in the case of two of my earlier custom Windows Forms controls, this was certainly true. You may remember my ComboBox control with grouping support, as well as my ComboBox with a TreeView drop-down – I have revisited these controls, with particular mind to implementing what I learnt from my experience with the Buffered Paint API.

New versions of GroupedComboBox and ComboTreeBox

GroupedComboBox

To refresh your memory, this control extends ComboBox and adds a new property; GroupMember. Using PropertyDescriptor and owner-drawing, it groups together items whose grouping values are equal.

What’s new:

  • On Windows Vista and 7, the control did not appear in the ‘button style’ consistent with normal ComboBox controls in DropDownList mode; this is a limitation of owner-drawing, but one that can be overcome through the use of VisualStyleRenderer. This is a slightly dodgy hack in that it pre-supposes the appearance of the ComboBox in this mode, but the functionality is only applied when the OS is identified as Vista or 7.
  • The control is inconsistent with other ComboBox controls because it is not animated. With the BufferedPainter<T> class I developed, implementing buffered animation was simple.


ComboTreeBox

The ComboTreeBox is a control developed entirely from scratch (i.e. its base class is Control) that uses a hierarchical/tree structure as its data representation (instead of a flat list). As such, it has behaviours in common with both ComboBox and TreeView. The drop-down/pop-up portion of the control is implemented using ToolStripDropDown, bitmap caching and custom rendering.

What’s new:

  • As with GroupedComboBox, the control did not appear in the Windows Vista/7 style. As the control has no editable portion and is entirely owner-drawn, it was easy to simulate the appearance under the appropriate OS. Under XP, it falls back to the default visual style. When themes are disabled, it draws in the classic style.
  • The control was not animated. Once again, BufferedPainter<T> was used to implement animation.
  • Due to a strongly-typed reference to its designer class, the control previously did not work with the “Client Profile” version of the .NET Framework. Rather cheekily, I have substituted my custom designer for the built-in designer that the DomainUpDown control uses – this works well for constraining the height of the control and avoids the need to reference the System.Design assembly (included only with the full version of the framework).
  • Minor tweaks were made to the way focused items are drawn; this is only apparent if focus cues are enabled in Windows.


Please visit the project page here: Drop-Down Controls

The ABA file format, sometimes called Direct Entry or Cemtext, is an unpublished de facto standard for bulk electronic payments. It is used by all major Australian financial institutions to interface with business accounting systems, and is supported by many popular accounting packages such as MYOB. For small-to-medium businesses, the most common usage for an ABA file is to upload it via their Internet banking portal – to avoid manually entering transaction details into the web application. As the name suggests, the format is dictated by the Australian Banking Association, and also has ties with the Australian Payments Clearing Association (APCA).

I was recently tasked with writing an exporter from a proprietary, relational database system to the ABA format. Since the specifications for the file format are not in the public domain (at least, not from an official source), I had to rely on some third-party sources to arrive at a solution. In this article, I’ll share those resources with you, as well as providing my own explanation of some of the finer points of the standard – to help you avoid common pitfalls when working with the format.

Overview of the ABA format

The ABA format is somewhat archaic; certainly based on outdated technology. It’s a plain-text format that uses fixed-width fields (most of which are severely constrained, to the point that most data going into them must be truncated or abbreviated in order to fit). It is not supposed to be human-readable, although a trained eye can interpret it. Its values are untyped and there is nothing to validate the correctness of an ABA file if you are not privy to the rules.

An ABA file consists of:

  • One ‘type 0’ record (i.e. line) which describes the batch and provides processing instructions for it
  • One or more ‘type 1’ records which provide the detail for each payment in the batch
  • One ‘type 7’ record which confirms the size and totals for the batch

(i.e. an ABA file is at least 3 lines long.)

Within each record/line in the file, and depending on the record type, there are a number of fixed-width fields which must be populated. Some are padded with zeroes, others with spaces. Some occupy the left portion of the field, others (like numeric fields) are right-justified. A complete line must consist of exactly 120 characters (excluding the CR/LF terminator). It’s helpful to write a function to perform the necessary padding/truncation and justification:

// requires a reference to System.Linq
string ToFixedWidth(string input, int length, char padding = ' ', bool rightJustify = false) {
	char[] output = new char[length];
	if (rightJustify) input = new string(input.Reverse().ToArray());

	for (int i = 0; i < length; i++) {
		if (i < input.Length)
			output[i] = input[i];
			output[i] = padding;

	if (rightJustify) output = output.Reverse().ToArray();
	return new string(output);

// to pad a string out to 18 characters, left-justified and padded with spaces:
ToFixedWidth(str, 18);

// to pad a decimal number out to 10 characters, right-justified and padded with zeroes:
ToFixedWidth((num * 100).ToString("0000000000"), 10, '0', true);

Some additional notes:

  • Most examples I’ve seen capitalise all text fields, however this is not necessary. Some financial institutions will retain the casing of your narrations/annotations, if only on the sender’s bank statement.
  • All dollar amounts are expressed in whole cents, with no digit grouping. Multiply your decimal figures by 100 and use an appropriate number format string (e.g. “0000000000”).
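As an illustration of how these rules combine, a ‘type 1’ (detail) record can be assembled from fixed-width fields using the ToFixedWidth helper shown earlier. The field widths below are taken from third-party descriptions of the format, so verify them against your bank’s documentation before relying on them:

```csharp
using System.Text;

static string BuildDetailRecord(string bsb, string account, string txnCode,
        decimal amount, string title, string reference,
        string traceBsb, string traceAccount, string remitter) {
    StringBuilder sb = new StringBuilder(120);
    sb.Append('1');                                                  // record type
    sb.Append(ToFixedWidth(bsb, 7));                                 // payee BSB, e.g. "083-123"
    sb.Append(ToFixedWidth(account, 9, ' ', true));                  // payee account number
    sb.Append(' ');                                                  // indicator (blank)
    sb.Append(txnCode);                                              // e.g. "50" or "53"
    sb.Append(ToFixedWidth(((long)(amount * 100)).ToString(), 10, '0', true)); // amount in cents
    sb.Append(ToFixedWidth(title, 32));                              // title of payee account
    sb.Append(ToFixedWidth(reference, 18));                          // lodgement reference
    sb.Append(ToFixedWidth(traceBsb, 7));                            // trace record BSB
    sb.Append(ToFixedWidth(traceAccount, 9, ' ', true));             // trace record account
    sb.Append(ToFixedWidth(remitter, 16));                           // name of remitter
    sb.Append(ToFixedWidth("", 8, '0'));                             // withholding tax (none)
    return sb.ToString();                                            // exactly 120 characters
}
```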

Resources for the file format specification

The following are two excellent resources for the specification of the ABA file format itself:

Between these two documents, you should have all the information you need to construct the ABA records themselves. There is some inherent ambiguity in these, however, which is why I’d like to add the following notes:

Descriptive record

Reel sequence number

There is a maximum number of payments you can include per ABA file, if only because the dollar amount and number of records fields are of a fixed length. If you need to split your batch into multiple files, the reel sequence number identifies each one. However, in the vast majority of cases, you will only have one file, numbered 01. You do not have to keep track of which numbers you have used previously.

User Preferred Specification

This is one of two pieces of information that identify the user and their financial institution. Some banks will dictate the value for this field, others are less restrictive and will permit any form of the name on the user’s bank account. It’s best to make the value for this field user-maintainable.

User Identification Number

Contrary to its description, this number identifies the financial institution, not the user. Each FI has its own number (or possibly several, depending on the size of the bank) which is issued by APCA. This number does not change between batches; it’s tied to the bank and, potentially, the account. Most banks document this number somewhere in their internet banking portal instructions.

Detail records


Indicator

As far as I can tell, this field can be used to indicate that a particular record represents a change to a prior record in the batch, or to existing details held on file for the payee. It’s also clear that not every financial institution pays attention to this field. Unless you are dealing with thousands of payments at a time, you should leave this field blank to make it clear that each record represents a separate payment.

Transaction code

In most cases, you should use the code 50, which represents a non-specific credit to the bank account. For payroll transactions, use the code 53. The other transaction codes appear to be of relevance to the ATO and superannuation funds only.

Lodgement reference

This is the text that appears on the payee’s bank statement.

Trace record (BSB and Account No)

This is the BSB and Account Number of your bank account.

Name of remitter

If your bank supports it, this lets you track the name of the person in your organisation who authorised the payment.

Final words

Hopefully, this points you in the direction of the right resources and helps to dispel some of the confusion surrounding the ABA file format. By following the specification exactly, you will produce a file that can be read by all major Australian banks and has the potential to save time and reduce errors when processing large numbers of payments.

See also

ABA/Cemtext Validator – An ABA file validator and SDK based on this article