This is just a quick announcement that I’ve enabled comment voting on my blog. I often get interesting and thoughtful comments from people, and I’d like to be able to show my appreciation – and allow other users to do so as well. I’m not going to retrospectively vote on old comments, but from now on, I’ll be upvoting comments that are insightful and constructive. I encourage my readers to do the same!

Animation of force-based tessellation algorithm

When I first blogged about force-based algorithms, my focus was firmly on charting and graphing; specifically, producing aesthetically pleasing diagrams consisting of nodes and connectors. Force-based algorithms have many other applications in computer science, from graphics to simulations to problem-solving and beyond. In this post, I’m going to look at the application of a force-based algorithm to the task of arranging images in a tessellating pattern.

The forces in play

As with my force-directed diagram algorithm, this force-based algorithm requires at least two forces to be in play; if there is no opposing force, then the layout would either collapse or ‘fly away’ from the viewport. In this case, the dominant force is a pulling force that acts towards the middle of the screen; regardless of where an image tile is situated, it will be dragged towards the center point. In this way, the force is much like a localised gravitational field (although it does not strengthen as objects approach the point). The opposing force is that of collision; when one image tile would otherwise cross into the bounds of another, a reaction force is applied in the opposite direction (some of the energy is absorbed during the collision, making it only partially elastic).

Furthermore, when a collision occurs, image tiles are not permitted to overlap. Instead, one of three things will happen:

  • The tile will slide against the horizontal edge of the colliding tile (towards the center)
  • The tile will slide against the vertical edge of the colliding tile (towards the center)
  • The tile will remain in its original position

The choice of action will be determined according to whether the tile’s path is unobstructed.

Together, these forces encourage the image tiles to reach a stable configuration, in which they are clustered together with minimal gaps between them.

The algorithm

LET center_screen represent the middle of the screen
LET damping represent the factor by which velocity decreases between iterations
LET absorbance represent the factor by which energy is absorbed during a collision

INITIALISE each image tile with a random (or uniformly-distributed) placement

REPEAT
    FOR EACH image tile:
        LET net_force equal zero
        LET pulling_force act at the angle formed between the image tile and center_screen

        SET net_force = net_force + pulling_force
        SET velocity{i} = (velocity{i-1} * damping) + net_force
        SET position{i} = position{i-1} + velocity{i}

        IF a collision occurs
            LET reaction_force act at the angle formed between the colliding image tiles

            SET net_force = (net_force * absorbance) + reaction_force
            SET velocity{i} = (velocity{i-1} * damping) + net_force

            LET proposed_horizontal represent the coordinate (position{i}.x, position{i-1}.y)
            LET proposed_vertical represent the coordinate (position{i-1}.x, position{i}.y)

            IF proposed_horizontal collides with no other image tile
                SET position{i} = proposed_horizontal
            ELSE IF proposed_vertical collides with no other image tile
                SET position{i} = proposed_vertical
            ELSE
                SET position{i} = position{i-1}
            END IF
        END IF
    END FOR
UNTIL (total displacement drops below threshold) OR (maximum iterations reached)

The force, velocity and position variables are represented using vectors. Collisions take into account the entire bounds of each tile (implemented using Rectangle.IntersectsWith() in .NET).

The initial placement of the image tiles is significant; they must be placed in such a manner that they do not overlap. The implementation offers two initial arrangement types; random (in which positions are completely randomised), or uniform (where image tiles are assigned to positions on a uniform grid).
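To make the idea concrete, the loop above can be sketched as a small runnable simulation. The following Python sketch is a hypothetical, simplified re-implementation (axis-aligned rectangles, a constant-magnitude pull and illustrative damping/absorbance constants), not the actual .NET code:

```python
import math

DAMPING = 0.8        # fraction of velocity retained between iterations
ABSORBANCE = 0.5     # energy retained after a collision (partially elastic)
THRESHOLD = 0.5      # stop once total displacement per iteration falls below this
MAX_ITERATIONS = 500

class Tile:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = float(x), float(y), w, h
        self.vx = self.vy = 0.0

    def intersects(self, other):
        # strict inequalities: tiles that merely touch do not collide
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

def collides(tile, tiles):
    return any(tile is not other and tile.intersects(other) for other in tiles)

def arrange(tiles, cx, cy, pull=1.0):
    for _ in range(MAX_ITERATIONS):
        total_displacement = 0.0
        for t in tiles:
            # the pulling force acts along the angle between the tile and the centre
            angle = math.atan2(cy - (t.y + t.h / 2), cx - (t.x + t.w / 2))
            t.vx = t.vx * DAMPING + pull * math.cos(angle)
            t.vy = t.vy * DAMPING + pull * math.sin(angle)
            ox, oy = t.x, t.y
            # try the full move, then sliding along one axis, then staying put
            for px, py in ((ox + t.vx, oy + t.vy),   # unobstructed move
                           (ox + t.vx, oy),          # slide along horizontal edge
                           (ox, oy + t.vy),          # slide along vertical edge
                           (ox, oy)):                # remain in original position
                t.x, t.y = px, py
                if not collides(t, tiles):
                    break
            if (t.x, t.y) != (ox + t.vx, oy + t.vy):
                # a collision occurred: absorb some of the energy
                t.vx *= ABSORBANCE
                t.vy *= ABSORBANCE
            total_displacement += abs(t.x - ox) + abs(t.y - oy)
        if total_displacement < THRESHOLD:
            break
    return tiles
```

Here `arrange()` mutates the tiles in place; the real implementation works with vector types and Rectangle bounds instead.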

The implementation

The algorithm is implemented in the ImageCollage class. Each instance of ImageCollage holds a collection of ImageInfo objects, which represent the image tiles. The main members of ImageCollage are:


  • Images – Collection of ImageInfo objects.
  • Arrange() – Runs the force-based algorithm.
  • Distribute() – Responsible for the initial placement of image tiles.
  • Render() – Renders the montage using the image files and resulting layout.


Each ImageInfo object exposes:

  • Path – Path to the image file.
  • OriginalWidth, OriginalHeight – Dimensions of the image, used to preserve the aspect ratio.
  • Bounds – The bounds of the image tile (will change during layout).
  • Velocity – The velocity of the image tile during the layout algorithm.
  • RelativeSize – Controls the size of the image tile with respect to the others on the canvas.

The full implementation can be found on the project page for the Image Collage Generator, where I have developed a complete Windows Forms application around this concept.

The results

The implementation of the force-based algorithm for image tessellation appears to work well. It copes with small numbers of images as well as larger counts (150+ images) without significant degradation in performance or layout quality. Further enhancements would be required to cope with very large numbers of images (500+); the algorithm was designed with smaller collections in mind.

In general, uniform initialisation seems to produce fewer gaps than random initialisation; however, the random positioning tends to produce more organic-looking layouts. Depending on the variation in aspect ratio between the images used, one approach may produce better results than the other.

Final words

Be sure to check out the Image Collage Generator to see this force-based algorithm in action. Once again, the force-based approach lends itself well to a task which would otherwise require manual placement. Genetic algorithms may also be an alternative to the force-based approach; tessellation is a problem for which fitness criteria can be clearly evaluated. However, I would wager that a genetic algorithm would take longer to find a solution and be more computationally intensive than the force-based approach.

I hope you find this application of force-based algorithms useful. Perhaps you can think of other problems that could be solved in this manner?

Okay, so batch scripting may be regarded as a bit of a computing dinosaur these days (particularly following the rise of PowerShell)… but I would argue that it still has a legitimate place on the developer’s or power user’s workstation. Batch scripts are still well-suited to:

  • Startup scripts
  • Scheduled tasks
  • Macros
  • …and they even function well as small utilities, installers, configuration wizards, etc

Of course, batch files have come a long way since their first appearance in MS-DOS and OS/2; Windows NT (and later versions) expanded the range of built-in commands that were available, as well as offering a set of console programs to broaden what could be done from the command prompt. There are still a lot of operations (e.g. various shell commands, system configuration, etc) that cannot be included in batch files, but it is possible – sometimes requiring a bit of creativity – to do some very useful things in batch scripts.

This post focuses on a few practical examples relating to the management of desktop applications, including ClickOnce apps, mutually exclusive applications and running programs as Administrator.

Mutually exclusive applications

I keep several batch scripts on my desktop that each start a set of programs; for example, I have a developer script that launches Visual Studio, SQL Management Studio and a few third-party tools. I also have a social script that launches my twitter client, instant messenger, etc. Some of these scripts overlap (i.e. the same program is included in different scripts) and, also, I may already have one or two of these applications open when I run the batch file.

In light of these requirements, I need to start the programs on my list only if they are not already running. Thankfully, this is not too difficult to do in batch scripts:

tasklist /FI "IMAGENAME eq [process name]" 2>NUL | find /I /N "[process name]">NUL
if "%ERRORLEVEL%" NEQ "0" {cmd /c} start [path to executable]

Where [process name] is the name of the process as it appears in the Windows Task Manager; e.g. firefox.exe – and [path to executable] is either the name of the executable to run (if it falls within the PATH variable) or the full path to the program you want to run. Note that the ‘cmd /c’ is optional (see the explanation below).

So, how does this work?

  • The tasklist command is the command-line equivalent of the Windows Task Manager. By invoking it with the /FI switch, you can apply a filter. You can filter according to a range of criteria, such as the process name, user, window title, etc – here, we just want to get an exact match (using the eq operator) on the name of the process.

The output of the tasklist command looks like this:

C:\Windows\system32>tasklist /FI "IMAGENAME eq firefox.exe"

Image Name                     PID Session Name        Session#    Mem Usage
========================= ======== ================ =========== ============
firefox.exe                   2404 Console                    1    434,096 K
  • By piping the output of the tasklist command into the find command, we can determine whether there was a match; by using the /I and /N switches, we perform a case-insensitive match and place the success/failure in the ERRORLEVEL variable (which is used by most command-line tools for this purpose).
  • We don’t want to output the results of the find command to the console, so we redirect it to NUL (i.e. nowhere).
  • If the find command returns a match, the ERRORLEVEL will be zero; therefore, we only need to run the program if its value is non-zero, i.e. NEQ “0”.
  • Running a command with cmd /c ensures that the script will go on running without waiting for the operation to complete. Some programs (e.g. Firefox) will tie up the console window if started from the command line, and we want to avoid this.
  • The start command is the preferred way to run GUI applications, and can also be used to open documents and URLs – we use it here because it supports the widest variety of targets.
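For illustration only, the matching logic can be reproduced outside of batch. This Python helper (a hypothetical sketch, not part of the original scripts) applies the same case-insensitive substring match that `find /I` performs, either to captured output or by invoking tasklist itself on Windows:

```python
import subprocess

def is_running(image_name, tasklist_output=None):
    """Return True if the given image name appears in tasklist output.

    Mirrors `tasklist /FI ... | find /I`: a case-insensitive substring match.
    If no output is supplied, invoke tasklist directly (Windows only).
    """
    if tasklist_output is None:
        tasklist_output = subprocess.run(
            ["tasklist", "/FI", "IMAGENAME eq " + image_name],
            capture_output=True, text=True).stdout
    return image_name.lower() in tasklist_output.lower()
```

Note that when no process matches, tasklist prints an informational message that does not contain the image name, so the substring test behaves just like the ERRORLEVEL check in the batch version.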

Starting ClickOnce applications

Unlike normal executables, applications deployed using ClickOnce are started using a bootstrapper (which is normally handled by the shell). This includes applications written in Windows Forms, WPF or Silverlight. Shortcuts to ClickOnce apps have the extension .appref-ms (rather than .lnk for regular shortcuts), and these files are not recognised by the start command (as in the previous example).

Thankfully, they can be run using the following syntax:

rundll32.exe dfshim.dll,ShOpenVerbShortcut [path to appref-ms file]

Where [path to appref-ms file] is the full path to the ClickOnce shortcut file. The path may include environment variables such as %USERPROFILE% if, for example, you want to point to a shortcut on the user’s desktop.

How does this work?

  • The rundll32 command executes a function in a Win32 DLL as if it were an executable file. You might want to use this when you have a function which is predominantly called from code (but still operates independently), if you want to avoid creating many executables when a single DLL will suffice, or to deliberately obfuscate a command that users are not intended to start.
  • dfshim.dll is responsible for much of the functionality in ClickOnce; it contains functions to install, remove, start and update applications, and is distributed as part of the .NET Framework.
  • The name of the function we want is ShOpenVerbShortcut, which is the same function that the Windows shell uses to run .appref-ms shortcut files. You simply pass a path to the function and it takes care of the rest.

Running processes as Administrator

Unfortunately, when Microsoft designed User Account Control (UAC) in Windows Vista and beyond, they did not provide a convenient way to start processes with elevated permissions from the command line. This may be partly due to the anticipation that PowerShell would take over from batch scripting, however I still think it is a bit of an oversight.

The way that you would normally go about running a program with elevated permissions is to use the ShellExecute function in Win32. Instead of using a conventional verb like ‘Open’ or ‘Print’, you use the special runas verb. Unfortunately, this part of the API is not exposed to the command line, and its complex argument list prevents it from being called using rundll32 (as in the previous example).

Thankfully, there is a handy command-line tool called ShelExec, which you can download and place alongside batch scripts that need to run programs with elevated permissions. Using ShelExec, you can run a program as Administrator using the following syntax:

shelexec /verb:runas [path to exe]

Where [path to exe] is either the name of the process to execute (if in the PATH variable) or the full path to the executable file.

Putting it all together

Okay, so what if I want a script that starts my web browser, e-mail client, Visual Studio (which I need to run with elevated permissions) and my twitter client? Here’s the script:


tasklist /FI "IMAGENAME eq firefox.exe" 2>NUL | find /I /N "firefox.exe">NUL
if "%ERRORLEVEL%" NEQ "0" cmd /c start firefox

tasklist /FI "IMAGENAME eq outlook.exe" 2>NUL | find /I /N "outlook.exe">NUL
if "%ERRORLEVEL%" NEQ "0" start outlook

tasklist /FI "IMAGENAME eq devenv.exe" 2>NUL | find /I /N "devenv.exe">NUL
if "%ERRORLEVEL%" NEQ "0" shelexec /verb:runas devenv

rundll32.exe dfshim.dll,ShOpenVerbShortcut %USERPROFILE%\Desktop\MetroTwit.appref-ms
  • We need the ‘cmd /c’ prefix in the case of Firefox, because it does ugly stuff with the console window if we allow it to use the same instance of cmd.exe that the script runs from.
  • Outlook can be started without having to specify the full path to the exe.
  • We need to use ShelExec to start Visual Studio as Administrator.
  • MetroTwit is a ClickOnce application, so we must run it using the bootstrapper.

I hope you find these tricks useful, and can apply them to save yourself a bit of time and effort.

When it comes to displaying the progress of a long-running operation to the user, there are many options available to a GUI designer:

  • Change the cursor to an hourglass
  • Change the caption on the status bar
  • Show an animated glyph on the window
  • Show a dialog box

A number of factors could affect which method you choose:

  • How long does the operation typically run for? A few seconds? Minutes?
  • Is the operation synchronous or asynchronous? If the operation is asynchronous, are there any restrictions on what the user can do while the operation is running?
  • Can the user cancel the operation?
  • Is the length of the operation known, or can it be easily estimated? Is it unpredictable?
  • Does the operation provide meaningful feedback to the user? (e.g. status messages)

ProgressDialog example

Shell Progress Dialogs provide a mechanism for displaying the progress of a long-running operation. They are suited to non-trivial operations that exceed 10 seconds, showing status messages and a progress bar. They facilitate both synchronous and asynchronous operations (though the latter are always preferable). They provide a mechanism for aborting the operation and for estimating the time remaining to complete the operation (computed using the progress percentage and time measurements taken as the operation progresses). If progress is not measurable (and the user is running Windows Vista or above), a marquee can be displayed instead of a normal progress bar.
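The time-remaining estimate mentioned above amounts to simple extrapolation from the rate observed so far. A sketch of the idea (the dialog’s exact internal formula is not documented, so this is an assumption):

```python
def estimate_remaining(elapsed_seconds, completed, total):
    """Extrapolate seconds remaining from the average rate observed so far.

    This mirrors the idea behind the dialog's automatic estimate; the exact
    calculation Windows uses is not documented, so treat this as illustrative.
    """
    if completed <= 0 or elapsed_seconds <= 0:
        return None  # no basis for an estimate yet
    rate = completed / elapsed_seconds       # units of work per second
    return (total - completed) / rate        # seconds left at that rate
```

For example, if 25 of 100 units completed in 10 seconds, the estimate is 30 seconds remaining.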

This type of dialog box is part of the Windows shell, and is used by the operating system for a variety of purposes, from copying files to running the Windows Experience Index tests. Microsoft makes the dialog available to third party code via COM; exposing the IProgressDialog interface and a concrete implementation of the type (CLSID_ProgressDialog). This makes it easy to use progress dialogs in C/C++ applications, but what about managed code and .NET?

Instantiating COM Types in .NET

As with COM interfaces in general, the .NET Framework has a number of features to enable interoperability. In general, to instantiate a COM type in .NET, you need to:

  • Define the interface implemented by the type (in this case, IProgressDialog), based on its specification. Add any required metadata for marshalling method parameters and return types, as well as any enumerations required. Above all, remember to add the [ComImport] and [Guid] attributes to the interface definition.
  • Obtain a managed Type object for the COM type (not the interface), using the Type.GetTypeFromCLSID() method.
  • Use Activator.CreateInstance() on the Type to instantiate it. Cast the resulting object to the interface type, and you’re ready to start calling methods.
  • Don’t forget to call Marshal.FinalReleaseComObject() on the instance when you’re finished with it. A convenient way to ensure this is to place the cleanup code in a finally block, or implement IDisposable if the object is stored at instance scope in a class.

For IProgressDialog, the IID of the interface is {EBBC7C04-315E-11d2-B62F-006097DF5BD4} and the CLSID of the concrete class is {F8383852-FCD3-11d1-A6B9-006097DF5BD4}.

Using IProgressDialog

The MSDN Documentation for the Windows Shell API details the lifecycle of the IProgressDialog object and how to use it. In simple terms:

  • Set up the progress dialog before it is displayed to the user by calling SetTitle() and SetCancelMsg().
  • Show the dialog using StartProgressDialog(). A series of PROGDLG flags control the behaviour and appearance of the dialog box.
  • Show status messages using the SetLine() method. You can use up to 3 lines of text, or 2 if you allow the estimated time remaining to be calculated automatically.
  • As you perform work, call SetProgress() to update the progress bar. Each call lets you specify the current value as well as the maximum value for the bar. The user can click the cancel button at any time, therefore you should also check its status using the HasUserCancelled() method.
  • When the operation has finished, call StopProgressDialog(). Note that, once you close the dialog box, you cannot show it again; you must create a new instance for each operation.
Type progressDialogType = Type.GetTypeFromCLSID(new Guid("{F8383852-FCD3-11d1-A6B9-006097DF5BD4}"));
IProgressDialog progressDialog = (IProgressDialog)Activator.CreateInstance(progressDialogType);

try {
    // set up dialog and display to the user
    progressDialog.SetTitle("Progress dialog");
    progressDialog.SetCancelMsg("Aborting...", null);
    progressDialog.StartProgressDialog(Form.ActiveForm.Handle, null, PROGDLG.AutoTime, IntPtr.Zero);

    // do work
    progressDialog.SetLine(1, "Working...", false, IntPtr.Zero);
    progressDialog.SetLine(2, "Please wait while the operation completes", false, IntPtr.Zero);
    for (uint i = 0; i < 100; i++) {
        if (progressDialog.HasUserCancelled()) break;
        progressDialog.SetProgress(i, 100);
    }

    // close dialog
    progressDialog.StopProgressDialog();
}
finally {
    Marshal.FinalReleaseComObject(progressDialog);
    progressDialog = null;
}

A Managed Wrapper for IProgressDialog

Working directly with COM types is not recommended; due in part to the need to manually release resources, as well as having to work with the archaic programming model, marshalled types, structures, etc. COM types are also limited in comparison to .NET types, in that they do not (natively) support events or properties. For these reasons, I decided to create a managed wrapper for IProgressDialog.

As with other common dialog types already available in .NET (e.g. OpenFileDialog), my implementation inherits from the Component class, allowing it to be dropped onto the design surface of a Form or other component in Visual Studio. Component also has the advantage of implementing the IDisposable pattern, removing the need to write some boilerplate code.

My goals for the wrapper were to:

  • Replace method calls with properties (where possible).
  • Provide get accessors to complete the functionality of each property.
  • Replace the flag options with separate properties (to improve ease of use and intuitiveness).
  • Provide sensible default values for properties.
  • Simplify the operations by removing parameters and relying more on the object state.
  • Provide cleanup code in a Dispose() method.
  • Remove the need to re-instantiate the component for each operation.
  • Further simplify the use of the component by automatically closing the dialog when the progress reaches 100%.

With my wrapper class, ProgressDialog, you can configure the dialog at design time and thus write less code:

// set up dialog and display to the user
wrapper.Show();   // (method name assumed for illustration)

// do work
wrapper.Line1 = "Working...";
wrapper.Line2 = "Please wait while the operation completes";
for (uint i = 0; i < 100; i++) {
    if (wrapper.HasUserCancelled) break;
    wrapper.Value = i;
}

// close dialog
wrapper.Close();  // (method name assumed for illustration)

Final Words

As I said in my introduction, there are many different ways to show progress in a GUI application. The real advantage of Shell Progress Dialogs is that they are rendered by the operating system (thus their appearance is ‘upgraded’ automatically on new versions of Windows, and they always fit the OS theme), and present users with a familiar and consistent interface. They’re not suitable for all applications, but I hope you find my implementation useful if you think they suit the needs of your app.

Download (9KB, includes example)

In many applications, the ability to dynamically load and execute third-party code (in the form of plug-ins) is highly desirable. Plug-ins can be used to provide:

  • Alternative implementations of built-in features
  • Completely new features (given some kind of framework on which they can be built)
  • All functionality in the application (e.g. in a test harness, compiler or other modular system)

There are a number of mechanisms within the .NET Framework to facilitate plug-in code:

Reflection enables assemblies to be dynamically loaded (removing compile-time references), and to iterate through the types in an assembly. To build a plug-in system based entirely on reflection, however, would be limiting, unreliable and very slow. The overheads involved in calling methods via reflection are high. Also, in the absence of a compile-time reference to an object, you lack the ability to verify whether the object contains the method/property you’re trying to access.

Interfaces are one of the key primitives in object-oriented programming. They allow you to define the public methods, events and properties of a type without specifying how it should be implemented; or indeed, how it should behave. You can toss around a reference to an object using only its interface type, and still be able to do everything with the object that you could do with a concrete class (save for instantiating it, of course).

An obvious way to implement a plug-in system, therefore, would be to define a set of interfaces that were common to both the application and its plug-ins, implement them in the plug-ins and load them into the application using reflection. The advantage of this approach is that you have a contract at compile-time against which you can guarantee that the methods/properties you’re accessing exist on the object you’ve loaded.

There is quite a significant downside to this approach, however: You are loading third-party, potentially malicious code directly into your application’s memory space. The plug-in code could use reflection to access and manipulate everything in your application, not to mention crash it. It’s not hard to see why this would be a bad idea.

Application domains are an important (though perhaps not widely-understood) part of the .NET Framework. The vast majority of applications will only ever use a single AppDomain but, when utilised, they can be very powerful. Application domains sit at a high level in the runtime, providing a memory space into which the code you reference and execute is loaded. The only way to access managed objects from outside their AppDomain is to serialise them (a process over which you can exercise a lot of control) or to marshal them via remoting. An AppDomain can be secured to prevent another AppDomain from seeing inside it, loading assemblies and reflecting types. It can also raise and handle its own exceptions, keeping it isolated from the main process. This is definitely a solid foundation for a plug-in system.

It makes a lot of sense to load any plug-in code into a separate AppDomain, then ferry objects between the two domains in a controlled, sandboxed fashion. A general rule to consider when writing plug-in code is:

  • Use binary serialisation (the Serializable attribute or the ISerializable interface) when passing data between two application domains. Only ever pass an instance of a type that is common to both domains; e.g. a standard .NET type or a type defined in an assembly that is referenced by both domains. Any operations performed on serialisable types will run in the AppDomain that calls them (i.e. the main application).
  • Use remoting (handled transparently by MarshalByRefObject) when calling methods or raising events. Operations performed on remotable types will run in the AppDomain in which they are instantiated (i.e. the plug-in domain).

Why Not WCF?

At this point, you might well ask, “Why not use WCF as the basis for a plug-in system?”. It’s true, some developers do advocate this practice, but I personally do not. WCF imposes a number of show-stopping limitations on plug-in code:

  • WCF handles events poorly, requiring callback interfaces to be manually defined and wired up. Remoting handles events transparently.
  • Operations on WCF services can only exchange serialisable data. You can’t, for example, pass a remotable object to a WCF service to enable two-way communication.
  • WCF is primarily designed to be stateless. Plug-in code is almost always stateful. Although WCF handles sessions and concurrency, these can be difficult to use.
  • WCF uses XML serialisation based on public members of a type. Remoting uses binary serialisation and has the necessary permissions to access private members of a type.
  • WCF is optimised for interprocess and network-based communication, not communication between two application domains within the same process.

And, of course, it’s worthwhile to note that .NET Remoting itself is under no threat of deprecation; it is a fundamental part of the framework.

More About MarshalByRefObject and Remoting

MarshalByRefObject is essential to any non-trivial cross-AppDomain functionality. It is handled specially by the .NET Framework; all you have to do is inherit from MarshalByRefObject and the framework will generate a transparent proxy for your class, automatically marshalling calls between the application domains for you.

Some important things to remember about writing classes that extend MarshalByRefObject:

  • Any objects you pass to or return from a MarshalByRefObject must be serialisable, or themselves a MarshalByRefObject.
  • You must remember to mark any types you derive from Exception or EventArgs with the [Serializable] attribute.
  • IEnumerable sequences created using the yield statement or LINQ cannot cross an application domain (because the compiler does not mark them as serialisable). Don’t return sequences from a MarshalByRefObject; instead, copy the elements into a collection (or use the ToList() method) and return the collection.
  • If you pass a delegate to a MarshalByRefObject – and the method it points to is in a different AppDomain – you must ensure that the method belongs to a MarshalByRefObject as well. Do not pass delegates to static methods, because they will be called on the wrong AppDomain (since there is no object instance to marshal the call to).

There are some other common types that can’t cross application domain boundaries:

  • DataObject (used for drag-and-drop, clipboard and other OLE functionality) is neither a MarshalByRefObject nor serialisable. You can either extract the data and pass it directly to the other AppDomain, or create a wrapper that implements IDataObject and inherits from MarshalByRefObject.
  • Image/Bitmap, although marked as serialisable, may not cross AppDomain boundaries. You should pass a Stream or byte[] containing the image data instead.

Lifetime Services and ISponsor

To further complicate matters, remoting uses lifetime services (rather than ordinary generational garbage collection) to determine when instances of a MarshalByRefObject should be cleaned up. By default, you have a window of 5 minutes in which to use an object obtained from a foreign AppDomain before the proxy becomes disconnected and an exception is thrown upon access. This is a necessary evil, because the garbage collector can’t count references to a remotable object inside a different application domain; that would break the isolation provided by application domains in the first place.

There are two methods to get around this, however:

  • Override the InitializeLifetimeService() method and return a null reference. This instructs remoting not to clean up instances of your object in another AppDomain. This has the potential to create memory leaks, so you can really only use this technique for singleton classes.
  • Obtain the lifetime service object (ILease) from the MarshalByRefObject using the RemotingServices class and register an ISponsor object to keep the instance alive.

Sponsorship works by renewing the lease on a MarshalByRefObject; it does this by returning a TimeSpan indicating how much longer the object is needed. Remoting will periodically call the Renewal() method on an ISponsor object until it returns a timespan of zero, or the sponsor is unregistered.

// register a sponsor
object lifetimeService = RemotingServices.GetLifetimeService(myMarshalByRefObject);
if (lifetimeService is ILease) {
    ILease lease = (ILease)lifetimeService;
    lease.Register(mySponsor);
}

// unregister a sponsor
object lifetimeService = RemotingServices.GetLifetimeService(myMarshalByRefObject);
if (lifetimeService is ILease) {
    ILease lease = (ILease)lifetimeService;
    lease.Unregister(mySponsor);
}
In practice, what this means is that you should hold a reference to a sponsor for any MarshalByRefObject you obtain from another AppDomain for as long as you need to access the object. When the sponsor object becomes eligible for garbage collection, it will also take out the remotable object which it sponsors. Ideally, implementations of ISponsor should be serialisable.

In my implementation of a plug-in system, I created a convenient generic class, Sponsor<TInterface>, which is simultaneously responsible for registering/unregistering a sponsor, accessing the remotable object itself and providing the renewal logic. You hold a reference to the sponsor object in your class, then call its Dispose() method when the remotable object is no longer needed. My plug-in system centers around the Sponsor class; ensuring that objects from the plug-in AppDomain are always wrapped in a Sponsor instance and never returned directly to user code without one.
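Conceptually, Sponsor&lt;TInterface&gt; manages a simple lease lifecycle: register on creation, renew while alive, unregister on disposal. A language-neutral Python sketch of that pattern (hypothetical; the real class talks to remoting’s ILease/ISponsor, not these stand-ins):

```python
class Lease:
    """Stand-in for remoting's ILease: tracks registered sponsors."""
    def __init__(self):
        self.sponsors = []

    def register(self, sponsor):
        self.sponsors.append(sponsor)

    def unregister(self, sponsor):
        self.sponsors.remove(sponsor)

    def renew(self):
        # remoting polls each sponsor; a zero renewal ends the sponsorship
        for s in list(self.sponsors):
            if s.renewal() <= 0:
                self.unregister(s)

class Sponsor:
    """Keeps a remote object's lease alive until disposed (cf. Sponsor<TInterface>)."""
    RENEWAL_SECONDS = 120

    def __init__(self, instance, lease):
        self.instance = instance     # the wrapped remote object
        self._lease = lease
        self._disposed = False
        lease.register(self)

    def renewal(self):
        # keep renewing the lease until dispose() has been called
        return 0 if self._disposed else self.RENEWAL_SECONDS

    def dispose(self):
        if not self._disposed:
            self._disposed = True
            self._lease.unregister(self)

    # allow use as a context manager, mirroring IDisposable usage
    def __enter__(self):
        return self.instance

    def __exit__(self, *exc):
        self.dispose()
```

The point of the pattern is that the sponsor’s lifetime, not the garbage collector, decides how long the remote proxy stays connected.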

Design for a Plug-In System

As I have alluded to, a plug-in system based on reflection, interfaces, remoting and sponsors is built around two application domains. The main AppDomain uses the PluginHost class to create the plug-in AppDomain and remotely instantiate PluginLoader, the class that loads plug-ins and instantiates remotable objects:

// create another AppDomain for loading the plug-ins
AppDomainSetup setup = new AppDomainSetup();
setup.ApplicationBase = Path.GetDirectoryName(typeof(PluginHost).Assembly.Location);

// plug-ins are isolated on the file system as well as the AppDomain
setup.PrivateBinPath = @"%PATH_TO_BINARIES%\Plugins";

setup.DisallowApplicationBaseProbing = false;
setup.DisallowBindingRedirects = false;

AppDomain domain = AppDomain.CreateDomain("Plugin AppDomain", null, setup);

// instantiate PluginLoader in the other AppDomain
PluginLoader loader = (PluginLoader)domain.CreateInstanceAndUnwrap(
    typeof(PluginLoader).Assembly.FullName,
    typeof(PluginLoader).FullName
);

// since PluginLoader was loaded from another AppDomain, we must sponsor
// it for as long as we need it
Sponsor<PluginLoader> sponsor = new Sponsor<PluginLoader>(loader);

PluginLoader dynamically loads the plug-in assemblies (located in a subdirectory) into the plug-in AppDomain:

foreach (string dllFile in Directory.GetFiles(pluginPath, "*.dll")) {
    Assembly asm = Assembly.LoadFile(dllFile);
    Assemblies.Add(asm);   // Assemblies holds the loaded plug-in assemblies
}

PluginLoader keeps a cache of ConstructorInfo objects for each interface implementation it discovers, so it can quickly instantiate objects. It exposes GetImplementations (returns IEnumerable<TInterface>) and GetImplementation (returns the first implementation of TInterface).

private IEnumerable<ConstructorInfo> GetConstructors<TInterface>() {
    if (ConstructorCache.ContainsKey(typeof(TInterface))) {
        return ConstructorCache[typeof(TInterface)];
    }
    else {
        LinkedList<ConstructorInfo> constructors = new LinkedList<ConstructorInfo>();

        foreach (Assembly asm in Assemblies) {
            foreach (Type type in asm.GetTypes()) {
                if (type.IsClass && !type.IsAbstract) {
                    if (type.GetInterfaces().Contains(typeof(TInterface))) {
                        // only parameterless constructors are considered
                        ConstructorInfo constructor = type.GetConstructor(Type.EmptyTypes);
                        if (constructor != null) constructors.AddLast(constructor);
                    }
                }
            }
        }

        ConstructorCache[typeof(TInterface)] = constructors;
        return constructors;
    }
}

private TInterface CreateInstance<TInterface>(ConstructorInfo constructor) {
    return (TInterface)constructor.Invoke(null);
}

public IEnumerable<TInterface> GetImplementations<TInterface>() {
    LinkedList<TInterface> instances = new LinkedList<TInterface>();

    foreach (ConstructorInfo constructor in GetConstructors<TInterface>()) {
        instances.AddLast(CreateInstance<TInterface>(constructor));
    }

    return instances;
}

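GetImplementation, the singular form, isn't shown above; a minimal sketch would simply build an instance from the first matching constructor:

```csharp
public TInterface GetImplementation<TInterface>() {
    // return the first available implementation, or the default
    // value (null, for an interface type) if none was found
    foreach (ConstructorInfo constructor in GetConstructors<TInterface>()) {
        return CreateInstance<TInterface>(constructor);
    }
    return default(TInterface);
}
```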
PluginHost calls the GetImplementation/GetImplementations methods on PluginLoader to return transparent proxies to the remotable objects instantiated from the plug-ins. It wraps them in a Sponsor instance and returns them to the user. PluginHost also handles reloading/unloading of the AppDomain.
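On the PluginHost side, that wrapping might look something like the following (the loader field is the sponsored PluginLoader from earlier; the method shape is my own sketch):

```csharp
// ask the loader (in the plug-in AppDomain) for transparent proxies,
// then wrap each one in a Sponsor before it reaches user code
public IEnumerable<Sponsor<TInterface>> GetImplementations<TInterface>()
    where TInterface : class {

    LinkedList<Sponsor<TInterface>> sponsors = new LinkedList<Sponsor<TInterface>>();

    foreach (TInterface proxy in loader.GetImplementations<TInterface>()) {
        sponsors.AddLast(new Sponsor<TInterface>(proxy));
    }

    return sponsors;
}
```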

Putting It All Together

The general usage pattern for my plug-in system would be:

  1. Create a series of interfaces and place them in a common assembly.
  2. Create one or more plug-in assemblies containing types that implement the interfaces.
  3. Create an application which references only the common assembly.
  4. Instantiate PluginHost, passing the path to load plug-ins from.
  5. Call the LoadPlugins() method and check for success.
  6. Instantiate implementations of the plug-in interfaces using the GetImplementations() or GetImplementation() methods.
  7. Keep a reference to the Sponsor<TInterface> object returned from the above methods until the object is no longer required.
  8. Unload the plug-in AppDomain by calling Dispose() on the PluginHost object.
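Put concretely, a consumer following those steps might look like this (IPlugin, DoSomething() and the return type of LoadPlugins() are illustrative assumptions, not fixed by the API described above):

```csharp
// 1-3: IPlugin lives in a common assembly referenced by both sides

// 4: create the host, pointing it at the plug-in directory
using (PluginHost host = new PluginHost(@"%PATH_TO_BINARIES%\Plugins")) {
    // 5: load the plug-in assemblies into the second AppDomain
    if (host.LoadPlugins()) {
        // 6-7: hold the Sponsor for as long as the object is needed
        using (Sponsor<IPlugin> plugin = host.GetImplementation<IPlugin>()) {
            plugin.Instance.DoSomething();
        }
    }
}   // 8: disposing the host unloads the plug-in AppDomain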

You can see an example of this in the included example project.

Final Words

Loading third-party code into a separate application domain is widely regarded as best practice, and hopefully that task is greatly simplified by the plug-in system I've provided. It is, of course, simply a proof-of-concept implementation. Other things you might want to consider include:

  • Applying security to the plug-in AppDomain to further sandbox the environment.
  • Filtering the plug-in assemblies loaded; either according to digital signatures, implementation of marker interfaces or particular metadata.
  • Making metadata about the plug-ins available to calling code.
  • Handling exceptions more gracefully.

In any event, I hope it demonstrates the basic idea behind cross-AppDomain programming in .NET.

Download (Visual Studio 2010 solution, zipped)