Once again, there’s nothing Media Center specific in this article, but this is functionality that the add-in requires nonetheless:

Storing Network Credentials Securely

A few instalments ago, we introduced the concept of storing network credentials as a way of improving the usability of a Media Center add-on; since the primary means of input is a remote control, we want to minimise text entry wherever possible. However, when storing credentials, we have a responsibility to do so in a secure fashion. After all, this is sensitive information.

It was obvious, therefore, that some form of encryption should be used. However, it doesn’t matter how strong an encryption algorithm is if you don’t protect the key(s). We have to persist the keys in some way, otherwise there’d be no way of decrypting the credentials on the add-in’s next run. Software developed in .NET is particularly susceptible to disassembly and decompilation, and it would be very easy to extract an encryption key if it were stored as a constant or string resource. So, this poses something of a challenge…

Enter the Microsoft Cryptographic API. This API not only provides cryptographic algorithms (RSA, DES, etc.) but also provides interoperability with the Windows key store. By using the CryptoAPI, the onus is no longer on your code to store and retrieve encryption keys; it’s all handled transparently and securely.

Design

StoredCredential class

The StoredCredential class holds all properties needed to access a network resource (domain, username and password), as well as a context (path). A determination is also made as to whether the password will be persisted; if not, the password will be held in memory for the life of the process only. The process of finding the correct credentials for any given path simply involves finding the instance with the deepest partially (or completely) matching path.
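This deepest-match lookup can be sketched as follows. This is an illustrative simplification: the class and member names here are hypothetical, and a production version would also need to match on whole path segments rather than raw string prefixes.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of the credential lookup; not the actual add-in code.
public class CredentialStore {

    // Minimal stand-in for StoredCredential: a path (context) plus a username
    public class Entry {
        public string Path { get; set; }
        public string Username { get; set; }
    }

    readonly List<Entry> mEntries = new List<Entry>();

    public void Add(Entry entry) { mEntries.Add(entry); }

    // Returns the entry whose path is the longest prefix of the requested
    // path, or null if no stored credential matches at all
    public Entry FindForPath(string path) {
        return mEntries
            .Where(e => path.StartsWith(e.Path, StringComparison.OrdinalIgnoreCase))
            .OrderByDescending(e => e.Path.Length)
            .FirstOrDefault();
    }
}
```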

This type is designed to be stored in an ordinary XML application settings file. At run-time, the password can be decrypted. When serialised to XML, only the encrypted password is saved.
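The effect can be illustrated with a minimal stand-in for the class (encryption omitted and names simplified): the XmlSerializer honours the XmlIgnore attribute, so the clear-text password never reaches the output.

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Simplified stand-in for the serialised type; encryption omitted for brevity
public class StoredCredentialSketch {
    public string Path { get; set; }
    public string Domain { get; set; }
    public string Username { get; set; }

    // Never serialised - decrypted on demand at run-time
    [XmlIgnore]
    public string Password { get; set; }

    // Only the cipher-text is persisted (serialised as base64)
    public byte[] EncryptedPassword { get; set; }

    public string ToXml() {
        var serializer = new XmlSerializer(typeof(StoredCredentialSketch));
        using (var writer = new StringWriter()) {
            serializer.Serialize(writer, this);
            return writer.ToString();
        }
    }
}
```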

Implementation

The choice of an encryption algorithm is largely arbitrary for a project like this. I’ve chosen RSA because it’s well-supported and has a relatively long key length. To reduce overheads, the StoredCredential class uses a static variable containing an instance of RSACryptoServiceProvider to perform encryption and decryption for the session. In order to instruct the CryptoAPI to handle key storage for us, we merely have to provide a name for the key container:

CspParameters param = new CspParameters();
param.KeyContainerName = "NetworkCopy";
sRSA = new RSACryptoServiceProvider(param); // sRSA is the static field used below

We’re then free to start encrypting/decrypting passwords without having to worry about state:

private static byte[] Encrypt(string input) {
    return sRSA.Encrypt(Encoding.ASCII.GetBytes(input), false);
}

private static string Decrypt(byte[] input) {
    return Encoding.ASCII.GetString(sRSA.Decrypt(input, false));
}

As you can see, all we have to do is transform the data to and from a byte array. I’ve used ASCII encoding here, which is adequate for English-language systems; note that it silently discards characters outside the ASCII range, so UTF-8 would be a safer general-purpose choice.

EncryptedPassword is implemented as an auto-property, because it has no special get/set logic. The Password property, however, uses the following logic:

private string mTempPassword;

[XmlIgnore]
public string Password {
    get {
        if (!IncludesPassword)
            return mTempPassword;
        else
            return Decrypt(EncryptedPassword);
    }
    set {
        mTempPassword = value;               
        if (Properties.Settings.Default.StorePasswords) EncryptedPassword = Encrypt(value);
    }
}

The XmlIgnore attribute ensures that the clear-text password is never serialised. If we’re not persisting the password, we can just get and set the temporary password variable; otherwise, we get the password by decrypting and set the password by encrypting using the aforementioned methods. The StorePasswords application setting simply determines whether passwords are being persisted.

And that’s pretty much it! When serialised, a stored set of credentials appears in a similar manner to this:

<StoredCredential>
  <Path>\\Computer\Share</Path>
  <Domain>MyDomain</Domain>
  <Username>MyUser</Username>
  <EncryptedPassword>WIGTDn5W/U0Gj9SxJVBd35G+XjBwzAQrULfteOhQMkavN9UZCijrhj4pcyV2J5EiCwzIvD0YT3DEsLXq2gKdKlE7uKLpZ3XNzZdw8pklXTuyT3KCk8bywvKyAUWf+CU7YUxjywAE9ltKgEb6WGi9QanNafUzxUgtd0IsHFBsQ4c=</EncryptedPassword>
</StoredCredential>

That’s All, Folks!

Yes, you heard correctly – this is the last instalment of my series on Media Center add-in development! Where do we go from here? Well, I will soon be placing the complete project code and binaries on a dedicated section of this website. Until then, I hope this series has given you a few pointers as to how to get things done in the Media Center SDK. I look forward to getting back to covering a diverse and interesting range of topics 🙂

Foreword: There’s actually no Media Center-specific content in this post, but it’s the logical continuation of the series…

Background Add-In – A Queue-Based Download Manager

Last time, we talked about using a Media Center-hosted WCF service as the basic architecture for the networking browsing and file copying add-in. This time, we’re looking at how that service is actually implemented.

Essentially, it’s an asynchronous design: enqueue/clear requests come in on the main thread, the download queue is processed on a separate thread and, because of the highly desirable progress notifications offered by the WebClient class when downloading in asynchronous mode, files are downloaded on yet another thread:

Download Manager Threads

The benefit of having a download queue, as opposed to spawning a concurrent download for each new file (which, incidentally, is what Windows Explorer does), is that it’s easier to track progress and, on low-bandwidth networks such as Wi-Fi, the batch will actually finish sooner overall. This is especially relevant for a media center PC, which is less likely to be connected via Ethernet.

The bread and butter of DownloadManager is the DownloadManagerItem class; this represents an item to download, and describes its source, destination and status:

DownloadManagerItem class diagram

As you can see, it also holds an optional StoredCredential object (which we introduced a few posts ago) which provides the credentials needed to access an item on a password-protected network share.

The DownloadManager class itself uses a Queue&lt;DownloadManagerItem&gt; to hold the downloads; enqueue/dequeue operations are controlled using the lock statement to ensure thread safety. It uses a secondary collection, a List&lt;DownloadManagerItem&gt;, to hold downloads once they have been completed. This enables the history feature in the add-in, giving users a way of opening completed files or retrying failed downloads.
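A sketch of that locking (illustrative only; the real member names in DownloadManager may differ):

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of the queue/history bookkeeping (names hypothetical)
public class DownloadQueueSketch<TItem> {

    readonly Queue<TItem> mQueue = new Queue<TItem>();
    readonly List<TItem> mHistory = new List<TItem>();
    readonly object mSyncRoot = new object();

    public void Enqueue(TItem item) {
        lock (mSyncRoot) mQueue.Enqueue(item);
    }

    // Called by the download thread; returns false when the queue is empty
    public bool TryDequeue(out TItem item) {
        lock (mSyncRoot) {
            if (mQueue.Count == 0) { item = default(TItem); return false; }
            item = mQueue.Dequeue();
            return true;
        }
    }

    // Completed (or failed) items move to the history list
    public void Complete(TItem item) {
        lock (mSyncRoot) mHistory.Add(item);
    }

    // Returns a copy so callers never enumerate the live collection
    public List<TItem> GetHistory() {
        lock (mSyncRoot) return new List<TItem>(mHistory);
    }
}
```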

If credentials are supplied, the download thread will instantiate a NetworkSession object in order to run the download using those credentials. The DownloadFileAsync() method of the WebClient class will quite happily accept a UNC path, raising the DownloadProgressChanged event periodically.

To further support the interactive portion of the add-in, the download manager will raise a notification for a short period of time after an item is enqueued; this is achieved simply by having a NotificationVisible property which is set to true during enqueue and automatically resets itself after a few seconds. Similarly, for binding to UI elements, the download manager offers the IsDownloading, CurrentItem and CurrentFileProgress properties.
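The self-resetting flag might be implemented along these lines (a sketch with hypothetical names; the interval is shortened here, whereas the add-in would use a few seconds):

```csharp
using System;
using System.Timers;

// Sketch of a notification flag that clears itself after an interval
public class NotificationFlag {

    readonly Timer mTimer;
    volatile bool mVisible;

    public NotificationFlag(double milliseconds) {
        // AutoReset = false: fire once per Show(), then stop
        mTimer = new Timer(milliseconds) { AutoReset = false };
        mTimer.Elapsed += (s, e) => mVisible = false;
    }

    public bool NotificationVisible {
        get { return mVisible; }
    }

    // Raise the notification; restarting the timer extends the window
    // if another item is enqueued in the meantime
    public void Show() {
        mVisible = true;
        mTimer.Stop();
        mTimer.Start();
    }
}
```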

The entry point for the background add-in uses a ManualResetEvent to ensure that the main thread is blocked for as long as the service is needed.

Next Time

This series on Media Center application development is nearly over! Next time, we’ll be looking at the implementation of the StoredCredential class, and how it uses the built-in Windows cryptography service provider to securely store passwords for connecting to network shares. After that, I’ll be unveiling the completed Media Center add-in and giving it a permanent home in the newly-created “Projects” section of this site. See you then!

Using WCF for IPC in Media Center Add-Ons

Last time, I gave an overview of the design of my network browsing/copying add-on for Media Center. In this instalment, we look at how to use Windows Communication Foundation (WCF) as a means of interprocess communication (IPC) between the interactive and background portions of the add-on. Recall that each runs in a separate instance of the Media Center hosting process (ehexthost.exe); so even though the code for the add-on is contained within a single assembly, the two entry points do not share the same memory space or application domain, thus IPC is needed in order to communicate between them.

WCF is a service model (i.e. it is service-oriented) – as opposed to, say, .NET Remoting, which is a technology for instantiating objects remotely (hence it is essentially object-centric). What this means is that, rather than thinking of the problem in terms of which objects to share between the processes, we need to design a service contract which adequately describes everything we want to be able to access from the remote process. Since the interactive portion of the Media Center add-on frequently stops and starts, we consider it to be the client and the background process (which is persistent) as the service process.

Service Contract

We have some specific requirements for our service:

  • Enqueue an item for download
  • Cancel either the current or all downloads
  • Get the download history (completed/cancelled/failed items from the most recent batch)
  • Determine the currently-downloading item (if present)
  • Determine the current download progress
  • Be able to subscribe to change notifications (for when any of the above change)

In WCF, we define the service contract by writing an interface and decorating it with the ServiceContract and OperationContract attributes, for example:

[ServiceContract]
public interface IDownloadManager {

    double CurrentFileProgressRatio {
        [OperationContract] get;
    }

    DownloadManagerItem CurrentItem {
        [OperationContract] get;
    }

    [OperationContract]
    void Enqueue(DownloadManagerItem item);

    [OperationContract]
    void CancelDownloads(bool all);

    [OperationContract]
    List<DownloadManagerItem> GetHistory();
}

The above satisfies all of our goals, except for the last one (change notifications) – this is a little more complex and will be dealt with later. The next step is to implement the interface, to provide the functionality that executes in the service process. The only noteworthy point to make here is that we must decorate the implementation using the ServiceBehavior attribute, describing how it gets instantiated:

[ServiceBehavior(InstanceContextMode=InstanceContextMode.Single, ConcurrencyMode=ConcurrencyMode.Multiple)]
public class DownloadManager : IDownloadManager {
    //...
}

For our download manager service, we want to use the singleton pattern, so we need to indicate this to WCF. The download manager also happens to use multiple threads, so we need to indicate that as well.

Publishing the Service

WCF makes it very easy to actually make a service available to the client side. We merely need to select the type of endpoint we want to use, assign a URI, provide an instance of the service implementation (DownloadManager) and open the channel. Since the service process and client process reside on the same machine, the most appropriate type of endpoint to use is Named Pipes. These also allow two-way communication, which we will have to use when implementing change notification. The entire service publishing process is achieved in just 3 lines of code:

// assume there is a class member of type ServiceHost called serviceHost
serviceHost = new ServiceHost(new DownloadManager(), new Uri("net.pipe://localhost"));
serviceHost.AddServiceEndpoint(typeof(IDownloadManager), new NetNamedPipeBinding(), "DownloadManager");
serviceHost.Open();

As long as the object is held in memory and the channel has not been closed using the Close() method, the service will be available to the client process.

Enabling Change Notification

As we saw in previous instalments, change notification in Media Center is achieved by extending the ModelItem class and raising notifications using the FirePropertyChanged() method. There are three main hurdles to achieving this in our multi-process design:

  • Change notifications are raised on the interactive side of the add-on, and require a UI thread in order to operate.
  • Services are exposed as interfaces, so we cannot extend ModelItem on the service side.
  • Events (such as PropertyChanged) cannot be exposed using service contracts.

Thankfully, we can solve this by doing the following:

  • Defining a class which extends ModelItem, implements IDownloadManager (our service contract) and wraps the remote instance of our service.
  • Using WCF callbacks to allow the service implementation to call methods on the client side.

Implementing Callbacks

WCF callbacks work by defining a separate interface to represent methods on a client-side object that will be called by the service (which is then associated with the service contract via an additional property on the ServiceContract attribute). The client passes this object to WCF when the service is instantiated remotely. The service calls the client-side methods by retrieving the object (exposed via its interface) from WCF during a service method call. However, if we want the freedom to call client-side methods whenever we desire (i.e. outside the context of a service method call), as is required for enabling change notification, then we need to do something a little more complex…

  • Enable sessions on the service contract (this is achieved via an additional property on the ServiceContract attribute). This is necessary in order to ensure that the connection established by the client process when calling one service method is persisted and used for all others – and indeed, can be used to invoke callbacks between method calls.
  • Maintain a collection of connected clients and the client-side objects whose callback methods we can invoke. Even though we only think of the design in terms of service and client processes, it’s possible that a second client connection will be established before the first has been closed – hence, we must assume a 1:M relationship. This means uniquely identifying each client connection.
  • Expose service methods to enable clients to formally establish/close connections. A Subscribe() method will provide the service with the client-side callback object and issue a unique ID. The Unsubscribe() method will allow the client to stop receiving callbacks (by passing the unique ID from before).
  • When invoking a client-side callback, the service needs to iterate through the collection of clients. It must be prepared to catch exceptions, should a connection no longer be valid.

So, the service contract declaration changes:

[ServiceContract(CallbackContract = typeof(IDownloadManagerCallbacks), SessionMode = SessionMode.Required)]
public interface IDownloadManager { /* ... */ }

…we add the following service methods:

[OperationContract]
Guid Subscribe();

[OperationContract]
void Unsubscribe(Guid id);

…and then define the callback interface as:

public interface IDownloadManagerCallbacks {

    [OperationContract]
    void OnPropertyChanged(string propertyName);
}

The service implementation’s Subscribe() method obtains the callback object from the client using OperationContext:

IDownloadManagerCallbacks callback = OperationContext.Current.GetCallbackChannel<IDownloadManagerCallbacks>();
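Putting these pieces together, the service-side bookkeeping might look like the sketch below. The WCF plumbing is omitted and each callback channel is modelled as a plain Action&lt;string&gt;, so this shows only the subscribe/unsubscribe/notify pattern, not a complete service:

```csharp
using System;
using System.Collections.Generic;

// Sketch of subscriber bookkeeping only; WCF attributes and OperationContext
// are omitted, and a callback channel is modelled as an Action<string>.
public class SubscriberRegistry {

    readonly Dictionary<Guid, Action<string>> mSubscribers =
        new Dictionary<Guid, Action<string>>();
    readonly object mSyncRoot = new object();

    // Issues a unique ID to each client, as described above
    public Guid Subscribe(Action<string> callback) {
        Guid id = Guid.NewGuid();
        lock (mSyncRoot) mSubscribers.Add(id, callback);
        return id;
    }

    public void Unsubscribe(Guid id) {
        lock (mSyncRoot) mSubscribers.Remove(id);
    }

    // Invoked whenever a service-side property changes; subscribers whose
    // connections are no longer valid are dropped rather than crashing
    public void NotifyPropertyChanged(string propertyName) {
        List<KeyValuePair<Guid, Action<string>>> snapshot;
        lock (mSyncRoot) snapshot = new List<KeyValuePair<Guid, Action<string>>>(mSubscribers);
        foreach (var pair in snapshot) {
            try { pair.Value(propertyName); }
            catch (Exception) { Unsubscribe(pair.Key); }
        }
    }
}
```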

Implementing a Proxy Class

As previously indicated, change notification requires that we extend the Media Center ModelItem class and call its FirePropertyChanged() method in order to raise a notification in a manner that is thread-safe and supports object paths. When the client obtains a remote instance of the service, it is exposed only in terms of the IDownloadManager interface. Since we cannot force that object to extend from ModelItem, we need to write a wrapper/proxy class. This also enables us to implement the callback interface, IDownloadManagerCallbacks and wire up the callback method (OnPropertyChanged) to the FirePropertyChanged method from the base class.

The proxy class is implemented as follows:

public class DownloadManagerProxy : ModelItem, IDownloadManager, IDownloadManagerCallbacks {

    IDownloadManager dm;
    Guid id;

    public void Init(IDownloadManager dm) {
        this.dm = dm;
        id = dm.Subscribe();
    }

    protected override void Dispose(bool disposing) {
        if (disposing) {
            if (dm != null) dm.Unsubscribe(id);
        }
        base.Dispose(disposing);
    }

    #region IDownloadManager Members
    // all we do here is wrap the methods from IDownloadManager
    #endregion

    #region IDownloadManagerCallbacks Members

    void IDownloadManagerCallbacks.OnPropertyChanged(string propertyName) {
        FirePropertyChanged(propertyName);
    }

    #endregion
}

Now, the client process can call any method from IDownloadManager and expect it to call the corresponding service method. Additionally, it exposes the PropertyChanged event and can therefore be referenced in MCML and expect to receive change notifications whenever the service invokes the callback method.

Consuming the Service

In order to retrieve the remote instance of the service (which is then used to initialise the proxy object), the client process needs to create a channel. This channel must match the characteristics of the service endpoint, hence in this case must use Named Pipes and the URI we previously specified. A duplex channel is required in order to support callbacks. The full initialisation process is as follows:

DownloadManagerProxy proxy = new DownloadManagerProxy();

DuplexChannelFactory<IDownloadManager> factory = new DuplexChannelFactory<IDownloadManager>(
    proxy,
    new NetNamedPipeBinding(),
    new EndpointAddress("net.pipe://localhost/DownloadManager")
);

IDownloadManager dm = factory.CreateChannel();
proxy.Init(dm);

(Exceptions should be handled, as connections could fail – most notably, this will occur in Media Center during the first few seconds after startup, when the interactive UI is available but the background process has not yet been started.)

Final Words

The above process may seem convoluted, but it represents the best practice for designing a Media Center add-on which continues to perform operations after its interactive portion has been closed by the user. Communication between the background and interactive processes is necessary, and WCF is as good a technology as any with which to achieve this. Finally, change notifications are an absolute must for Media Center objects – without them, the UI can never be as rich and automated as it otherwise could be. These concepts may seem complex at first, but one can quickly develop a pattern for implementing them in future.

Next time, we’ll look at the implementation of the download manager itself.

I’ve noticed a worrying trend of late, when looking at code written by developers who are new to C#, or have never worked with the language prior to C# 3.0. I am referring to the misuse and overuse of the var keyword.

The purpose of var, for those who don’t know, is to omit the type name when declaring a local variable in situations where the type name is unknown, unavailable or doesn’t exist at the point where the code is written. The primary case is anonymous types, whose names are generated by the compiler and cannot be written in source. It is also used in LINQ where the result type of a query cannot easily be inferred by the programmer, perhaps because it involves grouping structures, nested generic types or, indeed, anonymous types as well.
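To make the legitimate case concrete, here is a small (hypothetical) example where var is genuinely required: the sequence elements are of an anonymous type, so there is no type name the programmer could write out.

```csharp
using System;
using System.Linq;

public static class VarExamples {

    // Finds the first word whose initial letter matches; the intermediate
    // sequence uses an anonymous type, so 'var' is the only way to name it
    public static string FirstWordStartingWith(string[] words, char initial) {
        // required: the anonymous type { Word, Initial } has no usable name
        var labelled = words.Select(w => new { Word = w, Initial = char.ToLower(w[0]) });

        // required: the loop variable is of that same anonymous type
        foreach (var item in labelled) {
            if (item.Initial == char.ToLower(initial)) return item.Word;
        }
        return null;
    }
}
```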

There seems to be a tendency for some programmers to use var for every variable declaration. Sure, the language doesn’t stop you from doing this and, indeed, MSDN admits that this is a “syntactic convenience”… But it also warns quite strongly that:

…the use of var does have at least the potential to make your code more difficult to understand for other developers. For that reason, the C# documentation generally uses var only when it is required.
Implicitly Typed Local Variables (C# Programming Guide), MSDN

I discovered recently that the commonly-used tool ReSharper practically mandates liberal use of var. Frankly, this isn’t helping the situation. There are some developers who try to argue the stance that var somehow improves readability and broader coding practices, such as this article:

By using var, you are forcing yourself to think more about how you name methods and variables, instead of relying on the type system to improve readability, something that is more an implementation detail…
var improves readability, Hadi Hariri

I agree with the premise of the quote above, but not with the end result. On the contrary, the overuse and misuse of var can lead to some very bad habits…

Let’s look at the argument against the widespread use of var (and for its sparing, correct use):

Implicitly-typed variables lose descriptiveness

The type name provides an extra layer of description in a local variable declaration:

// let's say we have a static method called GetContacts()
// that returns System.Data.DataTable
var individuals = GetContacts(ContactTypes.Individuals);

// how is it clear to the reader that I can do this?
return individuals.Compute("MAX(Age)", String.Empty);

My variable name above is perfectly descriptive; it differentiates between any other variables populated using GetContacts() and indeed other variables of type DataTable. When I operate on the variable, I know that it’s the individual contacts that I’m referring to, and that anything I derive from them will be of that context. However, without specifying the type name in the declaration, I lose the descriptiveness it provides…

// a more descriptive declaration
DataTable individuals = GetContacts(ContactTypes.Individuals);

When I come to revisit this body of code, I’ll know not only what the variable represents conceptually, but also its representation in terms of structure and usage; something lacking from the previous example.

‘var’ encourages Hungarian Notation

If the omission of type names from variable declarations forces us to name our variables more carefully, it follows that variable names are more likely to describe not only their purpose, but also their type:

var dtIndividuals = GetContacts(ContactTypes.Individuals);

This is precisely the definition of Hungarian Notation, which is now heavily frowned upon as a practice, especially in type-safe languages like C#.

Specificity vs. Context

There’s no doubt that variable names must be specific, however, they need never be universally-specific. Just as a local variable in one method doesn’t need to differentiate itself from variables in other methods, a declaration that includes one explicit type need not differentiate itself from variables of a different explicit type. Implicit typing with var destroys the layer of context that type names provide, thus it forces variable names to be specific regardless of type:

// type provides context where names could be perceived as peers
Color orange = canvas.Background;
Fruit lemon = basket.GetRandom();

//...

// this is far less obvious
var orange = canvas.Background;
var lemon = basket.GetRandom();

// you can't blame the programmer for making this mistake
SomeMethodThatOperatesOnFruit(orange);

Increased reliance on IntelliSense

If the type name is now absent from the declaration, and variable names are (quite rightly) unhelpful in ascertaining their type, the programmer is forced to rely on IDE features such as IntelliSense in order to determine what the type is and what methods/properties are available.

Now, don’t get me wrong, I love IntelliSense; I think it’s one of the most productivity-enhancing features an IDE can provide. It reduces typing, almost eliminates the need to keep a language reference on-hand, cuts out many errors that come from false assumptions about semantics… the list just goes on.

Unfortunately, the ultimate caveat is that IntelliSense isn’t universally available; you can write C# code without it, and in some cases I think that programmers should! Code should be easily-maintainable and debuggable in all potential coding environments, even when IntelliSense is unavailable; and implicitly-typed variables seriously hinder this objective.

No backwards compatibility

One of the advantages of an object-oriented language like C# is the potential for code re-use. You can write a component and use it in one environment (e.g. WPF, .NET 3.5), then apply it in another (e.g. ASP.NET 2.0). When authoring such components, it’s useful to be aware of the advantage of that code working across as many versions of the language and framework as possible (without impeding functionality or adding significant extra code, of course).

The practice of using var for all local variable declarations renders that code incompatible with C# 2.0 and below. If var is restricted to its intended use (i.e. LINQ, anonymous types) then only components which utilise those language features will be affected. I’ve no doubt that a lot of perfectly-operable code is being written today that will be useless in environments where an older version of the framework/language is in use. And believe me, taking type names out of code is a hell of a lot easier than putting type names back into code.

Final Words

I sincerely hope that people will come away from this article with a better understanding of the purpose of the var keyword in C#, when to use it and, more importantly, when not to use it. As a community of developers, it’s important to encourage good practices and identify questionable ones; and I believe that the overuse of var is certainly one such questionable practice.

In the last instalment, we looked at building a list control for Media Center. This introduced the ModelItem class, which is the basis for building the Model/View-Model in a Media Center application. With a reasonable amount of MCML now under my belt, I am shifting focus to the add-in itself. This article is designed to provide an overview of the add-in, with the topics introduced here to be the subject of upcoming posts.

As mentioned in earlier posts, the purpose of this add-in is to be able to browse and copy files from the local network onto the media center PC. The main goals are:

  • Browse local computers, shares and paths
  • Provide credentials to access network resources, if required
  • Copy files to media library folders (i.e. Pictures, Videos, Movies, Recorded TV, Music)
  • Download files sequentially in the background

Design

Class diagram for mceNetworkCopy

Entry Points and IPC

Architecturally, we need to break the add-in into two separate parts (entry points); one for the interactive portion and one to run the download manager. When the user navigates away from the interactive part of the add-in within Media Center, that process will be terminated, whereas the background process will persist. This technique is quite permissible under the Media Center SDK; it simply involves creating two different classes which implement the IAddInModule and IAddInEntryPoint interfaces, then specifying these in the registration XML file. In the diagram above, these are represented by the AddIn and BackgroundAddIn classes.

Tip: In order to stay resident in memory, entry points for background add-ins have to use ManualResetEvent (or a similar construct) to prevent the Launch() method from returning:

public void Launch(AddInHost host) {
    host.ApplicationContext.SingleInstance = true;
    mWaitForExit = new ManualResetEvent(false);
    mWaitForExit.WaitOne();
}

// ...

public void Uninitialize() {
    mWaitForExit.Set();
}

Even though both entry points are located within the same assembly, the fact that they are instantiated in separate processes means that some kind of inter-process communication is needed to communicate between them. Previously, I have used .NET Remoting in order to achieve this, however the need to explicitly extend MarshalByRefObject combined with the complexities of getting singletons to work has led me to switch to Windows Communication Foundation (WCF). Although WCF doesn’t support events like remoting does, it requires only a service contract (in the form of an interface, in this case IDownloadManager) to start serving a type (DownloadManager) remotely. (And although events aren’t supported, we can simulate their behaviour by using callback methods, which are supported.) In this design, we wrap the remote instance in a class which inherits from ModelItem, DownloadManagerProxy, so that we can bind its properties to the MCML markup on the interactive side of the add-in.

The service contract, proxy class and callback mechanism will be the subject of a future article.

Interactive Portion

The interactive portion of the add-in uses the FileList class to present the user with a list of computers, shares or directories/files, depending on the current path. The NetworkSession, NetworkBrowser and NetworkShare classes facilitate the enumeration of computers and shares, as well as allowing credentials to be used when necessary. (Note: These are 3rd-party classes – sources are listed in the section below.) These are used to determine a source path. A destination is selected by choosing from the list of MediaLibraryFolders (which resolves the media libraries in Media Center to their locations on disk) and then optionally drilling down to a subdirectory using the DirectoryList class. With a source and destination chosen, an item can be passed to the download manager:

Background Portion

The background portion of the add-in is responsible for managing downloads which are initiated from the interactive side of the add-in. The DownloadManager class maintains a queue of DownloadManagerItem objects, which represent the source, destination and status of files. When a new item is enqueued, the queue is processed asynchronously. The built-in WebClient class is used to copy the files themselves – I chose this over the copy methods in System.IO because the web client offers progress notifications.

The implementation of the DownloadManager class will be the subject of a future article.

Credentials and Password-Protected Shares

As touched on briefly, the NetworkSession class is used to access password-protected shares using credentials. It does this through impersonation, temporarily taking on the identity of the user represented by the credentials. For convenience, the add-in will retain credentials entered by the user. Since the built-in NetworkCredential class stores passwords in clear text, I designed the StoredCredential class, which holds the same information in a serialisable form, but encrypts passwords using the RSACryptoServiceProvider. Encryption provides obvious benefits, and using the crypto service provider adds the extra advantage of not having to manually create/store an encryption key – this is handled transparently by Windows.

The implementation of the StoredCredential class will be the subject of a future article.

Third Party Code

Accessing Password Protected Network Drives in C#
Provides the basis for the NetworkSession class, which allows the logged-in user to impersonate another user, given a username and password. Until the object is disposed, all normal I/O operations (System.IO) on password-protected UNC paths will use those credentials.

Retrieving a list of network computer names using C#
Used to obtain the list of computers on the local network; this code essentially does what the command-line NET VIEW tool does. Does not always include the local host, so I add it manually.

Network Shares and UNC paths
Exposes parts of the Win32 API that enumerate network shares, given a computer name. Can distinguish between file shares and other types of shares.

Get Directories Included in Windows Media Center Libraries
Provides the basis for MediaLibraryFolders, using a combination of PInvoke and Registry access to obtain the physical locations of the libraries used by Media Center.

Next Time

The next instalments will focus on the implementation of the design introduced in this article.