Live Fast, Code Hard, Die Young

Rake tutorial

Rake is a great tool for creating build scripts even if you primarily develop on Windows and normally don’t use Ruby. In my last article I showed you how easy it is to set it up and get started. This time I am going to show you some more examples of how it can be used.

One thing I really like about Rake build scripts is that you can divide the work into many small tasks that can be developed and tested individually. When you have them all working, it is just a matter of chaining them together using Rake’s dependency mechanism and you’re done. In my opinion, this speeds up the development of build scripts quite a lot.

Dependencies

So how do we set up dependencies between tasks in Rake? Well, it’s very easy, and the Ruby language makes the syntax pretty sleek as well:

task :deploy do
  puts "Deploying..."
end

task :default => :deploy do
  puts "Done!"
end

In this example I have two tasks called deploy and default. The default task is dependent on deploy, which means that Rake will run the deploy task first and if it is successful it will continue with the default task. Pretty simple really.

You can also create empty tasks that simply depend on others. If you have more than one dependency just surround the tasks in brackets:

task :release => [:newversion, :deploy, :package]

By default Rake will run the default task. If you want Rake to run the release task instead you can do that:

C:\>rake release
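The prerequisite list runs left to right, and each task runs at most once per invocation. You can verify the ordering in a plain Ruby script by driving Rake programmatically; the task bodies below are hypothetical stand-ins for real build steps:

```ruby
require 'rake'
extend Rake::DSL  # makes the task DSL available outside a Rakefile

order = []

# Hypothetical stand-ins for real build steps
task(:newversion) { order << :newversion }
task(:deploy)     { order << :deploy }
task(:package)    { order << :package }

task :release => [:newversion, :deploy, :package]

# Invoking :release runs its prerequisites first, in list order
Rake::Task[:release].invoke
puts order.inspect  # => [:newversion, :deploy, :package]
```

This is the same mechanism `rake release` uses from the command line; the script form just makes the execution order easy to inspect.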

Internal and public tasks

Once I started filling my build script with different tasks, I ran into a situation where I wanted some tasks to be considered “internal”: I would run them on my own for testing purposes, but they were not meant to be run by others. Instead I wanted to expose some sort of public API of tasks to be run from the command line, to guide the user on what is possible. Rake handles this by letting you decorate tasks with the special keyword “desc”. If you put a “desc” statement before a task, it becomes the public documentation for that task, which to me is a great way of exposing it as part of a public API. This is how you do it:

desc "Perform a deploy and package a release version"
task :release => [:newversion, :deploy, :package]

To view a list of documented tasks you simply run Rake with the -T parameter. Here is an example of how this looks in one of my projects:

An example of output from Rake

This gives a nice list of all the commands available to a user that is new to my project. I like this a lot. 🙂

If you want a slightly more involved example you can take a look at the rakefile that I use for one of my projects. You can find it on GitHub.

Next time I will show you how to use Rake to build .NET projects. Stay tuned!

Last year I started using Rake for my .NET build scripts and it has been quite a pleasant experience that I wanted to share with you. “Hmmm, isn’t Rake one of those scary Ruby tools from the other side of the fence?” you might ask. Yes, indeed, it is a Ruby tool! Many good things have come from the Ruby community, but sadly many .NET developers seem scared of even learning about them. “Yuck, strange Ruby stuff from the Linux world! I can’t use my .NET knowledge, and that must be a bad thing, mustn’t it?” Well, that’s a shame! Rake can be used on Windows! And besides, it never hurts to at least dip a toe in the ocean once in a while and experience something different from the usual stuff you work with. When a tool helps you accomplish things with simple elegance, it can’t hurt to try it.

So what is Rake?

Rake is a tool for creating build scripts. It is a modern variant of the good old Make tool where you describe different tasks and the dependencies between them. Each task is a series of Ruby statements to be executed when the task runs. The flexible nature of the Ruby language makes the task descriptions very brief and elegant.

Here is an example of a task called “deploy” that copies files:

task :deploy do
    src_path = File.join(BASE_PATH, "Output/bin/.")
    dest_path = File.join(BASE_PATH, "Deploy")
    puts "Deploying files..."
    FileUtils.cp_r src_path, dest_path
end
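The FileUtils methods that Rake scripts lean on come from Ruby’s standard library, so you can try them outside a Rakefile too. A self-contained sketch (temporary directories stand in for BASE_PATH) showing why the trailing "/." matters: it copies the contents of the bin folder rather than the folder itself:

```ruby
require 'fileutils'
require 'tmpdir'

base = Dir.mktmpdir  # hypothetical stand-in for BASE_PATH

src_path  = File.join(base, "Output/bin/.")
dest_path = File.join(base, "Deploy")

# Set up a fake build output to copy
FileUtils.mkdir_p(File.join(base, "Output/bin"))
File.write(File.join(base, "Output/bin/app.exe"), "fake binary")
FileUtils.mkdir_p(dest_path)

# The "/." suffix makes cp_r copy the directory's contents,
# not the directory itself (which would create Deploy/bin)
FileUtils.cp_r src_path, dest_path

puts File.exist?(File.join(dest_path, "app.exe"))  # => true
```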

Why use Rake?

Okay, now we know a little about what Rake is, but you may still ask: why do we need build scripts at all? Can’t we do everything with Visual Studio and some pre/post build commands?

Sure, we can do quite well with just Visual Studio for small projects. But in larger and more complex projects we often run into situations that require some form of automation, and we don’t want to repeat those steps manually every time we build with VS. We need scripts for various things such as deployment and packaging. Maybe we need to update some files with version info, compile an installation kit, upload it via FTP to a server, and so on. I’m sure you can come up with a range of time-consuming things you do on a regular basis that could be done more quickly and reliably by a script instead. The more you automate the tedious repetitive tasks, the more time you can spend doing things that really matter to the customer.

So why not use a simple bat file to run some commands? That is one solution, but bat files have some significant problems. One is that error handling is quite cumbersome: you have to use goto statements for flow control, and you cannot easily reuse functionality across different bat files. In my experience, as soon as the complexity of the build script rises it quickly blows up in your face and becomes a maintenance nightmare. Another glaring omission is the ability to express dependencies between tasks, which is pretty important for structuring a build script successfully.
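Rake sidesteps the goto-style error handling of bat files because each task body is ordinary Ruby: an uncaught exception aborts the whole run, and no dependent task executes. A hypothetical sketch (the task names and failure are made up for illustration):

```ruby
require 'rake'
extend Rake::DSL  # task DSL outside a Rakefile

packaged = false

task :compile do
  raise "compiler exited with an error"  # simulated failure
end

task :package => :compile do
  packaged = true  # never reached, because :compile raised
end

begin
  Rake::Task[:package].invoke
rescue RuntimeError => e
  puts "Build aborted: #{e.message}"
end

puts packaged  # => false
```

No explicit error checks are needed in the tasks themselves; the exception propagates and stops the dependency chain, which is exactly the flow control bat files make you hand-roll.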

But what about MSBuild? Wasn’t it created for exactly this purpose? Yes, I have used MSBuild (and before that NAnt) and it sort of works, but it feels very limited. Both MSBuild and NAnt are XML based and declarative. For me, declarative programming is not very intuitive (XSLT anyone?). I feel that I have to think harder before I get things right, instead of just describing what to do in the order I would do it when performing the task manually. With MSBuild I get this set of predefined tasks that I can use, but if they don’t do exactly what I want I’m out in the cold. To me, it feels much more powerful and flexible to write code to do what I want in a build task, and easier to understand and maintain too. Maybe it’s a matter of taste, so ultimately you’ll have to decide for yourself…

Getting started

Many people are scared off because they think it is difficult to set up Ruby and Rake on Windows. This is not the case. To get started you simply run the Ruby installer, which you can download here:

http://rubyinstaller.org/

After installing Ruby, hit the Windows key and type “Ruby”. Then click “Start Command Prompt with Ruby” in the list that appears. This will bring up a command window where you can use Ruby commands.

To install stuff in Ruby you use “gem”. To install Rake simply type:

C:\>gem install rake

Now you are good to go!

Your first script

By default Rake will look for a file called “rakefile.rb” so create a text file with that name and write your first build task like this:

task :default do
    puts "Wonderful world!"
end

Then launch this script like this:

C:\>rake
Wonderful world!

Pretty simple, wasn’t it?

In a future post I will show some more examples of what the build scripts may look like, so stay tuned!

Windows 8 is here!

Feeling pretty excited after watching the keynote announcing Windows 8 today!

First and foremost, it erases all doubts people have had about Silverlight/XAML going away. It’s not going away; instead it is becoming a primary and native technology for developing Windows apps. Another cool thing is that XAML is no longer restricted to .NET: it can also be used from C++ if you need to squeeze out that extra performance in your app. In addition, you can write apps using HTML5 and JavaScript that run natively on Windows using the same new WinRT APIs. This is great because all those different technologies are good in different situations. Being able to choose the one you like is excellent. To me, it’s like Christmas!

Not only is this a great platform to build stuff for, but it also feels great that you can use the awesome tools Visual Studio and Blend to do it!

All in all, I just wanted to make a quick post about this because it’s such great news. Windows 8 has a lot of new cool features that I recommend you check out.

The preview version of Windows 8 will be available later tonight from http://dev.windows.com so make sure you try it out!

I’m happy to announce the first public release of Deployer – a deployment tool that we have been using in house for the past six years (and still continue to use today) to successfully handle updates for several large web sites and applications. It is written in C# using traditional WinForms and .NET 2.0 so it should run on most Windows systems (including Windows XP).

We are releasing this tool for free (and as open source) and hope you will find it useful too. Even though the tool was primarily developed to fit our needs, it is pretty flexible and can be used in various situations. For instance, it does not have to be used to deploy a .NET project; it can be used on any set of files where you need a little bit of control over what to copy and what not to.

Now go ahead and download the tool or check out the source if you are more into that.

First look

So, still curious for more information? Well, let’s take a quick look at the main GUI of the application:

deployer

Deployer is similar to the Windows Explorer and shows the folder structure to the left starting from the root folder of your deployment project. The files in the selected folder are shown in the panel to the right. Files in red have been changed since the last deployment and need to be deployed again. This can be done by selecting them, right clicking and selecting “Add file(s) to queue” and then pushing the Deploy button.

A faster way to deploy files is to simply click the button in the toolbar that states “Queue files modified since last deployment” and then press the “Deploy queue” button next to it. This gives you a pretty quick workflow without having to remember what you have changed. Of course, you need to deploy at least once before you can do this.

Features

So what are the key features of the Deployer then? I would say the flexible deployment rules and the plugin system.

The Deployer determines the deployment method to use based on filters. This means that you can deploy different files using different methods and to different targets using configurable rules. Some files can be sent to one FTP site while others are sent to a completely different server. You can also rename files and folders so that they fit the destination structure.

Out of the box the Deployer has support for deploying files via FTP or to a network file share. However, it is extensible via plugins so you can add your own transfer protocols quite easily if you want. For example, at work we have developed plugins to deploy custom objects via a web service directly into a CMS system.

There is also plugin support for “hooks” which can be used to hook into the deployment procedure to provide special services. We are currently using this to stop and restart a Windows service on a remote server when you need to deploy a new version of it.

Multiple configurations are supported so you can have one for deploying to a test server, one for staging and another for live.

Unfortunately, some features such as multiple configurations and deployment hooks cannot currently be enabled from the GUI but require some manual tweaking of the project file. Usually this is pretty easy to accomplish since it’s just plain XML and quite easy to understand. Hopefully I’ll be able to add support for configuring all of this from the GUI in the future.

History

Back in 2004, when I started developing Deployer, we needed a tool for uploading files via FTP. Using a traditional FTP client was painful, since you had to go through each folder of a site and pick out the files to upload. We also had to deploy some custom objects to a CMS system by hand, which was quite cumbersome… Clearly we needed something better. So I developed a first version of the tool that used file filters to determine which files to deploy and where to deploy them, and the ball started rolling. Soon we needed more features, and I continued to work on the code, adding things such as comparing database changes, deploying to multiple targets, specifying filter rules in subfolders, having several configurations and so on. This little tool grew into something that we found quite useful at our company. I’m hoping that by releasing it to the community it may help you too.

Download & source

You can download the setup from Github: https://github.com/downloads/pontusm/Deployer/DeployerSetup.exe

If you want to peek at the code and maybe even contribute you can find it here: https://github.com/pontusm/Deployer

Are you experiencing that VS2010 is suddenly highlighting the current line in the text editor? Do you want to get rid of it? I’ll show you how – or maybe convince you to keep it!

 

Line highlighting

First of all, let me explain what I’m talking about in case you haven’t figured yet. Current line highlighting is when the entire line that your cursor is on is shown in a different color. In my case it looks like this:

image

This is not part of the default functionality in VS2010 so most users never see this. However, if you happen to install the Productivity Power Tools it will be enabled by default.

The idea is that highlighting the current line will make it easy to see where your cursor is when you have a large monitor and get lost.

Some users probably love this feature. Personally I don’t like it at all when my cursor “taints” the current row like this when I’m editing. I find it plain annoying!

 

Turning it off

The first solution if you don’t like it is obviously to turn it off completely. This is pretty easy to do using the settings which you can find under Tools->Options and then Productivity Power Tools. Just flick the switch to Off:

image

Now if that was all I had to say this wouldn’t be a useful blog post. Let’s see what we can do!

 

Making it work better

Instead of turning it off we can make better use of this feature. I actually think it is a very good idea to show where the cursor is. However, once I have found the place and start typing the highlighting is just in the way for me.

I have come up with a better solution that involves changing the color of the line depending on if the edit window is active or not. This is possible through the settings under Environment->Fonts and Colors. There are two entries for the Current Line color, one for when the line is active and one when it is inactive:

image

I have set the active color to White, which means that the highlighting is hidden while I’m typing in the edit window. When I navigate classes in the Solution Explorer or elsewhere, the line lights up. This works very nicely in practice: when I return to the edit window after working in other panels or windows, I can find where I was pretty quickly.

Give it a try and see if it works for you! 🙂

 

Conclusion

Line highlighting is a pretty controversial feature. Some swear by it and some curse it. I hope my post shows one interesting hybrid way of using it that does not get in the way as much as the default behavior.

The Productivity Power Tools is a really nice addition to Visual Studio and I highly recommend it. Through the settings you can customize it even more if you don’t like how it works. I suggest you play around with it and see if you can find other useful tips and tricks to share. I’d love to hear about them so feel free to leave a comment on my blog! 🙂

Getting digital cable TV (DVB-C) to work in Windows 7 Media Center (MCE) can be really difficult and there isn’t much information available on what to do. Watching encrypted digital cable TV isn’t even supported by Microsoft so how do we solve this? Well, there are a couple of products that can help and one of them is the Anysee E30C digital cable TV card. It has a pretty cool driver that pretends to be a DVB-T card (which MCE supports) and it properly handles encrypted pay TV channels (provided that you have a valid smart card and subscription of course).

 

Preparations

This should probably be obvious, but the first thing you need to do is to install the latest drivers for the Anysee card. The drivers on the CD you got when you bought the card are probably outdated so make sure you get them from the Anysee website.

Remember to reboot after installing the drivers! Installation doesn’t require you to restart but I’ve had weird problems when I skipped this…

Now, before even trying to get the card working in MCE you should ensure that TV works in the Anysee Viewer application. (You may need to manually scan for channels if your network provider is not one of the supported ones.)

 

Watching Canal Digital Sweden

The Anysee card comes preconfigured with settings for different cable TV networks around Europe. However, Canal Digital Sweden is not one of the supported providers. We need a little bit of investigation and configuration to make this work. (This is something you can do for your provider too if needed.)

First of all we need to figure out the settings that Canal Digital uses in Sweden. I did this by examining the information menus in my set top box which had a screen listing channel information with frequencies and symbol rate:

IMG_0221

This revealed the channel frequencies and that symbol rate 6952 is used. Some searching on the internet also revealed that QAM 64 is used (which I think is the most common modulation in Sweden). After we have this information along with the frequencies for the channels we are ready to start configuring.

 

Configuring Anysee CNO

The Anysee CNO application is responsible for simulating the DVB-T card that MCE uses for watching TV. By default it resides in the C:\Program Files\anysee\Driver directory (or C:\Program Files (x86)\anysee\Driver if you are running 64-bit Windows).

If you look in the subdirectory Transponders\Cable you will find a bunch of preconfigured files for different networks. Unfortunately there is no settings file suitable for Canal Digital Sweden here with QAM 64 and symbol rate 6952. What I did was that I just copied a similar one and renamed it to EU_64QAM_6952.TPL as shown below:

image

Then I edited the file to make sure the symbol rate said 6952 and verified that the frequencies matched the ones listed by my set top box. In my case everything looked alright so all I had to do was to change one line in the .TPL file:

Symbolrate: 6952

After adding a new settings file we also need to tell the CNO application about it. This is done by editing the CNO.CNO file found in the application directory C:\Program Files\anysee\Driver:

image

In the CNO.CNO file at about line 143 or so you should find a [Cable-List] section. Just add your new file here similar to what I did:

image

It should look something like this when you are done:

[Cable-List]
// country code, region, service provider, list file
-1, , Europe_64QAM_6875, .\Transponders\Cable\EU_64QAM_6875.TPL
-1, , Europe_64QAM_6900, .\Transponders\Cable\EU_64QAM_6900.TPL
-1, , Europe_64QAM_6952, .\Transponders\Cable\EU_64QAM_6952.TPL
-1, , Europe_128QAM_6000, .\Transponders\Cable\EU_128QAM_6000.TPL
-1, , Europe_128QAM_6875, .\Transponders\Cable\EU_128QAM_6875.TPL
-1, , Europe_128QAM_6900, .\Transponders\Cable\EU_128QAM_6900.TPL
-1, , Europe_256QAM_6875, .\Transponders\Cable\EU_256QAM_6875.TPL
-1, , Europe_256QAM_6900, .\Transponders\Cable\EU_256QAM_6900.TPL
358, , Iisalmen_Puhelin_Oy(IPY), .\Transponders\Cable\Finland_IPY.TPL
358, , TSF, .\Transponders\Cable\Finland_TSF.TPL
358, , Entire, .\Transponders\Cable\Finland_Cable.TPL

Now you need to restart CNO and change the settings. You can exit CNO by right clicking its icon in the tray bar and selecting Exit.

image

After shutting down CNO just relaunch it from the program files directory (C:\Program Files\anysee\Driver\CNO.exe). Then locate the tray icon again and select Settings from the context menu. This will bring up the settings dialog where you can now find your new settings file that is suitable for the Canal Digital Sweden digital cable TV network:

image

Configuring Media Center

Next thing to do is to actually get it working in MCE so start it up and go to the Settings->TV->TV Signal to setup the new card:

image

In the next screen, select Setup TV Signal, then select your region and enter your postal code. I’ve experimented a bit with these settings, and as far as I can tell it doesn’t matter whether you enter 00000 or your correct postal code. I suspect it may have something to do with the guide listings, but I’m not sure. Things seem to work no matter what I choose here, but enter correct data to be on the safe side.

After agreeing to the license you get to choose the type of signal to receive and this is important. You need to select “Antenna” here because that is what MCE supports best and it is also what Anysee CNO simulates:

image

Then select “No” for set-top box and in the following screen choose Digital Antenna (DVB-T) signal:

image

When it asks if you want to set up any other TV-signals select “No” and then proceed to scan the channels. This should take a while but when it is ready you should be able to watch Live TV in MCE using your Anysee card!

JetBrains TeamCity is a wonderful product that we use for build management and continuous integration in our .NET and Java projects. The latest version adds support for .NET 4 among other things. However, it does not come with support for running Silverlight unit tests out of the box. In this post I will describe what I did to set this up in TeamCity 5.1.

 

Building the Silverlight project

First of all I am assuming that you have setup a configuration in TeamCity that is able to build your Silverlight project successfully on the server.

I had some problems building our Silverlight project at first because we are using WCF RIA Services, and there does not seem to be any way to install the SDK without having VS2010 installed. Well, actually there is one way, but it only installs the bits you need to run RIA projects on the server, not the Silverlight client libraries that you need to build. I finally gave in and installed VS2010 on the server, although I didn’t like it. However, that solved the build issues for me.

Setting up StatLight configuration

Next we take advantage of a lovely little tool called StatLight. It is used to run Silverlight tests more efficiently when you are practicing TDD. It runs the tests without showing the browser window that the regular Silverlight Unit Test framework uses, and it has a really nice “continuous” mode that can monitor your project and re-run tests automatically whenever you rebuild your solution. Last but not least, it has support for producing TeamCity compatible reports of the test run!

To use StatLight with TeamCity we need to create a new build configuration to run a command line build. I set it up like this:

General Settings

You can set the general settings as you like but I used the defaults for most of it.

image

Version Control Settings

Under the VCS settings I made sure to set the same checkout directory as I have in my main build configuration. The idea here is to reuse the output from that project when we run this one.

image

Build Runner

Next is the Build Runner settings. Here I have specified that I want to run a command line tool and the path to it is:

BuildTools\StatLight\statlight.exe

This will run StatLight from within the checked out sources where I have put my tools used for building.

If you want you can of course put StatLight in some other path on your server, but I like to include the tools needed to build in the version control system so the right versions of the tools for a project are always present. This way I can upgrade tools in one project and simply commit them in version control and have the build server pick it up automatically.

Next I also needed to configure the parameters to give StatLight:

-x="Source\Tests.Gws3.Client\bin\Release\Tests.Gws3.Client.xap" -v=April2010 --teamcity

This tells StatLight where to find the XAP-file with the tests. (This file was actually built using our main build configuration so we need to setup a dependency on that.) We also specify which version of the testing toolkit we want to use and that we want StatLight to output a TeamCity compatible report.

image

Build Triggering

Next up is Build Triggering settings. When do we want to run our tests? Well, I think it is a good time to run them whenever our main build has been built successfully so I setup a build dependency trigger for that.

image

Dependencies

So are we done yet? No, not quite… We also need to specify that our build configuration depends on artifacts from the main build project. This means that if that project is out of date, it will be rebuilt before ours runs.

image

 

Conclusion

It may seem like a lot of steps to get it all up and running, but it really isn’t that much work, and once it is set up you can enjoy full continuous integration bliss for your Silverlight projects as well!

If you haven’t tried out TeamCity yet I suggest you check it out! It is free for up to 20 build configurations which should be more than enough to get you started.

One thing I find quite annoying in Visual Studio is when I try to modify an existing database table and get the following error dialog that prevents me from saving:

image

“You cannot save changes that would result in one or more tables being re-created”

So, what the heck is up with that? This seems to appear when you move columns around or change the “Allow Nulls” setting. I know I could do this in earlier versions of Visual Studio, so why can’t I do it anymore? Yes, I know I’m changing the table structure fundamentally, but I know what I’m doing and it should be safe to do it – just let me save please!

Well, the wall of text in the dialog actually tells you what to do. You have to enable this in the VS options. For some reason this is disabled by default and my guess is that it is to prevent people who have no clue what they are doing from destroying the database schema…

Anyway here is where you find the option that you should disable. It is called “Prevent saving changes that require table re-creation”:

image

Does this mean that your entire table data will be erased when you make a change? No, certainly not! All it means is that your data will be copied to a temporary table and then copied back once the table has been altered. I guess this is how it has always worked, but before you never had to bother about it – it just worked.

Well, thanks to this little option it now works again!

A really nice feature in ReSharper 5 is the ability to adjust namespaces for the code files in a directory. When you move files around in your project or between different projects, the namespaces no longer match the directory structure. Updating the namespaces in the code files by hand is quite tedious and error prone. This is especially true when you move user controls or web pages that consist of a XAML file or ASP.NET page with a code-behind class that needs to match.

You can adjust the namespace for all files in a directory by simply right clicking the folder in Visual Studio and select “Refactor->Adjust Namespaces…”:

image

This will give you the following dialog where you can see the changes that ReSharper suggests:

image

Quite handy I think!

The Managed Extensibility Framework (MEF) is a wonderful new addition to .NET 4 and Silverlight 4. The main purpose of MEF is to handle the extensibility and plug-in capability of an application. It features a very simple and elegant method for creating objects and resolving dependencies by decorating your code with import and export attributes.

Managing and resolving dependencies and creating objects is pretty much what a simple IoC (Inversion of Control) container does. So, can we use MEF for this?

The MEF man Glenn Block once said that “you should use MEF to manage your unknown dependencies and an IoC container to manage your known dependencies.” However, I have found that MEF can work pretty well as your one stop solution for all dependencies. It is especially nice if you are building a Silverlight application that already uses MEF (for example to download XAPs dynamically). Why include a separate third party IoC container when there is one built in already?

Here is a short guide for those who want to use MEF for dependency injection in Silverlight.

MEF libraries

First of all, you need to add a reference to the MEF libraries. They are included with Silverlight 4:

image

System.ComponentModel.Composition – You need this reference anywhere you use the basics of MEF such as importing and exporting.

System.ComponentModel.Composition.Initialization – You need this reference where you actually initialize and configure the container.

ServiceLocator class

To use MEF for resolving dependencies I have built a simple class to set things up and provide a way to create or retrieve instances. This is the code for a simple version of this class:

    public static class ServiceLocator
    {
        private static CompositionContainer _container;
        private static AggregateCatalog _catalog;

        public static void Initialize()
        {
            _catalog = new AggregateCatalog(new DeploymentCatalog());
            _container = CompositionHost.Initialize(_catalog);
        }

        public static T GetInstance<T>()
        {
            return _container.GetExportedValue<T>();
        }
    }

The Initialize() method sets up an AggregateCatalog and feeds it with a DeploymentCatalog. The AggregateCatalog can contain many MEF catalogs that are added at runtime. Using an AggregateCatalog is not absolutely necessary in this simple example but it is a preparation for loading dependencies dynamically in the future.

The DeploymentCatalog is used in Silverlight for working with XAP files. It is a very handy class for loading external XAPs if your application is split up in modules. Creating a DeploymentCatalog without any parameters will create a catalog for the main application XAP file and this will allow us to access the exported types in all the assemblies of our main XAP file.

When we call CompositionHost.Initialize() our assemblies will be scanned by MEF and all exports are discovered. We are then ready to call the GetInstance<T>() method whenever we need an instance of an exported class.

Exporting types

Traditional IoC containers often involve configuring components using either XML or registering them using code. With MEF you simply put an [Export] attribute on your class to indicate that it should be available for composition:

    [Export]
    public class Car
    {
        ...
    }

This will expose the type Car to MEF and lets you retrieve it in IoC fashion like this:

    var car = ServiceLocator.GetInstance<Car>();

Often you want to expose a certain interface that your type implements. You can do this by supplying the type in the [Export] attribute like this:

    [Export(typeof(ICar))]
    public class Car : ICar
    {
        ...
    }

The default behavior in MEF is to treat exports as singletons. That is, if you don’t specify anything to override this behavior, you will only get one instance of the Car object in your application no matter how many times you call GetInstance&lt;Car&gt;(). Strictly speaking, if the exported type does not specify a creation policy, the default is to allow either shared or non-shared, so it is up to the importing side to decide what it wants.

If you want unique instances to be created every time you retrieve an instance of your type you can specify this using an extra attribute on your class:

    [Export]
    [PartCreationPolicy(CreationPolicy.NonShared)]
    public class Car
    {
        ...
    }

Dependency injection

Of course an application is seldom built with only a single class. Most likely you have many parts that fit together and you want to use the IoC container to resolve everything smoothly for you using dependency injection. MEF can do this as well, but you have to decorate the constructor to use with the [ImportingConstructor] attribute:

    [Export]
    public class Car
    {
        [ImportingConstructor]
        public Car(Engine engine)
        {
        }
    }

    [Export]
    public class Engine
    {

    }

When retrieving a Car instance MEF will automatically create and inject the Engine in the constructor of the class just like an ordinary IoC container would.

All in all, those 20 lines of code are all you need to get started with MEF as an IoC container. You can now retrieve instances of your objects with injected dependencies, and that’s pretty much what you need in most cases.

Conclusion

I wanted to keep this example as simple as possible and have deliberately left out topics like object lifetime, dynamically loading XAPs and so on. I am planning to write about those things in upcoming posts.

I think MEF provides a great alternative to the traditional IoC containers. It is very easy to setup and get started with. I also like the fact that all configuration sticks with the class, which makes it easy for newcomers in your project to pick things up quickly and write new components simply by looking at an existing class.

I believe MEF is going to be central in all .NET 4 development. Don’t miss out!