Live Fast, Code Hard, Die Young

Archive for the ‘Uncategorized’ Category

Mobile?

So I’m sitting on the bus and writing this text on my mobile phone. It takes ages to write this silly little message but it was fun to give it a try! I guess I need a better phone…

Test driven development

Ok, I have to confess… I have become a test junkie.

Why? Because I have now finally understood what I was missing all along.

Historically I have always thought that unit tests are great to have but a pain to maintain. Nobody wants to work with testing. Nobody wants to write testing code. If you write a unit test it will over time just become the ugly child that no one wants to take care of. That was my experience just a year ago. I mean, I understood the point of using unit tests and thought it was really valuable for quality assurance but I just could not find the motivation or time for me or my team to do it properly. It was more like "yeah, unit tests are great to have, but who is going to write them and keep them up to date?"

At this time I clearly had not figured it out yet.

I started working with unit testing around 2001 while working in VB6 (VBUnit). Although I saw the point of having the tests, it was very boring to write them and, on top of that, they were almost never used later, so the work felt wasted. Then I moved to .NET (NUnit) and continued to write unit tests sporadically as a way of testing code without having to write a dummy GUI to run it. This came about thanks to TestDriven.net, which made it easy to run and debug tests with a simple click right in the code. This was probably the first step on my road to fully appreciating the value of unit tests.

Back then I definitely saw unit tests from the wrong point of view.

When I first heard about test driven development (TDD) I was very interested and curious. How is that really done in practice? To me, it sounded really awkward and cumbersome. I mean, I had always been struggling with getting the infrastructure and test database in shape just to be able to run the tests. And over time it always deteriorated into a configuration and dependency mess that made the tests hard to run, which led to the test code being commented out. So to me, TDD sounded quite complicated and time consuming. At this point I still had not understood what I was doing wrong.

Now here’s the thing… Unit testing is not about testing! Or rather, it is about more than testing.

What’s in it for me?
As a developer you may think "what’s the point of this thing that you are trying to convince me to use? Who will benefit from it?" Most developers are lazy and don’t want to do anything other than write code. Preferably the least amount of code needed to solve the task at hand. That’s perfectly natural.

So what is it that is so good about TDD? Is it just a lot of extra work and no gain? Well, the name is confusing to many people because they hear the word "test", and testing is boring so it must be crap. Some people are trying to promote the term behavior driven development (BDD), which is perhaps a better description.

Here are the key features of TDD as I see it:

  • Quick development
    When you get up to speed with TDD it can really boost your efficiency as a developer.

    I’m a very thorough developer and I seldom write code that I do not test before deploying. Usually when I write some tricky code I have single stepped through it with a debugger just to make sure that every branch of it was working as intended. Of course, this kind of manual code coverage takes time and it has to be repeated when code or requirements change.

    With TDD I have now become more confident in the tests verifying that all is ok. If the test says that the code returned the right thing, the code must be working. I don’t have to single step through it. If I’m the slightest bit uncertain about it, I just add more asserts to the test, or write another test to check that I didn’t miss anything, and when I get the green light I know all is ok. It’s darn quick – you just have to churn out code like there is no tomorrow!

  • Easy setup
    In classic development you often have to set up a complicated environment just to make things run. You may need a web server, a database, configuration files etc. It just becomes this big hurdle you have to clear just to get going. And in a large project, when something breaks you get completely stalled.

    With unit testing, and especially mock objects, you can mock all external dependencies so you don’t have to worry about them. Your class just needs "stuff" and you can feed it directly from the test so that it is nicely isolated from the complexities of the real world. Very simple and easy.

  • Refactoring
    We all know that refactoring is important. Constant refactoring is the key to avoiding code rot and cleaning up technical debt. Code will without doubt deteriorate over time, and a sign that it has gone too far is when all developers on a project want to throw out all the old code and rewrite it from scratch.

    Small refactoring is usually safe and quite easy with the tools we have today. However, the key to successful refactoring is to dare to make larger changes when deemed necessary. If you have unit tests that verify that the behavior of the code is intact, you feel much more confident in making large scale changes. Otherwise you tend to play it safe and postpone it indefinitely, thus failing to clean out the code rot.

  • Specification + verification
    When you develop software you usually have some kind of specification. Formal or informal, but you know that your code should do something to fulfill a goal. When you write the code you make sure that it works as intended and you test that it does. But can you be sure about it as time passes? If another developer goes into your code and adds or changes functionality, can you guarantee that the code still works as you intended it to? Does the other developer know about the intent of the original code? You never know, unless you are the only developer on the project.

    Unit testing gives you a way of writing down the specification in code and verifying that it is followed (see the small sketch right after this list). It’s like a recorded conversation between the test and the real code:
    "If I send you this, you return that."
    "If I do this, you should do that."

    And this specification is not just stuffed away in some documentation file but it is part of the code in your project, continuously verified and updated as the code base evolves and changes.

    Treat the code implementing a feature and the unit testing code that verifies it as an equally important part of the project.

  • API design and architecture
    This is a thing that I really love. When you write a unit test you are actually writing it the way you want the API of your class to work. It is such a simple and great way of ensuring that your code is easy to use.

    Additionally, since mocking usually requires you to think about separation of concerns, it forces your architecture to become more elegant and loosely coupled, which is a great thing.

  • Documentation + sample code
    When you write unit tests you are actually writing sample code for how to use your own classes. Writing documentation is usually not very fun, but in this way the documentation practically writes itself. If someone on your project wonders how to use the "SpiffyRegulator" class, he can just go to the "SpiffyRegulatorTests" class and look it up.
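
Here is the small sketch promised above: a test that reads like a specification of the class it exercises. The class and names are made up for illustration, and I use a plain C++ assert to keep it self-contained (in our .NET code the same thing would simply be an NUnit test):

#include <cassert>

// Hypothetical class under test.
struct PriceCalculator
{
    double TotalFor(int quantity, double unitPrice) const
    {
        return quantity * unitPrice;
    }
};

// The test doubles as specification and sample code:
// "if I send you 3 items at 2.5 each, you return 7.5."
void Total_is_quantity_times_unit_price()
{
    PriceCalculator calculator;
    assert(calculator.TotalFor(3, 2.5) == 7.5);
}

int main()
{
    Total_is_quantity_times_unit_price();
    return 0;   // green light – the specification still holds
}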

Besides these features there is another point I have to make. You always need some code that calls the code you are currently developing. In traditional development you test your code through the application that uses it, which often makes it a bit more complicated and time consuming to test. In TDD you use a unit test, and you can simulate things that are hard to reproduce in a real environment. Which way is best? You have to decide for yourself.

You would think that writing twice as much code would slow you down, but in fact, once you get up to speed, this method is extremely fast. You seldom need to run the code in a complicated environment that takes a while to set up and launch just to create the specific scenario you need to test. And you seldom need to step through the code with a debugger, which can be a slow and inefficient way of developing. When you are confident in your tests, you know that you have covered all aspects of your code – not just today, but every day from now on, since the tests are run continuously.

But don’t write tests if the code isn’t worth testing. If the code is dead simple there is no point in having a unit test. Unit tests should be helpful and give you some value as a developer for trying out your code. Don’t fall into the trap of writing tests just to have tests… However, if you are the slightest bit hesitant about whether the code works as it should, then it should be tested (and in different ways to get good code coverage).

I believe in doing a little bit of both. TDD is great because the turnaround is quick and you can maintain a good pace when developing. However, eventually you always need to make it work in the real system, which may require some changes or tweaks.

What I did wrong
Many people try unit testing in the wrong way and get burned. I was one of them. But what was it that I (and most people) did wrong? Well, here are the big mistakes I made:

  • I wrote integration tests instead of unit tests.
  • I did not run the tests continuously.
  • I wrote tests last instead of first.
  • My tests hit the database which equals lots of overhead.
  • I wrote tests just for the sake of having tests, instead of as a way to help me write new code.
  • I did manual code coverage testing.

An integration test is used to test how components of the system work together. If you call a class from your test and it calls another class, which calls other classes or maybe hits the database, you have a problem. It’s no longer a unit test but an integration test. Integration tests are bad because they couple a test to a large part of your code base, which makes it cumbersome to perform changes and refactoring. A small change in your code may affect, and require updates to, hundreds of test cases. That’s bad. Integration tests are also much harder to set up and run because of all the dependencies. In my experience these tests often break, and since it is a lot of work to make them run again we usually commit the sinful act of just commenting them out!

This leads me to the next important point, which is that we did not have any continuous integration system in place for running the tests automatically. It was up to each developer to run the tests on their own, and you can probably guess what happens then… Yes, nobody will run them. And if no one runs them, no one will notice when a test breaks, so a test may be broken for several months before it is discovered. And when it is discovered it is usually at a very inconvenient time, so we "comment it out" and "fix it later". Yeah, like that will ever happen…

The next thing I did wrong was plain stupid. This was mainly back in the old VB days, but I will mention it here anyway because I believe it is a common mistake for beginners. What I did was write all the code just like before, in the classic way, even testing and debugging it from a dummy test GUI, and then, when it was finished, I wrote the unit tests. How silly is that? Well, no wonder I felt that writing unit tests was a boring task. You must write tests first, or at least in parallel with the main code. Adding unit tests after you have already written and debugged all the code is going to make you work slower instead of faster. It may add some quality assurance to the project, but that’s not directly beneficial to me as a developer, and therefore the activity feels meaningless.

The last point I want to mention is that running unit tests should be fast. You should be able to run hundreds of tests in less than a second. If your tests hit the database this is not going to happen. It will be dead slow, and besides, you get all the messy setup problems that I mentioned before because the database must be initialized to a known state. I used to write tests with hard coded database IDs, relying on whatever data happened to be in the database when the test was run. This is a really, really bad way of doing testing!

So if doing these things is so bad, how can we make it better?

How to make it work
I believe there are a few key elements that have emerged in recent years that have made this kind of development possible. Sure, it was possible earlier, but it was way too cumbersome and time consuming.

Tools are a very important part of making it all work.

  • Unit testing tools
    First and foremost you need tools to run your tests. NUnit has its own GUI to run tests, but I really got hooked when I used TestDriven.net to launch the tests simply by right clicking the test method name right there in the code editor. It was so simple and easy to use. Nowadays I use Resharper, and it has a nice integrated unit test runner that works almost the same way. The point is that it should be fast and easy to launch one or more tests, as you will be doing this over and over again during development. Having it integrated in your work environment is a big plus.
  • Continuous integration tool
    This is very important. You must build the code and run the unit tests automatically as soon as possible after any change has been made. It is a bit of work to set up automated builds, but once it’s running you don’t have to do much because it will just work. And it pays off big time. Whenever something breaks you notice immediately. And you get confident that the specification is verified again and again even if it was several years since you wrote the code. For this to work, unit tests must not be skipped. They must be run continuously so that they can be verified.

    We are using Cruisecontrol.net for CI but choose one that fits you, just as long as it works!

  • Inversion of control / Dependency injection
    Aah, this is truly one of the keys to successful TDD. I read about Inversion of Control and Dependency Injection about four years ago when I found out about the Castle framework. However, at this time I didn’t really understand that it could help out in testing. Sure, it was a nice way of structuring an application but that was it, or so I thought. It was not until I coupled it with the concept of mocking that I realized how useful it could be.

    We are using the Windsor container from the Castle project for dependency injection as it has been around for many years and been through rigorous testing. It has worked well so far for us, but there are other choices such as Spring.net and Microsoft recently announced that they are building their own IoC container called Unity so there are plenty to choose from.

  • Mocking frameworks
    As a lazy developer you have just got to love mocking frameworks. When you write unit tests you want to isolate the unit of code you are going to test so that it has no external dependencies. To be able to do this easily, you use dependency injection to inject fake objects into the real object (a small sketch of this idea follows after this list). However, writing the fake objects yourself is such a chore…

    Now mocking frameworks enter the scene, and suddenly you don’t have to write any fake objects. Instead you generate them dynamically on the fly and simply specify what you want the relevant methods to return. Quite nifty! Now I had the final key to unlocking TDD heaven and getting rid of everything that was cumbersome with unit testing in terms of configuration and setup.

    Having a good mocking framework is important and I believe that we haven’t seen the end of this by far. There are still some issues with the existing frameworks and none of them is perfect yet. I’m currently using Rhino Mocks because I really like that it is type safe. I’m also very fond of Moq as the syntax is really clean but I haven’t used it extensively yet.

  • Domain Driven Design
    The last piece of the puzzle came to me just a few months ago when I started reading more about domain driven design. The concept of separating the model from the infrastructure made such perfect sense that everything just fell into place. With this kind of architecture it would be easy to unit test and the system design would be very clean and simple. I just loved it right away, and from now on we are building and refactoring our current system towards this design.
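
To make the dependency injection plus fake objects idea a bit more concrete, here is a minimal sketch with a hand-written fake. All names are invented for illustration, and I use plain C++ to keep it self-contained; in our .NET code the fake would typically be generated by Rhino Mocks or Moq instead:

#include <cassert>
#include <string>

// The external dependency, expressed as an interface.
struct CustomerRepository
{
    virtual ~CustomerRepository() {}
    virtual std::string NameOf(int customerId) const = 0;
};

// The unit under test gets its dependency injected through the constructor.
class GreetingService
{
public:
    explicit GreetingService(const CustomerRepository& repository)
        : repository(repository) {}

    std::string Greet(int customerId) const
    {
        return "Hello, " + repository.NameOf(customerId) + "!";
    }

private:
    const CustomerRepository& repository;
};

// A hand-written fake: no database, no configuration, just canned answers.
struct FakeCustomerRepository : CustomerRepository
{
    std::string NameOf(int) const { return "Alice"; }
};

int main()
{
    FakeCustomerRepository fake;
    GreetingService service(fake);   // inject the fake instead of the real repository
    assert(service.Greet(42) == "Hello, Alice!");
    return 0;
}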

Once you get excited about a new tool or technology there is always a chance that you see it as the "silver bullet" and try to apply it to any problem you face. You know, when you learn to use the hammer you start seeing "nails" everywhere. I don’t yet know if this will be the case with test driven development but so far it has worked very well for me.

As of today I’m actively working with unit tests as a great way of developing software. I’m not a hard core TDD fan, but I do use TDD when I feel it is the best way to move forward and switch to the classic style when I’m either comfortable writing code that is dead simple or working on integration parts that need to be tested in the real environment. Sometimes it is also hard to write clean cut unit tests since the .NET framework isn’t exactly test friendly with its sealed classes and lack of interfaces. I guess Java is better in that respect. But it’s nice to have the TDD tool in your tool belt, ready to crack the next nut when it comes up. 

TDD is a great method of designing and developing software iteratively and it is definitely here to stay.

Thermostatic – Humanizer

A new Thermostatic album is on the way! Here’s a teaser:

A child is born

This week on 8 April our second daughter was finally born! My wife was getting crazy with the big belly in the end so we were really happy when it got going. The delivery took quite a while I must say. Giving birth is a really painful business that can be compared to a 24 hour agonizing torture with a happy ending (usually). But nevertheless, everything went well in the end and the little girl got out safe and dandy and we’re now back home again…

I’ve added a picture of her to the right. Cute isn’t she? 🙂

I have taken a small break from my recent OS development adventure. I’ve been quite exhausted lately so my inspiration is lost. I’m sure it will be back soon so there’s no need to rush anything. For now I’m just enjoying some time off from work and taking care of the family.

Kernel high or low?

One question that I bumped into when starting in OS development is where to put the kernel in memory. You have to think about where things such as applications and data will end up so that they don’t collide with your precious kernel. After all, the kernel will be present at all times and you don’t want it to be in the way so where do you place it?

Well, the simple solution is the one that GRUB gives you, which is to load it at the 1MB mark. In fact, GRUB won’t let you load anything below 1MB, probably because that area is littered with reserved regions for things such as screen memory, BIOS data and so on. Loading the kernel at 1MB is also quite safe because PC computers of today most likely have more than 1MB of memory. (I remember my 486 had 16MB RAM and that was back in 1994, so it should be safe!)

Virtual address space
But PCs have something called virtual memory mapping, which means that you can map virtual addresses to physical addresses. Even if the kernel is loaded at 1MB, we can "pretend" that it is placed elsewhere using this mapping technique. This is where the question of "high or low" comes into the picture. We can map the kernel to address zero if we want, or we can put it high up at the 2GB mark. This is possible even if the machine does not actually have 2GB of memory, simply because we map virtual addresses onto physical ones.
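
To make the mapping idea concrete, here is a tiny sketch. The 3GB base address is just an example (it matches the virt address in the linker script from my Moving to C++ post further down this page); the point is that once the mapping is set up, converting between the kernel’s virtual addresses and the physical ones is plain constant-offset arithmetic:

/* Illustration only – the base address is an assumption, not a requirement. */
#define KERNEL_VIRTUAL_BASE 0xC0000000   /* kernel mapped at the 3GB mark */

/* Physical 0x00100000 (the 1MB mark) then corresponds to virtual 0xC0100000, and so on. */
static inline unsigned int virt_to_phys(unsigned int virt)
{
    return virt - KERNEL_VIRTUAL_BASE;
}

static inline unsigned int phys_to_virt(unsigned int phys)
{
    return phys + KERNEL_VIRTUAL_BASE;
}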

Which solution is the best? Well, let’s say we put the kernel at address zero. Where can we put the applications then? We need to reserve some space for kernel data and bookkeeping records, so maybe we can place programs at the 512MB mark? This means that the kernel gets the first 512MB of address space and applications get the rest.

This would be fine if applications did not use static linking. Let me explain…

Relocation vs static linking
Static linking means that all addresses used in an application are "hard coded" to a certain base address. The application MUST be loaded to this base address or it will crash. This does not sound very smart, but it’s really convenient when you pair it with virtual addressing. All applications can get the same address range (without colliding) so it’s really easy to load them to a certain base address.

The alternative is to use a relocation table in the executable, which can be used to move it to any base address. However, this comes with a performance penalty, as you have to go through all address pointers and adjust them to the new base address. This technique is commonly used for dynamic libraries that are loaded by applications and may collide with other libraries in memory.

I remember that relocation tables were used on the Amiga because it did not have any virtual memory mapping (at least not the first models). In order to load many programs at the same time they had to be relocatable. It’s not that relocation is evil, but I guess static linking is used extensively for performance reasons nowadays.

The problem…and solution
So back to my question – put kernel high or low?

Well, if our kernel sits at address zero and we compile and link programs so that they run from the 512MB mark in memory we may eventually get into trouble. Why? Because we have no space for expanding the kernel. If our kernel needs to grow in size to 1GB we are in deep trouble because then all applications have to be recompiled to a new base address!

Clearly this is bad… We can either require all apps to use relocation tables and pay the penalty, or we can choose another way.

The solution that both Windows and Linux employ is to put the kernel high up where it won’t interfere with the apps. Windows puts the kernel at 2GB (or 3GB with a special boot option). This gives applications a nice 0–2GB address space to play with, without having to touch the sensitive kernel private parts. It also makes it easy to switch to a 64 bit architecture without having to recompile all programs. On a 64 bit system the kernel is simply recompiled at a new location way, way up in memory, and applications still live at address zero but have lots more space to play with.

Placing the kernel high is the way to go in my opinion. Not all hobby OS devs agree on this, but that’s the way I will go. In my next post I will explain how it is done, quite easily.

Moving to C++

I started writing my OS kernel a few weeks ago using a combination of assembler and C. This seemed to be the easiest way to get started, and it sure was. C is very simplistic and easy to compile and link. However, just recently I realized that I really miss the object oriented way of structuring the code. This becomes apparent once the project reaches a certain size, when it’s just not a few files anymore. You start thinking in terms of classes and namespaces and what not, and suddenly C is no longer that convenient. Sure, you can accomplish most object oriented stuff in C as well, but it requires lots of discipline and manual bookkeeping. These things you get naturally in C++…

So today I set out to upgrade my project to use C++ instead. I will of course still use assembler for the low level functionality, but most of the kernel will be fully object oriented from now on.

Compiling C++
The problem with using C++ is that you need to take care of some additional dependencies that GCC requires. Normally these are satisfied by the standard libraries of the system, but as we effectively ARE the system we can’t rely on that. We need to implement everything on our own. No problem though, it’s not that hard.

After some fiddling and reading of guides, mainly the C++ guide in the OSDev wiki, I managed to get my new C++ kernel up and running. These are the GCC options I use for compilation:

gcc -Wall -O -fstrength-reduce -finline-functions -nostdinc -fno-builtin -fno-rtti -fno-exceptions -I./include -c main.cpp

These options tell GCC to do some optimizations (not too aggressive though, because we need the code to be safe and robust). I also disable the standard includes and built-in functions, and turn off run time type information and exceptions. This makes my code lean and mean, just like a kernel should be.

I also had to define a couple of functions that GCC needs for special situations:

/* Required to support pure virtual functions */
extern "C" void __cxa_pure_virtual()
{
    // Called if a pure virtual function is invoked but no implementation exists
}

extern "C" void __gxx_personality_v0()
{
    // Called during stack unwinding when an exception has occurred
}

The first one is called if a pure virtual function is somehow invoked without an implementation. Normally the compiler stops you from doing this at compile time; the only way to hit it at runtime is something odd like calling a virtual function from a constructor or destructor before the object is fully set up. But I guess GCC must be safe and not trust code blindly.

The second function is a bit of a mystery. I have specified that I don’t want any exception handling, but GCC still wants a function for unwinding the stack when an exception is thrown. I won’t be using try/catch so this should never be called either.

start.asm
My kernel is booted from a small piece of assembler code that takes care of setting up some basic structures and enabling paging. I will go through it later to keep this post from getting too long.

However, what I wanted to point out is that I updated the startup code so that it initializes all static constructors and destructors as explained in the C++ bare bones article.
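
In C++ terms, the constructor initialization boils down to walking the table that the linker gathers between the start_ctors and end_ctors symbols (you can see them defined in the linker script below) and calling each entry; destructors get the same treatment via start_dtors/end_dtors. Here is a rough sketch of the idea, not my exact start.asm code:

typedef void (*constructor)();

extern "C" constructor start_ctors;   // first constructor entry, defined by the linker script
extern "C" constructor end_ctors;     // one past the last entry

// Call every global/static constructor before entering the kernel main function.
extern "C" void run_static_constructors()
{
    for (constructor* ctor = &start_ctors; ctor != &end_ctors; ++ctor)
        (*ctor)();
}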

Linking
The last part that was a bit tricky was the linking. I’m not an expert in LD linker scripts, but I managed to put it all together and it works now. For those of you interested in how it looks, here it is in full:

OUTPUT_FORMAT("binary")
virt = 0xC0100000;
phys = 0x00100000;
SECTIONS
{
  .text virt : AT(phys) {
    codevirt = .;
    code = LOADADDR(.text);
    *(.text)
    *(.gnu.linkonce.t.*)    /* C++ classes */
    *(.rodata*)
    *(.gnu.linkonce.r.*)    /* Read only (not used?) */
    . = ALIGN(4096);
  }
 
  .data : AT(phys + (datavirt - codevirt))
  {
    datavirt = .;
    *(.data)
    
    start_ctors = .;
    *(.ctor*)
    end_ctors = .;
    
    start_dtors = .;
    *(.dtor*)
    end_dtors = .;
 
    *(.eh_frame)
    . = ALIGN(4096);
  }
 
  .bss : AT(phys + (bssvirt - codevirt))
  {
    bssvirt = .;
    bss = LOADADDR(.bss);
    *(.bss)
    *(COMMON)
    *(.gnu.linkonce.b.*)
    /*. = ALIGN(4096);*/
  }
 
  endvirt = .;
  end = LOADADDR(.bss) + (endvirt - bssvirt);
}

I will go through all this in a later post, when I explain my kernel memory layout.

Booting with GRUB

The first thing that happens when an operating system starts is that the kernel is loaded by a boot loader. When you develop your own OS you have two options: either you write your own boot loader or you use one of the existing ones available.

I’ve decided to use the GRUB boot loader to launch my OS kernel. Although it would definitely be an interesting task to write my own boot loader, I think it’s a bit of a time sink to build something that works on all types of PC BIOS out there and is also capable of booting several different operating systems. Someday I may do it, but not today. 🙂

Boot image
GRUB needs to be installed in the boot sector of the device you are booting from. This can be, for example, your hard drive, a CD or a floppy disc. Installing a custom boot sector can be a bit of a chore if you run Windows because you need third party tools. Instead of going this route I snatched a pre-made image from the web: a FAT formatted floppy disk image with GRUB already installed in the boot sector.

Now I could have written this image to a physical floppy disc, but it would be slow and painful to run things from a real floppy drive, so I installed an excellent tool called Virtual Floppy Drive (VFD) to get a virtual drive instead. With this tool I can mount the image and read/write to it as if it were a normal disc. Bochs can also boot from it without any problems.

Bochs
I have chosen to mount my image as a B: drive (as I already have an A: drive) using the following .bat file:

D:\Projects\Tools\VFD\vfd.exe install
D:\Projects\Tools\VFD\vfd.exe start
D:\Projects\Tools\VFD\vfd.exe link B
D:\Projects\Tools\VFD\vfd.exe open "D:\Projects\Current\PontOS\bootimage\PontOS.img"

(On Vista you need to run this as administrator.)

Furthermore, you need to modify the Bochs config to boot from drive B: like this:

floppya: 1_44=b:, status=inserted
boot: floppy

Configure GRUB
GRUB has a lot of functionality and I won’t go through it all. However, the first thing to note is that it is split into more than one part. The boot sector is only 512 bytes and just contains a small loader that loads the rest of GRUB. This is placed under \boot\grub\ on the floppy disc in the files "stage1" and "stage2". There is also a file called "menu.lst" there, which contains the menu alternatives that GRUB will show when booting the machine.

title   PontOS
        root    (fd0)
        kernel  /kernel.bin

This is what "menu.lst" contains for my OS right now. The first line is the text to show for the menu alternative, and then there are a couple of lines telling GRUB what to do. "root" tells it which drive to boot from, in this case the first floppy drive (I have configured Bochs so that floppy 0 is the B: drive on my computer). The next command, "kernel", makes GRUB load and launch the kernel image found at "/kernel.bin", i.e. in the root directory of the disc.

All I have to do now is to copy my compiled kernel to the root of the floppy and launch Bochs and it will boot the OS. In fact, I’ve put the copy command in my build script so that it always puts it on the virtual drive ready for testing.

Simple printf

Today has been a busy day so not much coding has happened in PontOS. However, I implemented my own version of a printf() function for printing strings with formatting codes. This is highly useful for debugging purposes. To be able to write out numbers in decimal and hexadecimal formats I also had to implement an "integer to string" function. It made me feel really nostalgic, as it reminded me of the numerous implementations I made in assembler on Amiga and PC when I was an über nerdy teenager… 🙂
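
For the curious, the core of such an "integer to string" helper is just repeated division by the base. A minimal sketch of the idea (the function name and buffer size are my own simplification, not the exact PontOS code):

// Convert an unsigned integer to a string in the given base (10 = decimal, 16 = hex).
// No standard library is used, since a freestanding kernel has none.
static void uint_to_string(unsigned int value, char* buffer, unsigned int base)
{
    static const char digits[] = "0123456789ABCDEF";
    char tmp[32];
    int i = 0;

    // Produce the digits in reverse order (least significant first).
    do {
        tmp[i++] = digits[value % base];
        value /= base;
    } while (value != 0);

    // Reverse them into the output buffer and terminate the string.
    int j = 0;
    while (i > 0)
        buffer[j++] = tmp[--i];
    buffer[j] = '\0';
}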

Tools for OS development

So what kind of tools do you use when writing an operating system?

Well, before I delve deeper into that discussion let me talk about where I come from tech-wise. I admit that I’m a Microsoft junkie. I’ve been using Microsoft tools and technologies since back in 1995 when I wrote my first C++ programs for Windows. Before that I used TASM/TLINK and wrote everything in x86 assembler in a primitive DOS environment and compiled it all with a good old BAT file…

I really like Visual Studio and would love to use it for OS development. After all, you get quite spoiled with the feature rich IDE, integrated debugging, intellisense and what not, but unfortunately it doesn’t cut the mustard when it comes to OS development. At least not the compiler/linker that comes with it. It is simply not suited for that task since it makes too many assumptions about what to include in the final executable. When you create your own OS there is no runtime library to rely on. You have to write all code yourself, including primitive functions such as memcpy() and strlen().

Instead I had to make a journey back in time to my old DOS days and the BAT file. This time however, instead of using TASM/TLINK, I picked out some tools based on advice from the OS dev community.

Compiler and linker
First of all you need a good compiler and linker toolset. I decided early on to develop in C since it is a high level language that still gives you raw access to low level structures and memory pointers. C++ is a viable alternative that I may investigate later on, but for now I use C.

GCC is a widely used C/C++ compiler with plenty of settings. It can produce highly optimized, tight code without unnecessary junk tying it to a specific OS.

Along with GCC I use the LD linker, which takes the object files that the compiler generates and puts them together into one executable file. The good thing about this linker is that it can create many different types of executables, and even raw binary images, which is very handy when starting out in kernel development.
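
With a linker script in place (like the one in the Moving to C++ post above), the link step itself can be as simple as something along these lines, where the script and object file names are of course just placeholders for your own:

ld -T linker.ld -o kernel.bin start.o main.o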

These tools are readily available on Unix systems, but what about Windows? Well, you can use Cygwin, but that seemed quite messy to me. I’m not interested in getting a Unix shell in Windows… Instead I believe the best option is to use DJGPP, which is a collection of Unix tools that have been ported to Windows. This way you get both GCC and LD and some other handy utilities that work perfectly from the Windows command line. You can choose exactly which tools you need from the DJGPP toolset, and installing is just a matter of unpacking the archives and updating the PATH environment variable.

I should also warn you not to use MinGW. This was the first choice I evaluated but it turned out to be a real mess with GCC and LD not supporting all the stuff I needed to compile and link correctly.

Assembler
When you write an OS kernel you will inevitably need an assembler. Some things simply cannot be done in C because you need precise control over the stack and CPU registers. It is also more convenient to write longer pieces of code in a dedicated assembler file instead of writing messy inline assembler in C, which is a total pain.
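
Just to illustrate the point: even trivial things turn into inline assembler when done from C/C++ (GCC syntax below), and anything longer than a line or two quickly becomes unreadable and belongs in a dedicated .asm file:

// Disable interrupts and halt the CPU – about as much inline assembler as is pleasant in C/C++.
static inline void halt_forever()
{
    asm volatile ("cli");       // clear the interrupt flag (disable maskable interrupts)
    for (;;)
        asm volatile ("hlt");   // halt; loop in case a non-maskable interrupt wakes the CPU
}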

The GNU toolset contains an assembler (GAS) but I don’t like its weird syntax. Instead I have chosen NASM which has a similar syntax to TASM that I used before. It is open source and works really well.

Emulator
Next up you need an emulator to run the OS in (yeah, you didn’t think I was going to keep rebooting my own system over and over again a million times while testing did you?)

There are quite a few emulators to choose from, so you can just pick one that suits you. However, you should think about whether you need an emulator or virtualization software. An emulator can emulate things that your current computer does not have, like multi core CPUs and exotic hardware, while virtualization programs often just present a "copy" of your current hardware (and can thus run many times faster as well).

I currently use the Bochs emulator because it seems to be stable and works well. I have also successfully run my kernel in Microsoft Virtual PC. I tried to use QEMU but it just crashed for me…

IDE
Now it would be really nice to suggest a great development IDE here, but the fact is that I haven’t found a good one yet. I tried out Dev-C++ but it is too tied to MinGW for my taste. Besides, it was kind of quirky when editing code because it kept screwing up the indentation. I could probably work around the MinGW dependency in Dev-C++, but I’d rather try to do that with Visual Studio instead, i.e. have VS run a custom make tool.

In other words, right now I’m just using Notepad++ for editing and a makefile for compiling all the source. Although Notepad++ is a great editor, it’s not a development IDE. I feel I definitely need to improve this work environment as the project grows bigger…

Where’s the kernel, colonel?

This rainy Sunday I’ve been spending some time taking the first baby steps for my brand new project: PontOS!

I now have an extremely simple kernel working and running in the Bochs emulator.

Here are a couple of screenshots taken from Bochs:

As you can see, I’ve used GRUB as the boot loader, which is commonly used by Linux systems. I decided quite early on that it would be a wise choice since it is very flexible and supports many features. I will maybe write my own boot loader someday just for educational purposes, but for now this works just fine.

GRUB loads and launches my kernel image which performs some initialization and prints out a simple text message. It took quite a while to get this up and running but my goal for today is accomplished! Stay tuned if you want to know more about how it works.

Riding on the waves of inspiration!