Kerneltrap has spoken with Matthew Dillon, a well-known FreeBSD kernel hacker. He has recently been in the spotlight due to many impressive NFS related bug fixes, as well as fixes to the TCP stack. In this interview he talks about these bug fixes as well as his history with computers, programming and FreeBSD. He also discusses Linux, open source, embedded systems, the Amiga (and his DICE C compiler), and much more.
JA: Please share a little about yourself and your background...
Matthew Dillon: I was born in the Bay Area and have lived in Berkeley most of my life, with a minor 10-year diversion to the Lake Tahoe area. I really did mean to go somewhere else for College but the UCB Engineering school unexpectedly accepted me (way back in 1985) and that pretty much sealed it. So I am a happy graduate in EECS with a focus on CS, and have a B.S. (I joke that it's all B.S., which it is for someone like me, but despite my own experience I strongly believe that a college education is extremely important no matter how smart you are. You make a lot of friends in College). I've always meant to go back and get a masters in something (other than CS), and I still might.
In any case, throughout this early period I got involved with the Commodore PET (8th grade), then the Amiga (late high school). I learned 6502 machine code (hex codes, not assembly) on the PET which led to my writing an assembler and editor (in 6502 hex) for the PET, after which I wrote most programs in assembly. I learned C on both the Amiga and in B50-Evans at Berkeley. By the time I took a course in C I already knew it. In any case, this interest in C eventually led to the writing of the DICE C compiler for the Amiga, which I did because I thought Lattice C was too expensive for many Amiga programmers. I sold DICE as shareware and it quite unexpectedly generated a fair chunk of income. This allowed me to expand into later Amiga models (A3000) as well as put together some fairly souped-up PCs (for the times), on which I ran Linux. From late high school through college and beyond I also worked for a small engineering company in Truckee. I wound up doing all of the digital design work and software for several telemetry systems installed in the area as well as a number of other projects. We did everything from single-chip boards to fairly sophisticated memory-managed 68000 boards and everything from digital microprocessor systems to analog/radio systems, power supplies, and so forth. We even did a 600-gate ASIC (chip design) once, which was a lot of fun.
Upon returning to the Bay Area in 1994ish I started a small ISP called BEST Internet with a couple of friends. This rapidly progressed, acquiring TLG (The Little Garden) in later years, merging with a pure web hosting provider called HiWay Technologies, and eventually selling to Verio, at which point I and the other founders decided it was time to do something else. The finish occurred at the height of Internet Mania so we all did quite well. After that I took a year off, then started another startup with my brother. We did a very mundane-sounding billing system which was actually very sophisticated, but our timing couldn't have been worse and we wound up having to idle the company. I still hope to adapt some of the database technology we developed for Backplane Inc for open source use.
At the moment I am focusing on developing the Backplane technology for open-source use, working on FreeBSD, and planning the vacation that I couldn't take the last 7 years.
JA: How is this database technology unique? Where in the open source world would it be useful?
Matt Dillon: The core of the database is a quorum-based peer-to-peer replication system that is able to maintain transactional integrity across all peers and snapshots. The database itself uses a very basic SQL command set (similar to the original MySQL command set). The core replication features have a large number of potential applications including distributed web serving, distributed filesystems, and so forth. That is the piece I hope to adapt.
JA: I remember using DICE C back when I owned an Amiga 2500 in college. Writing a complete C compiler is an impressive task; how much effort was involved, and how well did it perform? What happened to the DICE code? Finally, could it be used to compile something as complex as the FreeBSD kernel?
Matt Dillon: No, it can't compile the FreeBSD kernel. It doesn't implement GCC's __asm__ macros and it doesn't implement inlines.
DICE was turned into freeware. You can find a link to it on my home site. It will compile under UNIX but, of course, it produces 68000 code. It does not really produce floating point code, though. It is fully self-contained and can compile, assemble, and link code into Amiga-style binaries and it can also produce ROMable code. In fact, I used DICE quite extensively for some of the telemetry projects while working up in Tahoe.
DICE does a fair number of optimizations but nothing compared to what GCC can do. It still performs quite well though it can only produce 68000 output.
There is a myth flying around that I wrote DICE in a week. I will say with absolute authority that it actually took *TWO* weeks to write! Ok, I'm kidding. Actually I was able to write the preprocessor (dcpp) and the compiler core in about two weeks, but I still had to use the Lattice assembler and linker and, really, after only two weeks' worth of work about the only thing I could compile was a 'hello world' program. But this was enough to convince me that completing the compiler was possible, and in later weeks I beefed up the compiler core, wrote my own native linker and assembler and eventually started selling DICE as shareware. The assembler took about a week to write. The linker took about a day, though I later decided to really do it right and make it collapse sections properly to support my auto-init stuff and that took a little longer. The preprocessor, dcpp, took about a week.
Of course there were lots of bugs. DICE took many months to refine into what I would consider a 'commercial quality' system. I started selling it as shareware and eventually formed a small company called Obvious Implementations Corporation with a couple of friends (people will remember Bryce Nesbit and John Toebes) to sell it commercially. This was circa 1992. But as time progressed the Amiga started to decline due to Commodore's crash and burn and the increased processing power in PCs. It did well enough as a side project but eventually it ran its course and we decided to give the source away.
JA: You mentioned earlier that you're planning a vacation. Where do you intend to go?
Matt Dillon: I don't know yet. Somewhere warm.
JA: How and when did you get started with FreeBSD?
Matt Dillon: I got into BSD (the original CSRG BSD) starting in my last year of high school. A good friend of mine was taking a number of UCB courses and that gave me access to B50-Evans. I instantly saw the merit of UNIX and of memory management. I was very much into C programming during my Berkeley years but only did a little kernel hacking. Most of my programming during that period was Amiga-oriented. There was a Perkin-Elmer in Cory Hall we had access to and I rewrote the serial driver to use the PE's microcoded DMA (which still required a real interrupt before it would do anything) to improve SLIP performance between it and B50-Evans. I also found a couple of kernel bugs, like one in flock() which crashed the main CS account machine, cory.berkeley.edu twice before I realized that it was *my* program causing the crash :-). I knew Phil Lapsley, and made many friends (quite a few of which are still involved with FreeBSD), but I never really got involved with CSRG. I didn't actually meet icons like Kirk McKusick till well after my college years.
In any case, after Berkeley I moved up to Tahoe where the small engineering company I worked for was located, and FreeBSD didn't even exist then, not really. This was around the time of the AT&T lawsuit. I fell in with Linux and did a bunch of work on the linux kernel, but once FreeBSD really got going I slipped back into the fold and began using it. Simply put, FreeBSD *was* BSD and I just had to go back to the UNIX I knew and loved from UCB. I always laugh when people call FreeBSD a 'Free UNIX alternative'. From my point of view, it *IS* BSD, and it *IS* UNIX, period. As is NetBSD and OpenBSD.
When we started BEST Internet we ran both BSDI and FreeBSD, and other things (I'm not going to talk about the IRIX snafu). Eventually it became clear that FreeBSD was the only way to go, especially considering my penchant for tracking down and fixing OS bugs. My focus was to keep BEST's shell machines, with 2000 user accounts per machine, up and running for as long as possible and that meant finding and fixing bugs as they came up. I also had access to a great deal of hardware in later years at BEST (in the 1996+ timeframe) which put me in a unique position to ferret problems out and fix them. I found bugs in the VM system, in quotas, in the filesystem code, and many other places during those years.
This work eventually led to my getting what we call 'the commit bit', which allowed me to commit directly to the FreeBSD CVS tree. With this bit in hand I decided to 'fix' the VM system (by rewriting half of it), which turned me into a Terror from the point of view of some of the old-boys in the tree who became alarmed at the rate I began committing changes. I think part of it was simply that many of them didn't understand the VM system and therefore didn't understand what I was doing, but since they weren't really interested in fixing the VM system themselves and I was getting massive support from the end-user community, they didn't have much of a choice. This was a time when the VM system had a huge vacuum due to the departure of John Dyson. The only other person doing significant work on it was David Greenman, and he was already burning out after making hundreds of changes to John's in-progress work in order to get the system to not crash. These were the days of 2.2.X.
This led to an overreaction by core which (by my account anyway) led to some rather draconian rules in an attempt to slow me down? I dunno, I'm sure others have a completely different viewpoint. Suffice it to say that there was a lot of friction during this time and I even lost my commit bit for a number of months due to it. It didn't stick, of course, it just made DG's job harder because now he or Alan Cox (not the Linux Alan Cox, our Alan Cox) had to commit my submissions themselves and I was still fixing bugs at an insane rate. Eventually my work proved out both in later 3.x releases and in the FreeBSD-4.0 release, and I became known as one of the VM Gurus.
JA: Your impressive record of successful fixes and improvements stands on its own. On the other hand, have you made any noticeable mistakes during these years?
Matt Dillon: Oh sure, everyone makes mistakes! Occasionally I will break B while fixing A, usually due to an incomplete understanding of the code in question due to lack of documentation (John Dyson never commented his code), but also due to the original code not working as advertised in the first place.
I always document code as I work on it, to make it easier both for me and for anyone else working on the system, and I am not shy about putting assertions in the code for conditions that are supposed to be true. I would much rather hit the assertion and panic early than allow an incorrect assumption to slowly corrupt the system. I started doing this in the 4.X codebase and it greatly contributed to our famed stability in 4.0 and later releases.
Introduced instabilities, either due to bugs or purposeful assertions, typically lasted no more than a few days. The result of this has had a long term stabilizing effect on the codebase. Even now if someone breaks something horribly in the system there's a good chance their breakage will be noticed quickly due to assertions I and others have strewn all over the VM system. Assertions are good.
Sometimes my 'fixes' are misinterpreted as mistakes. This contributed to some of the friction I had with older developers circa 1998. The most notable example of this is the VM Page cache. The cache contains several page queues including a 'cache' queue which is only supposed to contain clean pages. The system is allowed to free pages in the cache queue at any time, so a dirty page in this queue could lead to a loss of data. People had noticed that, in fact, dirty pages could wind up in the cache queue. Instead of fixing the problem they instead applied a bandaid in one of the code paths where they noticed the case and then proceeded to move the page out of the queue. This led to at least three bugs in the VM system going unnoticed (or being noticed but not being traceable) for over a year. When I came across this piece of code I ripped it out without a second thought and then added an assertion to panic the system if a dirty page was ever added to or found in the cache queue. The result was about two weeks of system instability in the development branch during which I found and fixed three serious bugs exposed by the assertion, and we've never had a problem with that particular area of the system since. This practice of asserting conditions as a reality check against a documented algorithm is now standard practice in FreeBSD.
This is why I hate bandaids. A bandaid, in the long term, only adds to the instability of a system. The correct solution is to make the code do what it is supposed to do and assert (panic the system) if it does something it isn't supposed to do. You might get a few panics in the short term, but in the long term you solve the problem. Permanently. Bandaids have the effect of causing problems to return and haunt you, sometimes for years. The dirty-cache-page bug was in the system for at least 3 years because of a bandaid.
JA: I've been following your recent efforts at debugging NFS with much interest. Your turnaround on finding and then fixing bugs has been quite impressive. Can you talk a little about the tools you've used, as well as the bugs found and the fixes applied?
Matt Dillon: I'll handle the tools in an answer to a later question. I sort of became an NFS guru by accident. Having used NFS quite heavily at BEST and even more heavily at home I tended to come across bugs, and being the person I am I just had to fix them. What really got me started was a desire to be able to compile not only the FreeBSD source tree via NFS, but to also be able to place the object tree (R+W) under NFS and do an install over NFS. Over the years various programs, from the linker to 'cp', have started using mmap() and there were dozens of bugs in our VM, buffer cache, and NFS code related to mmap(). At the same time many other programs were beginning to use NFS. NFS bugs became an endemic problem in everyone's implementation, not just ours.
Easily 80% of the bugs in NFS are related to mmap() to some degree. The basic problem we face is actually a feature: we have a unified buffer cache and instantiating a new buffer which already has 'some' of its pages in the VM Page cache is a twisted, difficult process. Add to this specific problems with NFS - such as having to do piecemeal writes to the server (rather than doing full-block-sized writes), integrating the two-phase commit into the buffer cache, and things like file truncation, and there is a lot of room for error. This most recent round of bug fixes has been mostly due to problems in the handling of file truncation.
JA: How stable do you now feel NFS is, with your recent bug fixes?
Matt Dillon: The FreeBSD NFS implementation has always been quite stable. The more bugs we fix, the more stable it gets. I feel our NFS implementation to be the best of all the open-source operating systems and, except for the lack of POSIX locking between clients, very close to commercial offerings from Sun (for NFSv3 anyway) and NetApp. In the last few years we have been in a bug race against programs using mmap(). In 1995 very few programs used mmap() extensively. In 1998 there were a few more (which prompted much of my original work).... NFS was considered stable after that round mainly because it seemed to work just fine with the programs in regular use at the time. Today mmap() is used almost universally to access files and is often combined with ftruncate() and read()/write() operations, leading to more bug fixes.
JA: Can you explain more about the unified buffer cache, and how this is both a feature and a problem?
Matt Dillon: Sure. Let's say you are mmap()ing a file and you are modifying the file both through lseek()/write() calls and through the mmap. The buffer cache collects data accessed through read() and write() calls while the VM Page cache deals with data that has been mmap()'d. If the buffer cache is not unified you wind up with two situations: First, the file data may be cached twice, wasting memory. Second, modifying the mmap() does not necessarily make the data available to a read() call and modifying the data with a write() does not necessarily make it appear in the mmap().
To unify the buffer cache we essentially replace the backing store for the data buffers with the VM pages from the VM Page cache directly, rather than allocating the backing store for the data buffers separately. In other words, the data buffer winds up using the same physical memory as the VM Page cache.
There are a couple of problems with this. For one thing, when we need to write a buffer out to its physical media we have to 'freeze' the VM Pages backing it to prevent a userland program from modifying the pages *while* the I/O (the DMA) is in progress. For another, a buffer cache buffer is typically larger than a VM page, so it is possible for the system to have only some of the VM pages required to back a buffer requested by the filesystem. To make the buffer valid we have to read the missing data from disk. Add to that the possibility that some of the VM pages may be dirty (we don't want to blow away the dirty data when making the rest of the buffer valid!) and the result is a fairly complex set of interactions between the filesystem/buffer-cache code (what we call VFS/BIO), and the VM system.
Additionally, some devices operate with a smaller block size (e.g. 512 bytes), but user-mapped VM pages only understand the concept of a 4K boundary. For example, in UFS a filesystem 'fragment' may be stored on disk in a 1K block, with other garbage in the other 3K (fragments associated with other unrelated files). If the user maps this file into memory with mmap() we cannot directly map the disk block to a VM Page but must instead extract the fragment out of the disk block, zero the remainder, and give it its own private VM Page. We have to do this in the face of filesystem operations such as ftruncate() and write/append, which may extend the physical fragment. It can get quite nasty, especially when you throw NFS (with its piecemeal writes) into the pile. Our NFS performs well in part because we've tackled the difficult task of caching piecemeal writes (e.g. somebody writes 10 bytes at offset 552 in the file) in our unified buffer cache.
This is why the buffer cache has caused us so many problems over the years. The feature... what we get out of it... is data consistency between mmap() and read()/write(), no duplication of data in the system caches, and the ability to cache far more data. Our buffer cache is limited due to having to map the data into kernel memory. Therefore it cannot grow much larger than a few hundred megabytes. But the VM Page cache is universal... it covers all of memory. By unifying the buffer cache and VM Page cache and placing the actual cached data in the VM Page cache we can effectively use all available free memory in the system for caching purposes.
Linux is tackling many of the same issues in their unified buffer cache that we had to tackle in ours. The problem seems deceptively simple but winds up being far more complex.
So one might ask, why wasn't it designed correctly in the first place? Good question! I wasn't around when it was designed but the VM system FreeBSD inherited from CSRG was from 4.4-lite. 4.4-lite's VM system was essentially the Mach VM system, which at the time was full of bugs. John Dyson and David Greenman did a lot of work on early FreeBSD kernels (circa 1995), including unifying the buffer cache with the VM system, but John left before the bugs got worked out, and DG was overloaded. I started working on the Buffer cache and VM system in 1998.
JA: How much of a difference should we expect between the VM of the 4.5 Release compared to the future 5.0 release?
Matt Dillon: Very little to begin with. The VM touches so many parts of the system that multi-threading it is very difficult. Alfred Perlstein tried a while back but it led to massive instabilities in the system. We will try again, but I personally feel that it is better to multi-thread the I/O subsystem first and leave the VM system till last. In my opinion, multi-threading the I/O subsystem will lead to the greatest SMP performance gains.
JA: The 5.0 release is still quite a ways off. However, what are the highlights you're looking forward to?
Matt Dillon: The coolest feature of 5.0 is going to be Julian's KSEs -- basically a totally new way of doing userland threading which combines the best of both worlds: The ability for the userland to switch threads without having to drop into the kernel, and the ability for the kernel to detach kernel stack contexts associated with blocked userland threads on the fly. We will theoretically be able to run massively multi-threaded programs with very little overhead.
JA: In addition to your recent NFS fixes, you've also made some major changes/fixes to the TCP networking stack. What exactly has been changed, and how will it affect the end user in the upcoming 4.5 release?
Matt Dillon: Most of my TCP work has been in the form of relatively simple bug fixes, consulting on the lists, and testing other people's fixes. It is just happenstance that the last couple of minor bugs I've fixed turned out to have major performance implications. For example, we were artificially limiting the maximum number of in-transit packets to 4 in new-reno, which artificially limited bandwidth on longer-haul links. I simply removed that limit. Well, it sounds simple, but actually finding the bug took several hours examining TCP traces. Another example of a fix was to propagate the TCP_NODELAY option from a listen socket to its accepted connections. While this is not strictly required the Linux-oriented 'tbench' program assumed it so our transaction benchmark results were terrible in published comparisons. This minor fix drastically improved our benchmark results.
JA: What other contributions have you made to FreeBSD?
Matt Dillon: Too many to repeat or even remember! One of the things I really try to do is to help people on the lists. This trend started on the USENET groups and then moved onto the FreeBSD mailing lists. Sometimes I pile on more work than I really should, but I have always felt that, ultimately, it is our end-users that make us a success. I can be hard on people who I believe ought to know better, but I'm very supportive to most people. FreeBSD's developers have an unfortunate reputation of being too self-involved and uninterested in the plight of less sophisticated users. The truth is that only a small minority are like that. Most of us will genuinely try to help. I believe that I can bring a great deal of form to our relationship with end-users due to my experience relating to my DICE users - several thousand people in its heyday, all asking me questions!
JA: What main tools do you use when developing?
Matt Dillon: What I have right now is a number of uprights and about half a dozen DELL 2U rack mount servers (2450s and 2550s, 2 CPUs each). Some of them I own, some are from the (idled) Backplane Inc. Most of the machines are SCSI. I have a machine room in my house and I distribute everything with 100BaseTX switches plus some wireless. My workstation is located in the study (the machine room is *very* noisy!) and I have a T1 for internet connectivity. I host a number of friends' domains and email boxes as well as my own. All the machines run FreeBSD except one of the smaller uprights, which runs windows for game playing.
Three of the DELL boxes are dedicated to testing. Bill Paul used two of them to test his BGE gigabit driver, for example. I have -stable on two of the boxes and -current on the third. With some help from Yahoo I have maxed out the memory (4G) in one of the -stable machines for testing purposes. I can do remote-gdb sessions as well as serial console operation using a null-modem cable, allowing me to work from the study. Most of the time I try to get a machine to crash and produce a kernel core which I can then debug after the fact while simultaneously rebooting the test box into a new test kernel.
I use the test boxes to do things like 'make buildworld' loops, to run various benchmark and filesystem testing utilities such as the NFS tester Jordan posted to the lists recently, to test patches, and to try to reproduce bugs. Bug reproduction is what I try for most of the time. If I can reproduce someone's bug in my test environment then I can usually fix it very quickly, which is one reason why the NFS bugs were fixed so quickly. I can test just about anything (except IDE stuff, since most of my boxes are SCSI). I can test WAN performance over the T1, Gigabit performance between two of the 2550's, switched ethernet performance, throughput, packet loss, NFS, build loops, filesystems... just about anything. I'm a computer nut so I have a lot of machines, but for any developer it is always a good idea to have at least one 'test' machine that can be crashed, rebooted, and reloaded at-will.
(Believe me, it didn't used to be this way. I've always had one 'good' machine, decked out to the gills, but until recently I didn't have the funds to put together several).
My build environment is on my main non-test machine, Apollo, which runs -stable. I do all my kernel build and buildworld's for both -stable and -current on this one machine and then install them on the test boxes via NFS. This way I always have complete access to the environment running on the test boxes even when the test boxes are crashed out, and I don't have to worry about losing active work.
JA: Do you use other operating systems besides FreeBSD?
Matt Dillon: I use a number of operating systems on and off. I still have my Amiga! But at the moment I am really only running FreeBSD on active machines (the Windoze box doesn't count, it's just a game machine).
JA: Do you still use your Amiga?
Matt Dillon: We (I still do occasional consulting for Sandel-Avery Engineering up in Tahoe) still have a couple of live telemetry systems based around the A3000 (hopefully not for too much longer though), so I keep mine just-in-case we need to perform emergency maintenance. I turn it on every once in a while but I don't do much programming on it any more because, well, it is so slow and has such a tiny screen :-)
JA: How much, if any, do you keep up with Linux development?
Matt Dillon: I read linux-kernel passively and occasionally converse with various Linux people, including Linus, on issues that affect both OS's (such as SMP locking issues and VM). In Linux's early days I did a considerable amount of work on the TCP stack, and also wrote a cron replacement ('dcron'), but my love has always been with BSD, due in part to having gone to UC Berkeley, so most of my focus these days is on improving FreeBSD's reliability.
JA: Are any of your efforts still found in the current 2.4 Linux kernel?
Matt Dillon: I don't know. I believe they've rewritten the TCP stack several times since my work so I doubt there is much left of it in 2.4.
JA: Do you keep up with NetBSD and OpenBSD development?
Matt Dillon: Well, in a perfect world there would be only one. That said, we don't live in a perfect world. Open-source programmers are free to work on whatever project they want, including splitting off their own version (and this applies to Linux as much as it applies to FreeBSD, by the way!). It isn't really anarchy, it is simply the law of supply, demand, and personal interest.
One thing to keep in mind is that FreeBSD, NetBSD, and OpenBSD feed off each other. When one distribution develops something good, like the new dirpref algorithm and openssh, the others pick it up. We picked both those items up and we also dish out a number of things too, such as filesystem fixes, softupdates, and our ports system. What duplicated work there is (mainly in the kernel core) simply serves to give us multiple test environments to test things on. If one project does something that makes a big difference, believe me the others hear about it and often adopt the code in question!
I would love to see a merge or a partial merge of the three source bases. I believe that it would be possible to merge the non-kernel tree (/bin, /usr/bin, etc....) back into a single CVS hierarchy. The kernel probably could not be merged though - each BSD kernel has its strengths and weaknesses. All three distributions started with the severely broken 4.4-Lite (Mach) VM (essentially), and all three diverged in regards to fixing it.
JA: Who are some people that you admire, in the computer world or otherwise?
Matt Dillon: Hmm. That is a hard one. I admire a lot of people. Anyone who selflessly works towards making other people's lives better, or who expands our knowledge of the universe, or who builds a life out of nothing. I admire Linus Torvalds for not going mad from all the raving lunatics who follow his every word as if it meant something really significant (Linus will be the first person to tell you that it doesn't). I admire Neil Armstrong (and anyone who has gone into space). I admire rescue personnel. I admire teachers (who we never pay enough, shame on us!). I admire the EFF for defending our community even if I don't agree with most of their stands on issues, and of all the charities in the world the one I give to every year without fail is Doctors Without Borders, because I believe the first order of business in the world is to create a basic standard of living for all humanity and too many other charities operate as bandaids instead of making real, long-term improvements in society.
JA: Have you met other prominent FreeBSD hackers, outside of email?
Matt Dillon: Oh yes, there are a number of conferences with a prominent FreeBSD hacker presence. We have a BSD-specific conference now, and of course there is always USENIX. There are also monthly BAFUG meetings (I am rather sporadic going to those though).
JA: FreeBSD doesn't get as much press as Linux, and hence many people have not heard of the more notable FreeBSD hackers. Who are some of these people?
Matt Dillon: Jordan Hubbard and Kirk McKusick are the most well known. There are literally dozens of notables and if I list them and forget one I'll never hear the end of it!
JA: Have you done any recent work with embedded systems?
Matt Dillon: Our later telemetry systems have base stations based on UNIX (FreeBSD), and the telemetry units themselves are embedded systems based around the 68000/683xx/68xx, with a couple of megs of flash and RAM and running an operating system I wrote and compiled with DICE. I still do a considerable amount of work on these systems, usually adding features to the base station or doing standard maintenance. The telemetry units are responsible for the pump controls and alarm handling and have never crashed. Some have uptimes of over 3 years (the last time I flashed the OS). They all have battery backups but failures do occasionally occur due to lightning or an extended power outage (this is at Lake Tahoe so you can imagine the hell these systems go through. One was actually submerged for 5 days in fresh(ish) water and stopped working only after some of the ICs' power leads corroded due to the current/water/mineral interaction!).
I haven't done a whole lot in the last year or two, my interests have definitely gravitated more towards software. But I still love to fiddle.
JA: What do you enjoy doing when you're not hacking on FreeBSD?
Matt Dillon: I am a long time skier (water and snow) and converted to snowboarding a few years ago (we call it 'going over to the dark side'). I changed mainly because after 20 years of skiing it just wasn't fun any more (read: to make it fun required making it a little too dangerous). Snowboarding is a whole lot of fun even at slower speeds! I am also an avid bicyclist and sailor. In fact, right at this moment I am building a rowing boat.
JA: What suggestions can you offer someone who's just beginning to look at the FreeBSD kernel code?
Matt Dillon: Learn how to configure and build kernels, learn how to use DDB (console debugger), how to generate kernel cores, gdb them, how to gdb a live kernel, and just plain start playing around. Documentation is difficult to come by but a number of publications (for example, DaemonNews, www.daemonnews.org) have archives with articles by many developers which put various parts of the system into perspective.
JA: Is there anything else you'd like to add?
Matt Dillon: Well, I could say something about open-source in general. Specifically I would like to say something about open-source and making money. There are two kinds of open-source programmers in the world. No, make that three kinds: There is the open-source programmer who is still in school, the open-source programmer who has a real job, and the open-source programmer who tries to make a living out of his open-source programming.
In many respects, each individual goes through ALL of the above phases. We've all been in (or are in) school, we all must eventually make a living, and having been somewhat disillusioned by real work we have all either tried or will try to make a living from our open-source endeavours. This last item -- making a living from open-source -- has been over-stressed by the open source community (mainly Linux related developers) over the last few years. Guys, if you haven't figured it out by now it is mostly an illusion! The hype made it possible. The crazy stock market made it possible, but it didn't last, now did it? If I take a hundred people I know, only two or three can make a living from their open-source work (and I'm not one of them today!).
The open-source community has to come to terms with this. Don't let it get you down! I read LWN.NET (Linux Weekly News) every week and I see a definite trend towards mass depression as the internet craze settles down into something a bit more sustainable. Don't let it get to you! Face the issue squarely and come to terms with what it means for your own work. If an older generation (that's me! At 35! God I feel old!) can teach the younger generation of programmers/hackers anything it is that the character of open-source will always be with us, with or without Wall Street, and that we open-source programmers do not do these things for a 5-minute spot on CNN, we do these things because they are cool, and interesting, and make the world a better place for everyone. That is our legacy. We are not an anarchy, we are a charity. A very *LARGE* charity I might add!
JA: Thank you very much for all your time! Your answers have been quite enlightening.