Timothy Miller is a long-time developer of graphics chips and drivers. He has observed a growing trend among graphics hardware vendors to provide less and less information to free and open source operating system developers. Without this information, it is becoming more and more difficult to purchase new graphics hardware that is stable and reliable on Linux and other free and open source operating systems. In response, Timothy worked with his employer, Tech Source, to form the Open Graphics Project.
The Open Graphics Project is a collaboration between the Free and Open Source Software (FOSS) Community and Tech Source Inc. to develop new 3D graphics products that are compatible with Free Software, both philosophically and practically. The project is currently designing an "open source friendly graphics card" which will offer quality 3D and 2D acceleration with an impressive feature set at an affordable price, aiming for availability as early as June of 2005. Though the project was only started in October of 2004, it has already released the card's specifications, a design document, and a software model for early testing and driver development. In this interview, Timothy provides a wealth of information about the project and its current status, highlights contributions needed from the free and open source community, and fully describes the specific capabilities of the card.
Opening statement by Timothy Miller:
"In the recent past, graphics hardware vendors have tightened their focus on Windows support and their competition with each other. The result is that what was once difficult has now become impossible: getting open source drivers for your graphics card. Members of the FOSS Community, such as myself, have identified a growing need for new graphics hardware that is designed with the FOSS community specifically in mind.
"To have open source driver support, the minimum that FOSS developers need is documentation on how the hardware works. Since other vendors don't provide documentation anymore, I decided it would be a top priority to provide it. Another thing that is often nice to have is vendor involvement in open source driver development. Since other vendors don't provide driver source anymore, this also would be a high priority. Even when they did provide documentation and source code, vendors were still providing very little information about the inner workings of their designs. I decided that our design process would be open; this way, many people can educate themselves about 3D graphics and developers can have more intimate knowledge of the inner-workings of the hardware to help them maximize conformance and performance. The open design process also ensures that we include all the features that are essential to the Free Software community.
"These principles boil down to what it means to be a cooperative member of the FOSS Community: Open specs, open drivers, and even open BIOS.
"The benefits will be many, not the least of which will be the ability to plug the graphics card into your computer, have it 'just work' out of the box with your favorite OS, and not have to worry about buggy proprietary drivers crashing your system.
"It has been a great privilege to work with the FOSS Community on this project up to this point. I know it's cliché to say it that way, but this project has come a LONG way since it started, and the community has played a major role in that progress. We have had to learn a great deal about 3D graphics in a short period, and we would never have been able to do it if it were not for a significant number of very knowledgeable people who have been helping us. It's wonderful to be able to ask questions and get prompt answers or post an algorithm and have people critique and correct it. The "many eyes" mantra of the open source movement really is true!"
Jeremy Andrews: In October of 2004 you started a discussion on the Linux Kernel Mailing list about creating an "Open Source Friendly Graphics Card". What exactly is an "Open Source Friendly Graphics Card?"
Timothy Miller: In the past, we could ask graphics companies for their specs, and they would grudgingly give them to us so that we could write drivers for their products. Unfortunately, in the recent past, the tide has turned, and all graphics manufacturers that did publish specs have stopped doing so. Cards that are supported by open source drivers are years old and fast approaching obsolescence. Before long, you will not be able to buy a new graphics card off the shelf that has open source driver support.
Since Linux and other Free operating systems have grown in popularity, graphics card makers have started paying attention to us, the FOSS Community. But despite the growing mind-share of Linux, their response to FOSS has been to produce closed drivers. While many people are satisfied with the closed drivers, they taint the Linux kernel (making it impossible to debug video-related issues and problematic to debug anything else), usually contain bugs that can't/won't be fixed, do not conform to established Linux driver methodologies, typically break with each new revision of the Linux kernel, and have quality and performance that is generally poor compared to their Windows counterparts. These companies certainly have both the financial and technical resources to make great drivers, yet the drivers they produce for Linux are sub-standard. 
I started the Open Graphics Project to rectify these problems. When I use the phrase "open source friendly", I am talking about disclosure. To make hardware work well with open source, you need open specifications. You need to be able to read about the full register interface, and the drivers need to be open source.
As a developer, I want to be able to know how the hardware works so I can write drivers for it. As a user, I want the drivers to be open source and incorporated into the mainline Linux kernel (and X11 and Mesa) so that when I plug the card in, it will JUST WORK.
The goal of the Open Graphics Project is open development of open-spec graphics hardware that will be supported by open source drivers.
My employer, Tech Source Inc., is now offering to fund that project, and should the project succeed, they will commit to making the product available as long as sufficient demand exists.
JA: How did you come up with this idea?
Timothy Miller: I am an experienced graphics chip designer and driver developer, working for an experienced graphics company. It seemed only natural that we should pay attention to market demands and give people what they want. Plus, I had trouble finding a new graphics card for my Linux boxes at home and work that would work properly with open source drivers, and I found that to be incredibly irritating.
JA: How long have you been designing graphics chips and drivers? What are some chips that you have worked on?
Timothy Miller: When I was in high school, back in the late '80s, I developed a terminal program, ANSITerm, for my Atari ST which was unique in that it could display 80-column text in 16 colors, something that was "impossible" for the Atari ST but very important for most bulletin boards. For that, I had to write text rendering code in carefully optimized 68000 assembly. I guess you could say that was one of my early exposures to something resembling graphics driver development.
When I was in college, my senior project was a ray tracer. Certainly nothing like POV-Ray, but it got me an A. I also took a grad course in graphics.
At Tech Source, I've developed the bulk of the code for X11 drivers for at least 13 different GPU chips from a number of different vendors, including our own chip, TROZ.
My first exposure to ASIC design was MOX. MOX (named for the X11 extension Multiple Overlay eXtension) is a video post-processing chip that manages multiple bit-plane groups of overlays on a per-pixel basis. It's necessary for ATC systems that need an efficient way to deal with multiple independent layers of information. MOX is an open standard that was published in the May 1996 issue of The X Journal, written by John Kulesza and Peter Feil. I didn't contribute much to the design of the MOX chip, although I did the X11 extension code for it.
In 2000, we decided to develop our own GPU to be optimized for ATC applications. We named this chip "TROZ" after something a famous cartoon character liked to say. I was picked to do the job, because I had the most intimate understanding of what we needed the chip to do. (Among other things, ATC (Air Traffic Control) systems require high resolution (2048x2048@60 and above), high dot rates, and certain minimum performance figures on various functions.) I taught myself Verilog and chip design in general, although I should say that learning Verilog is about 1% of what you need to know to design chips. In the end, I developed the HDL for the PCI interface, the rendering core, the video controller, and various bits of interconnect logic. Other engineers did the memory controller, the peripheral/PROM interface, and all the physical stuff involving timing analysis, floor-planning, layout, pinouts, electrical, etc. This chip is currently in use in a number of our products, mostly ATC.
I've also contributed to a number of FPGA designs since then, including video post-processing for Medical Imaging and an updated version of MOX.
Open Graphics Project:
JA: Does the project have an official name?
Timothy Miller: Depends on what you mean by "official". We're calling ourselves the "Open Graphics Project", and that's alright. We're still trying to come up with a good name for the chip.
JA: What's the current status of the graphics card?
Timothy Miller: We're winding down the second phase. The first phase was to get community feedback to find out what features are necessary for a finished product. The second phase was to implement a software model of the 3D renderer so that the correctness of our design could be proven.
Right now, the software model is nearly complete. At this point, it's been through a number of successful tests but needs more testing, and it needs the missing 2D features to be added in.
The design we have at present basically amounts to the second half of the OpenGL pipeline. We've been working from the OpenGL 2.0 spec, but we're leaving out a number of features that our experts have determined aren't important enough to justify their impact on other features. I guess you could say that we implement most of OpenGL 1.3 plus a number of features from 1.4 onward. All geometry and vertex processing will be done in software on the host computer. That will generate rasterizer parameters for our hardware, which implements a fixed-function fragment pipeline.
JA: What does this mean to a non-graphics card developer? Will performance suffer because geometry and vertex processing is done in software on the host computer?
Timothy Miller: In short, yes, for a lot of really high-end 3D graphics, performance will suffer. Geometry and vertex processing takes a fair amount of computational power, and those GPUs that do it in hardware will naturally have a performance advantage.
But it needs to be clear that this project is intending to meet a common need, not push the bleeding edge of 3D technology. For 90% of what you want to do, its performance will be MORE than adequate. Our first priority is to provide acceleration for desktop and workstation graphics, including the new 3D GUIs that people are working on. (Think MacOS X whose "2D" desktop is really done entirely by the 3D engine; Longhorn will do the same, and KDE and GNOME are catching up too.) Our second priority is acceleration for the most common 3D graphics applications.
One of the most common causes of slow graphics cards is poorly-written drivers...
I haven't looked intensively at XFree86 drivers in a long time, but one thing I noticed about some of them in the past was that they didn't take advantage of parallelism between GPU and CPU. There are times when you have to wait on the GPU. One is when the command queue is full. Another is when the GPU is busy but you need to flush it so that you can directly access graphics memory (like if you need to read it). What I noticed is that the drivers would just busy-wait under those conditions. That's silly, because you're just burning CPU cycles for nothing. I am working on solutions to this problem.
Another way to squeeze out extra speed is to use DMA. Accessing the expansion bus (PCI, AGP, etc.) is VERY slow compared to accessing main memory. Even if you do sleep when you have to wait on the GPU, if you're programming the GPU via PIO (Programmed I/O, explicit reads and writes to the bus initiated by the CPU), you're still wasting loads of CPU cycles. The solution there is to put commands into main memory and talk directly to the GPU only to initiate DMA. This way, the CPU finishes its work quickly, and then the GPU can fetch commands in the background.
My intention is to include all the necessary features so that we can minimize wasted CPU cycles and maximize GPU utilization. This sort of thing will make up for all kinds of ways that our GPU might otherwise be considered "slow."
JA: What's involved in testing the 3d model? Who can be involved?
Timothy Miller: Working with the model is almost the same as writing directly to registers in a graphics chip on the AGP bus, although instead of writing to a memory region mapped to hardware, you're just filling in a data structure in memory.
At this point, we have successfully run a number of simple tests. Those include gouraud shading, texture mapping, and alpha blending. Writing tests involves a fair amount of computation. I've written some code which demonstrates how to compute rasterization parameters for the model, and I have written documentation to explain it.
There are many ways it needs to be tested, and they all need to be tried.
Who can be involved? Anyone! I have written code that does some simple vertex processing, and people can use that to see how to use the model. I welcome everyone to have a look at the code and tinker with it, whether it's for curiosity, checking for OpenGL conformance, or trying to see who can find the largest number of bugs. :)
BTW, just because you may not know a lot about 3D graphics doesn't mean you can't help. I've gotten plenty of very useful assistance from people who CLAIM to be novices. They just underestimate themselves. :)
JA: What has been the most difficult aspect of the project so far?
Timothy Miller: I think the hardest part for me is maintaining the enthusiasm and attention of the FOSS Community. People who find out about the project are usually very interested, but the word doesn't spread, or doesn't spread well enough. For instance, I responded to a recent Slashdot link to a THG article on Linux gaming, where I mentioned the Open Graphics Project. Someone responded to me, saying they thought our project was 2D-only. The project hasn't been 2D-only since the first post I made to LKML in October, and even that lasted about a day before it became clear that we had to do 3D.
Another problem is that once we've gotten attention, it's hard to maintain it. People want hardware in their hands, so it's hard for them to stay excited about an idea indefinitely. I'm hoping the release of our software model will get more people involved.
Another challenge has to do with tempering expectations. The first generation of this product won't be cutting-edge. But on the other hand, it's as good as many of the graphics cards that are currently in use by countless Linux users. We've had lots of people asking for advanced OpenGL 2.0 features like programmable vertex shaders, but some things are just unrealistic right now.
JA: What are some other major challenges that still need to be tackled?
Timothy Miller: I think at this point, they're all technical and financial. We have to actually design the chip. I have a really clear idea of what we need to do (in part because we've done it before), but it just takes time and effort.
The second problem is funding the project. Production of a graphics card is expensive, and Tech Source will need to satisfy a number of financial questions before we start building graphics printed circuit boards. That being said, the break-even point isn't that many cards, as the initial design is "modest" by most graphics standards.
Another "technical" problem is really more social and has to do with developing software for it. We want to go from nothing to a completed product by about June of 2005. While I have designed both graphics chips and graphics drivers myself before, the whole design effort is very time-consuming. Having the help of volunteers to develop the driver software will help the schedule considerably. The challenge here is that FOSS developers are used to having to beg for access to hardware specs. It's the good guys versus the big, mean hardware company. So when a hardware company comes along and asks people to assist in development of drivers, they get suspicious and worry they're going to get cheated. People expect to get taken advantage of by commercial vendors, and it's hard to convince them otherwise.
We will commit to developing initial versions of our drivers, but for the project to finish in a reasonable time, we're going to need a lot of help.
Supporting Free and Open Source Operating Systems:
JA: What's to keep Tech Source from closing down the next revision of this card, after developers have donated their time to create a functional driver?
Timothy Miller: Let me first say that I am first an advocate of Free Software. Only second am I an employee of Tech Source. My employer and I have a bond of trust, and I know that they would never ask me to violate my ethical principles.
(1) The truth is that we only have a really good idea of what to do in the first place because people in the community were willing to help. It's also clear that our rate of progress would be much slower without their help. There's no way we're going to want to give that up.
(2) _I_ work there. The Open Graphics Project IS NOT A TECH SOURCE PROJECT. It's a community project that Tech Source is willing to fund. I came up with the idea, and after some discussion, my boss decided to let me do it. In return, there are certain things Tech Source wants to get out of it, and I think that's fair. But the point is that although Tech Source interests may influence the design, we are all together in a symbiotic relationship that would fall apart if either Tech Source or the FOSS Community dropped the ball.
(3) Balance. Tech Source's other business and the Open Graphics Project will benefit each other in many ways. I'm sure the community will appreciate how mission-critical design requirements and lessons learned from Air Traffic Control and Medical Imaging systems will creep into the hardware and software we develop for the Open Graphics Project. I know this sounds odd, but what we're building here is a relationship with the community, and you can't buy a relationship.
(4) Value. Tech Source would benefit from a long string of FOSS products. The FOSS community, being what it is, would quickly and wisely pull support from Tech Source's products if Tech Source acted in a heavy-handed manner.
JA: What information about the card will not be freely available?
Timothy Miller: Nothing of much value to the FOSS Community. The founding principles of the FOSS community are Free and Open Source SOFTWARE. In that regard, we are going well beyond the minimum level of disclosure. Every aspect of the design and interface is being carefully documented and made freely available. There's not much else to publish. There is the internal logic of the chip (HDL, Hardware Description Language), but looking at that would generally be more confusing than helpful to most people.
A few people have asked to see the HDL. One problem we face is that hardware design is expensive. Compilers are readily available and software is easy to copy. Hardware design tools, on the other hand, are expensive, and so too is the reproduction of chips and boards. Oh, and don't forget inventory, marketing, sales, packaging, and technical support, just to name a few. Tech Source needs to be able to recoup that expense. We feel it's important to keep the HDL as our "value add" so that another company doesn't copy the design before we've recouped the initial investment.
However, for a variety of reasons, the HDL will be published no earlier than when the initial investment is recouped and no later than when the product reaches end of life.
I don't think any other hardware vendor has ever before been so open and cooperative. No other hardware vendor releases their internal chip logic, so if anyone thinks we're not giving away enough by not releasing the HDL immediately, think again. I think what is being put forward is FAR more open than any hardware vendor has ever been before (excepting, of course, some of the computer kits you could buy in the 70's).
JA: What sorts of capabilities will this graphics card provide, and how will it compare to the current competition?
Timothy Miller: Just a few highlights:
OpenGL support includes most of what you'd expect, including trilinear filtering, multitexturing, etc. The rendering pipeline is fully floating-point, internally. Framebuffer pixel format is always 32-bit ARGB.
JA: What are some examples of 3D graphics applications that will benefit from this card?
Timothy Miller: In terms of existing software, there are lots of things that make use of 3D, like CAD and graphing software, games, animation, and the latest desktop environments. Games are the most important, and while we're not trying to develop cutting-edge technology here, many games will still perform very well on this hardware. In addition, new application developers can make use of the 3D features to perform special effects for TV applications.
JA: Earlier you mentioned that you still need to add some 2D features. What 2D features are missing?
Timothy Miller: Well, "missing" may be putting it too strongly. Most 2D operations are simply a special case of 3D, so we can accelerate them just fine with the 3D pipeline as it is. There are some 2D things, however, that would be very inefficient to support that way.
For instance, simple repeating patterns can be accelerated as textures, but the most common size for color tiles is 8x8, and the most common for monochrome stipples is 32x32. Adding a little extra logic can make those common 2D operations a LOT faster and easier to program.
Another thing is text. For xfs (the X Font Server), which handles smooth/antialiased characters, the 3D pipeline is well suited to rendering them. But for native X11 fonts, which are monochrome, a more efficient way of dealing with them would be nice. One thing that is still truly missing is arbitrary 1D line patterns, because even if you use the 3D pipeline, the patterns are restricted to powers of two in length. (Note that it's actually uncommon to accelerate dashed lines anyway.)
Some other things not supported directly (but for which there will already be at least a reasonable way of supporting them) include XCopyPlane and non-power-of-two stipples and tiles.
I've had to deal with other graphics chips which did not support some of these things well. For those situations, I have developed tricks to make them perform well despite the lack of direct support. I intend to share some of those algorithms.
For Windows GDI support, there are things called ROP3 and ROP4. In the X11 world, we're used to raster operators that you can set using XSetFunction(). These let you combine source and destination pixels according to 16 different logical operators. Windows ROP3 lets you combine source, destination, and a pattern in arbitrary ways (256 different logical functions), and ROP4 lets you combine those three with a mask as well (65,536 functions). I have added a bit of logic to support ROP3, but only if the pattern fits into 8x8. There's no direct support for ROP4, but you can emulate it using multitexturing, where one of the textures sets the alpha channel to 0.0 or 1.0. Anything outside of these constraints will have to be supported in a less convenient way.
One of the challenges is that it's a bit awkward to combine 3D, which deals with floating point color values, with 2D, which deals with integer color values. As a result, I have put the 2D-specific stuff near the end of the pipeline, which has resulted in it being a bit less flexible than it would be if I could put it a lot earlier.
I've said all along that combining the 2D and 3D rendering (necessary due to cost constraints) would compromise 2D performance. However, I don't think it'll compromise 2D performance in a way that anyone would notice.
JA: What programs will benefit from these 2D features?
Timothy Miller: While we're focusing heavily on 3D in this project, 2D graphics is still dominant. All of your usual X11 and Windows applications are dependent on certain drawing features that are obsoleted or ignored by 3D. In order for existing 2D apps, like GNOME, OpenOffice.org, etc. to be efficient, most of those "legacy" 2D features still need to be supported.
JA: How well should these specs be able to handle the graphics intensive games that are currently out on the market, and those yet to be released?
Timothy Miller: Keep in mind that no graphics card on the market can fully support Doom III, with all features turned on, at a high framerate. So the fact that a card like this couldn't handle it shouldn't surprise anyone.
Also keep in mind that this card is targeted at an audience that values Free Software and system stability over high performance.
However, there are lots of existing games that'll probably work just fine. It all depends on the nature of the game, really.
JA: How do you intend to assess whether enough people will buy this graphics card to meet minimum volumes? Is it possible that after all your development effort, the card will never be produced?
Timothy Miller: It's very hard to predict the market. There has been a lot of feedback to support the project so far, but plenty of people also say they'd rather buy a used Rage128 from eBay for $20. I'm just hoping enough Free Software believers will invest in their future and buy a product that is specifically intended to advance the availability of good hardware that is compatible with Free Software.
To reach large enough volumes, we're looking for other places to sell the product. There's a fair-sized market for embedded GPUs. If we can increase our volumes with the embedded folks, it'll allow us to succeed even if our first generation product isn't interesting enough to most people.
What drives me on this project is the idea of having the finished product in my hands. My dream is to be able to move my Radeon card from my Linux box into a Windows box where it belongs and replace it with one which is designed to work well with Free Software. I should consider the economic aspects of it more, but this is my "itch to scratch", as it were, and the drive to do it is very compelling.
Here's how people can help with this: If you are a Linux hardware vendor, contact Tech Source. If you KNOW a Linux hardware vendor, contact them and tell them to contact Tech Source. If we could get three Linux hardware resellers to commit to buying cards in volume, that would help solve some of our challenges.
JA: At what price range does Tech Source intend to market the graphics card?
Timothy Miller: We haven't yet determined the retail price. It really depends on the kind of volume we can get. The greater the volume, the lower the parts cost, and the more the development costs can be amortized across sales volume.
Note that we are taking every effort to minimize the price. We need to strike a balance between offering an affordable product and making enough extra to be able to fund future products. Our hope is that the price will be a maximum of $200.
For those who want to voice their opinion, one of our mailing list members put up a petition/poll.
JA: How many cards would have to be sold in order for Tech Source to break even?
Timothy Miller: Since we haven't yet finalized our design, we can't predict the parts cost. We also don't have a clear idea of the volumes, which affects parts costs. And on top of that, we have to amortize development costs. At this point, if we got a clear commitment for 10,000 cards, we'd be in good shape.
JA: When can I expect to be able to buy one of these graphics cards?
Timothy Miller: I'm an optimist. I'm shooting for June of 2005. If we can get enough help from the community, I believe we can do it.
JA: Is it, or will it be possible to pre-order a card?
Timothy Miller: We are considering the idea of accepting pre-orders. There are two ways to go about it, and I don't see why we couldn't do both at the same time. One is a "promise to buy" which states that you intend to buy one when it comes out, but you're not giving us your credit card number. The other is an actual "order" where we do take your credit card number but don't charge it until we ship.
JA: What operating systems do you intend to support by June?
Timothy Miller: At the very least, Linux on x86. The hope is that the FOSS Community will help us quickly port to all other platforms.
There are other important platforms we want to get to quickly, like *BSD and Linux for PowerPC. Because the drivers and specs are open, it shouldn't be hard for us all to work together to port to any platform people like.
BTW, we will be offering engineering samples of the card. If you are a developer and you want a pre-production version of the card so you can develop drivers for your favorite platform, talk to us about it! If enough developers get on board, we can have drivers for dozens of platforms ready in time for the hardware roll-out.
JA: Do you expect to support non-free operating systems, too?
Timothy Miller: I want Windows drivers mostly for people who dual-boot. The ReactOS people have expressed some interest in that, and we'd love to have their support in developing for ReactOS and Windows.
There are really no restrictions on what platforms people can port to. If you can develop drivers at all, you can develop drivers for this card. Drivers for all platforms will be as open source as is legal, and the specifications will be out there for anyone to look at.
JA: If everything works out, and you sell enough cards to make a profit, what happens next?
Timothy Miller: The very next thing to support is multi-head. The two most important features people ask for are TV-out and multi-head. We think we'll already have TV-out, so we'll follow the single-head version with a dual-head. If there's still more demand, we can go on to do quad-head.
Following that, we'll start working on enhancing the design to include more features that people want. These include things like cube-mapping, 3D textures (voxels), programmable fragment processing, and hardware-accelerated geometry (vertex processing).
I hesitate to talk too much about future plans, because if people decide to skip the first version and wait around for the next version, there won't be a next version to wait for. Future products are contingent on the success of the first one!
JA: Thanks for your time answering these questions, and for your efforts in making an open source friendly graphics card! I'll be standing in line to preorder, and eager to try my new card in June.
Timothy Miller: Thank you. I can't wait for it either!