HDD Killers


Posted on Wednesday 18 January 2006

I thought I’d make another blog entry, since I haven’t blogged in a while. This time it’s to clear up a bit of a misunderstanding surrounding PCI-X and PCI-E. PCI-E stands for PCI-Express, PCI-X is just PCI-X, and they are not the same thing, as I had believed up until recently.

After the blog entry I made on the 22nd, “Server Troubles”, in which I wrote, “We think that it’s limited because my RAID card is in a PCI slot and not a PCI-Express slot which has a lot more bandwidth,” I decided that I would replace the motherboard in the server with a PCI-Express board so that the RAID card would not be limited by the PCI slot.

I’d already made a couple of assumptions by this point that I probably shouldn’t have; however, I’ll come back to that later. So I started looking for a socket 754 motherboard that had the following characteristics:

  • micro ATX form factor
  • onboard graphics card
  • onboard gigabit LAN
  • at least one 16x PCI-Express lane
  • socket 754

The micro ATX wasn’t really necessary, but one day I eventually plan on putting the entire PC in a 19″ rack, so a micro ATX board would probably increase the number of cases available. As it turns out, I didn’t find a single board with all of the last 4 characteristics, which is somewhat annoying. I was about to resign myself to buying a low profile PCI graphics card and then a board with the last 3 things on the list, until I got bored one day and started reading up on PCI-Express on Google.

It turns out that a PCI-Express 1x lane has a total raw bandwidth of 625 MB/s (megabytes per second), but its 8b/10b encoding adds two extra line bits to every eight data bits, so the overall effective bandwidth is 500 MB/s. That’s 250 MB/s in each direction. That’s really fast, almost as fast as a SATA-II connection. 250 MB/s is 2 Gbps (gigabits per second), which is twice the bandwidth that a gigabit network has.
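The arithmetic above can be sketched as follows (a minimal calculation; the 2.5 Gb/s per-direction signalling rate is the PCI-Express 1.0 figure implied by the numbers in the text):

```python
# Back-of-the-envelope PCI-Express 1x bandwidth figures.
raw_rate_bps = 2.5e9            # raw signalling rate per direction, bits/s
encoding_efficiency = 8 / 10    # 8b/10b: 8 data bits carried per 10 line bits

per_direction_mb = raw_rate_bps * encoding_efficiency / 8 / 1e6
total_mb = per_direction_mb * 2  # full duplex: both directions at once

print(per_direction_mb)  # 250.0 MB/s each way
print(total_mb)          # 500.0 MB/s combined
```

At 250 MB/s per direction, one lane moves 2000 Mb/s, which is where the “twice a gigabit network” comparison comes from.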

This got me thinking: why was the RAID card I had the same size as a 16x slot, when it only needed an 8x lane? So I did some more reading. It was at this point that I found out that PCI-X and PCI-E are not the same.

PCI-X is an extension of PCI, designed for servers, but it was deemed too expensive to bring to the desktop market. PCI runs at 33MHz with a 32bit bus, which means that on every tick of the clock it transfers 4 bytes of data, equating to 133MB/s. PCI-X runs at 66MHz (faster variants exist too) with a 64bit bus (twice the pins, hence it’s a physically bigger slot), so it ticks twice as fast and transfers twice the amount of data per clock: 533MB/s. Now, the problem with PCI and PCI-X is that this 133 MB/s (or 533MB/s) of bandwidth is shared between all of the PCI/PCI-X slots on that bus, whereas with PCI-Express each individual lane gets the full 500MB/s of bandwidth to itself.
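A quick sanity check of those figures (the clocks are nominal; real PCI actually runs at 33.33 MHz, which is where the quoted 133 MB/s comes from):

```python
def bus_bandwidth_mb(clock_mhz, bus_width_bits):
    # Peak throughput = clock ticks per second x bytes transferred per tick.
    return clock_mhz * 1e6 * (bus_width_bits // 8) / 1e6

pci = bus_bandwidth_mb(33.33, 32)    # shared by every card on the PCI bus
pci_x = bus_bandwidth_mb(66.67, 64)  # twice the clock and twice the width

print(round(pci))    # ~133 MB/s
print(round(pci_x))  # ~533 MB/s
```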

That means that even a tiny 1x PCI-E lane gets almost the same amount of bandwidth as all of the PCI-X slots on a server motherboard put together. PCI-E is really f*cking fast; PCI isn’t.

However, something I did discover is that PCI is fast enough. My RAID card is currently sitting in a PCI slot, and it’s the only PCI card I have in that machine, therefore it gets the full 133MB/s all to itself. So I found a benchmark program called HD Tach that tests the read speed of drives, and I ran it on the RAID array to find out how much bandwidth I really had. The result was this graph.

I also tested the drive that I have in this PC; here is the graph for that. Much less impressive, but certainly not bad. As you can see, the read rate starts high, then tails off as the actuator arm moves in towards the centre of the disk. The disk still spins at 7200 rpm, but as the distance from the centre decreases, there is less disk surface moving past the read head per revolution, therefore the read rate also decreases. Since the graph for the RAID array does not tail off like this, I know that it must be the PCI slot limiting it, not the drives.
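To put rough numbers on that tail-off: at a fixed rotation speed the track length, and so the data passing the head per revolution, scales roughly with radius (real drives use zoned recording, so this is only an approximation, and the 60 MB/s outer-edge rate below is a made-up illustrative figure):

```python
# At constant rpm, data per revolution ~ track circumference ~ radius,
# so sequential throughput falls roughly linearly toward the platter centre.
outer_rate_mb_s = 60.0        # hypothetical read rate at the outer edge
inner_radius_fraction = 0.5   # innermost tracks at half the outer radius

inner_rate_mb_s = outer_rate_mb_s * inner_radius_fraction
print(inner_rate_mb_s)  # 30.0 -> about half the outer-edge rate
```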

So for the array I got an average rate of 102.1 MB/s, or more accurately 97.37 MiB/s (since the program considers 1,000,000 bytes to be 1 MB). So if the full bandwidth of PCI is 133MB/s and I’m getting about 100MB/s, then I’d say that’s fairly good. That’s about 817 Mbps, which is most of a gigabit network connection, so it seems that I don’t need to upgrade my motherboard at all. It must have been something else that was limiting the gigabit network, and I think I know what.
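The unit conversion works out like this (a minimal sketch; 102.1 MB/s is the HD Tach average quoted above):

```python
# HD Tach's "MB" is 1,000,000 bytes; convert the reported average into
# binary megabytes (1 MiB = 2**20 bytes) and decimal megabits per second.
reported_mb_per_s = 102.1
bytes_per_s = reported_mb_per_s * 1_000_000

mib_per_s = bytes_per_s / 2**20       # ~97.37, the figure quoted above
mbps = bytes_per_s * 8 / 1_000_000    # ~817 Mb/s of a 1000 Mb/s gigabit link
```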

On my travels I also found this article, which briefly touches on gigabit network cards in PCI slots, though I already knew about the limitations of PCI slots by that point. It does, however, offer a couple of ways to improve network transfer speeds, and I have made all of the appropriate changes. I have been unable to test the changes yet, though, because even though I have two PCs here with gigabit network cards in, the server does not seem to like connections made to its gigabit interface at the moment. In fact, it has never really liked connections to its gigabit interface; I think I’ll have to reinstall the operating system on it and then give it another try.

    rhsunderground
    Tuesday 24th January 2006 | 5:25 am


    David
    Tuesday 24th January 2006 | 8:02 pm


    Ben Rogers
    Thursday 26th January 2006 | 8:12 pm

    You’re not a nerd, you’re just David, ’nuff said. I have never known a non-geeky David.
