No, it's not bad at all. I am just looking at it from the standpoint of a "low cost" server. Granted, 10GbE isn't low cost yet, so what's the point of the rest of the server being $300 if the network can't even keep up?
It's at least simpler from a lot of angles to go 10GbE instead of multiple 1GbE links (even with SMB Multichannel) once you exceed 1GbE speeds. Most newer single hard drives can easily push beyond Gigabit Ethernet speeds, let alone an SSD or a RAID array. My much older RAID0 array can push over 200MB/sec, and when it was somewhat less full it could saturate my 2x1GbE link. With a couple of new 3TB drives I expect it could do justice to a 3x1GbE link.
It would, however, be much simpler if I could just do 10GbE, even if the drives couldn't keep up with the network link.
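Back-of-the-envelope version of that, if anyone wants to play with the numbers (the ~117MB/sec usable per GbE link and the example array speeds are just my assumptions, not measurements):

```python
# Rough math: how many 1GbE links' worth of traffic an array can generate,
# and how much of a 10GbE link that is. The ~117MB/sec usable per GbE link
# and the example array speeds are assumptions, not measurements.

GBE_USABLE_MBPS = 117        # ~1Gb/s minus typical TCP/IP + Ethernet framing overhead
TENGBE_USABLE_MBPS = 1170    # same assumption, scaled to 10GbE

def gbe_links_equiv(array_mbps: float) -> float:
    """How many 1GbE links a given sequential transfer rate is worth."""
    return array_mbps / GBE_USABLE_MBPS

for array_mbps in (220, 350):    # e.g. an older RAID0 pair vs. a couple of newer 3TB drives
    print(f"{array_mbps}MB/sec array ~= {gbe_links_equiv(array_mbps):.1f}x 1GbE, "
          f"or {array_mbps / TENGBE_USABLE_MBPS:.0%} of a 10GbE link")
```

Either way the takeaway is the same: even a modest array outruns one GbE link, while barely scratching 10GbE.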
At least on Z77 (not sure about Z87), there are 16 lanes of PCIe 3.0, but also 4 lanes of PCIe 2.0. An x4 link isn't what any 10GbE card I have seen uses, but that doesn't mean it couldn't be enough. Four lanes at 2.0 speeds are theoretically 2000MB/sec; even with overhead it's something like 1700-1800MB/sec. It's not 10GbE full duplex concurrent, but it could easily handle half duplex at 10GbE and still leave room for a fair amount of opposite-direction traffic.
Heck, there are plenty of gigabit Ethernet cards out there that are single-lane PCIe 1.0a. With overhead, that is only around 220MB/sec of bandwidth, which is a bit less than GbE full duplex, and so far I haven't noticed any real-world impact from that (granted, with my use case I am rarely doing more than maxing Rx or Tx, with only limited traffic in the other direction for ACKs and the occasional "slow" link back, such as streaming from my server while I am shoving a bunch of files to it).
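The lane math, roughly (per-lane rates are the nominal post-encoding figures, each direction; the ~88% protocol efficiency is just my guess):

```python
# Per-lane PCIe throughput (each direction) after link encoding, times an
# assumed ~88% protocol-efficiency factor for TLP headers/flow control.
# Real efficiency varies with payload size, so treat these as ballpark.

PCIE_LANE_MBPS = {
    "1.0a": 250,   # 2.5GT/s with 8b/10b encoding
    "2.0":  500,   # 5.0GT/s with 8b/10b encoding
    "3.0":  985,   # 8.0GT/s with 128b/130b encoding
}
EFFICIENCY = 0.88   # assumed
TENGBE_MBPS = 1250  # 10Gb/s line rate, one direction
GBE_MBPS = 125      # 1Gb/s line rate, one direction

for gen, per_lane in PCIE_LANE_MBPS.items():
    for lanes in (1, 4):
        usable = per_lane * lanes * EFFICIENCY
        print(f"PCIe {gen} x{lanes}: ~{usable:.0f}MB/sec per direction "
              f"({usable / GBE_MBPS:.1f}x GbE, {usable / TENGBE_MBPS:.1f}x 10GbE)")
```

That's where the ~220MB/sec for a 1.0a x1 card and the ~1700-1800MB/sec for a 2.0 x4 link come from.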
Or, in a dedicated server like mine, there is nothing populating the x16 slot; I have a pair of x1 NICs in there and nothing else. You could easily split it into a pair of x8 slots for a RAID card and a 10GbE card. Or, if it were an onboard NIC, you could connect it through the CPU's PCIe lanes and dedicate 8 of them to it, instead of going through the PCH (which I think has 8 lanes for everything, at least as of the 7-series chipsets, not sure about earlier, and is usually what the x1 slots hang off, as well as SATA/SAS).
Or, if it were PCIe 3.0 capable, just dedicate 4 lanes to it.
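In lane-budget terms, something like this (the device list is made up, just to illustrate the split I mean):

```python
# Toy lane budget for a Z77-style build: 16 PCIe 3.0 lanes off the CPU
# (splittable 8+8) plus the PCH's 8 PCIe 2.0 lanes. The device list is
# hypothetical, just to show the allocation described above.

CPU_LANES = 16   # PCIe 3.0, from the CPU
PCH_LANES = 8    # PCIe 2.0, shared with the x1 slots, SATA/SAS, etc.

devices = {                      # device -> (lanes, which pool it hangs off)
    "RAID card":   (8, "cpu"),
    "10GbE NIC":   (8, "cpu"),
    "GbE NIC #1":  (1, "pch"),
    "GbE NIC #2":  (1, "pch"),
}

used = {"cpu": 0, "pch": 0}
for name, (lanes, pool) in devices.items():
    used[pool] += lanes
    print(f"{name}: x{lanes} off the {pool.upper()}")

print(f"CPU lanes: {used['cpu']}/{CPU_LANES} used, PCH lanes: {used['pch']}/{PCH_LANES} used")
```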
Actually... that is my biggest beef with networking vendors right now. EVERYONE seems to be stuck years in the past. I still see most gigabit NICs shipping with only 1.0a support. I see a small handful of newer cards, mostly dual/quad-port server cards, that have 2.0 support. Very few, though. Nothing with PCIe 3.0, at least not on the GbE front. I don't really see anything on the 10GbE front either, though I haven't been looking too closely at that stuff lately.
It just doesn't make a lot of sense to me. GbE is still common and important in a lot of SOHO/SMB servers and networking gear. You could make a quad-port card on a single lane of PCIe 3.0 and handle 80% of the bandwidth the thing could generate with all ports at full duplex. Or even a dual-port card on a single lane of PCIe 2.0. None of this taking up 4 lanes of PCIe 1.0a on a dual/quad-port card.
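Quick sanity check on the single-lane idea, comparing per-direction numbers since PCIe, like Ethernet, has dedicated bandwidth each way (the ~88% efficiency factor is my assumption again):

```python
# Sanity check: can N GbE ports, all running full duplex, fit in a single
# PCIe lane of a given generation? Compare per-direction numbers, since
# both PCIe and Ethernet carry each direction on separate wires.

PCIE_LANE_MBPS = {"1.0a": 250, "2.0": 500, "3.0": 985}  # per lane, per direction
EFFICIENCY = 0.88   # assumed protocol overhead factor
GBE_MBPS = 125      # 1Gb/s per port, per direction

def fits_in_one_lane(gen: str, ports: int) -> bool:
    """Check whether N GbE ports at full tilt fit within one PCIe lane."""
    need = ports * GBE_MBPS                      # worst case in one direction
    have = PCIE_LANE_MBPS[gen] * EFFICIENCY
    print(f"{ports}x GbE on PCIe {gen} x1: need ~{need}MB/sec each way, "
          f"have ~{have:.0f} -> {'fits' if have >= need else 'too tight'}")
    return have >= need

fits_in_one_lane("3.0", 4)   # the quad-port card I'm wishing for
fits_in_one_lane("2.0", 2)   # dual-port on a 2.0 x1
fits_in_one_lane("1.0a", 1)  # what most vendors actually ship
```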
It just doesn't seem to make any sense to me at all, though I'll be the first to admit I know little about enterprise networking and only a bit about SMB networking (but a lot about SOHO networking). I guess it's a personal whim. I can't help looking at things like the new Bay Trail mITX boards, especially as cheap as they are, and thinking: if there were just a single-lane, dual-port PCIe 2.0 NIC out there, I'd put one in, and IF the board supported RAID (which I don't think any of them do), then with a couple of drives in RAID0 plus a small mSATA SSD as the boot drive I'd have an AWESOME micro file server for cheap.
Alas, my whims are not to be catered to.