
Specific PCIe ports go to the CPU and not via the southbridge

According to an article over at scottlowe.org, "the [Intel Xeon] E5 processors integrate PCI Express root ports, providing upwards of 200 Gbps of throughput. This is compared to the use of the “Southbridge” with the 5500/5600 series CPUs, which were limited to about 50 Gbps."
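For perspective, here is a rough back-of-the-envelope calculation using standard PCIe rates (these figures are my own, not from the article): a PCIe Gen2 lane signals at 5 GT/s with 8b/10b encoding, leaving about 4 Gbps of usable bandwidth per lane per direction.

    Gen2 x4 slot: 4 lanes x 4 Gbps = 16 Gbps per direction
    Gen2 x8 slot: 8 lanes x 4 Gbps = 32 Gbps per direction

So a dual-port 10GbE NIC (up to 20 Gbps) has comfortable headroom in a x8 slot, while a slot wired for x4 is already close to line rate before protocol overhead.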

Later you can read that you should use specific slots on the motherboard for 10GbE PCIe NICs (and probably other high-throughput cards):
"Going back to the earlier discussion about PCIe root ports being integrated into the E5 CPUs, this leads to a consideration for the placement of PCIe cards. Make sure your high-speed cards aren’t inserted in a slot that runs through the C600 chipset southbridge. Make sure that you are using [a] Gen2 x8 slot, and make sure that the slot is actually wired to support a x8 card (some slots on some systems have a x8 connector but are only wired for x4 throughput). Johnson recommends using either LoM, slot 2, slot 3, or slot 5 for 10 GbE PCIe NICs; this will ensure direct connections to one of the CPUs and not to the southbridge chipset."
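On Linux you can verify what a slot actually negotiated rather than trusting the label. A minimal sketch using lspci (the PCI address 41:00.0 is just an example; substitute your own NIC's address from the first command):

    # find the PCI address of the 10GbE NIC
    lspci | grep -i ethernet

    # compare what the card supports (LnkCap) with what it negotiated (LnkSta)
    sudo lspci -s 41:00.0 -vv | grep -E 'LnkCap|LnkSta'
    #   LnkCap: ... Speed 5GT/s, Width x8 ...
    #   LnkSta: ... Speed 5GT/s, Width x4 ...   <- x8-capable card running at x4

If LnkSta reports a narrower width or lower speed than LnkCap, the card has ended up in one of those under-wired or chipset-attached slots and should be moved.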
