IBM DataPower Operations Dashboard v1.0.17.0


Configuring Cell Members with 4 CPU Sockets and NVMe Disks

Overview

Physical DPOD Cell Members that are required to process a high transactions-per-second (TPS) load include 4 CPU sockets and NVMe disks to maximize server I/O throughput.

DPOD uses NUMA (Non-Uniform Memory Access) technology to bind each of the Store's logical nodes to a specific physical processor, disks, and memory in a way that minimizes the latency of persisting data to disks.

Note: If the cell member server does not have 4 CPU sockets or does not have NVMe disks, do not perform the steps in this document.
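
DPOD applies this binding automatically when the cell member is federated, so no manual binding is required. Purely as an illustration of the mechanism, the following sketch shows how numactl pins a process to the CPUs and memory of a single NUMA node (the node number and the process path are placeholders, not actual DPOD components):

# Illustration only - DPOD performs the equivalent binding automatically
numactl --cpunodebind=1 --membind=1 /path/to/store-node-process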

Enabling NUMA in BIOS

Make sure NUMA is enabled in the physical server's BIOS. You may need to consult the hardware manufacturer's documentation on how to achieve that.

Note: The number of NUMA nodes configured in BIOS should be 4 (it should match the number of physical CPU sockets in the server).
Some servers allow increasing the number of NUMA nodes (e.g., to double the number of CPU sockets), which is not suitable for DPOD.
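
As a quick sanity check after changing the BIOS setting (assuming the lscpu utility is available on the server), the number of NUMA nodes reported by the operating system can be inspected with:

lscpu | grep -i 'NUMA node(s)'

The reported value should be 4. A more complete verification is described in the Verify NUMA section below.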

Installing RAM Modules

Use the hardware manufacturer's documentation to install the same amount of RAM for each CPU of the physical server.
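
Before proceeding to the OS-level verification below, one possible way to confirm that the DIMMs are balanced across the CPU sockets (assuming the dmidecode utility is installed and the command is run as root) is:

dmidecode --type memory | grep -e 'Size:' -e 'Locator:'

The Size and Locator lines list each populated DIMM slot and its capacity.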

Verify NUMA

Once NUMA has been enabled in BIOS and RAM modules have been installed, verify the installation using the following command.
Make sure the RAM size of each node is the same and that there are 4 nodes available:

numactl -H | grep -e size -e available

Expected output:

available: 4 nodes (0-3)
node 0 size: 128292 MB
node 1 size: 128994 MB
node 2 size: 129010 MB
node 3 size: 129009 MB

Required Information for NVMe Disks

The following table contains the list of installed disks and additional information that must be gathered in order to create the mount points required for federating the DPOD cell member to the cell environment.
Copy this table, use it during the procedure, and complete the information as you follow the steps below.
The table should have 6 or 9 rows, according to the number of disks installed in your server.

Disk Bay | Disk Serial | Disk OS Path | PCI Slot Number | NUMA Node (CPU #)
---------|-------------|--------------|-----------------|------------------
         |             |              |                 |
         |             |              |                 |
         |             |              |                 |
         |             |              |                 |
         |             |              |                 |
         |             |              |                 |
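For example, based on the sample outputs shown in the following sections, a completed row might read: <disk bay as labeled on the server chassis> | PHLE8XXXXXXC3P2EGN | /dev/nvme0n1 | 0c:00.0 | 1.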

Installing NVMe Disks in the Correct Disk Bays

Use the hardware manufacturer's documentation to find out which disk bay is bound to which of the CPUs. CPUs should be numbered from 0 to 3.

You should install the same number of NVMe disks (2 or 3) for CPUs 1, 2 and 3. CPU 0 should not have any NVMe disks bound to it.

Update table: Write down the disk bay and the disk's serial number by visually observing the disk and the bay where it is installed.

Identifying Disk OS Paths

To list the OS path of each disk, execute the following command.

Update table: Write down the disk OS path (e.g.: /dev/nvme0n1) according to the disk's serial number (e.g.: PHLE8XXXXXXC3P2EGN).

nvme list

Expected output:
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev 
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     PHLE8XXXXXXC3P2EGN   SSDPE2KE032T7L                           1         3.20 TB / 3.20 TB          512 B + 0 B      QDV1LV46
/dev/nvme1n1     PHLE8XXXXXXM3P2EGN   SSDPE2KE032T7L                           1         3.20 TB / 3.20 TB          512 B + 0 B      QDV1LV46
/dev/nvme2n1     PHLE8XXXXXX83P2EGN   SSDPE2KE032T7L                           1         3.20 TB / 3.20 TB          512 B + 0 B      QDV1LV46
/dev/nvme3n1     PHLE8XXXXXXN3P2EGN   SSDPE2KE032T7L                           1         3.20 TB / 3.20 TB          512 B + 0 B      QDV1LV46
/dev/nvme4n1     PHLE8XXXXXX63P2EGN   SSDPE2KE032T7L                           1         3.20 TB / 3.20 TB          512 B + 0 B      QDV1LV46
/dev/nvme5n1     PHLE8XXXXXXJ3P2EGN   SSDPE2KE032T7L                           1         3.20 TB / 3.20 TB          512 B + 0 B      QDV1LV46

Identifying PCI Slot Numbers

To list the PCI slot for each disk OS path, execute the following command.

Update table: Write down the PCI slot (e.g.: 0c:00.0) according to the last part of the disk OS path (e.g.: nvme0n1).

lspci -nn | grep NVM | awk '{print $1}' | xargs -Innn bash -c "printf 'PCI Slot: nnn     '; ls -la /sys/dev/block | grep nnn"

Expected output:
PCI Slot: 0c:00.0     lrwxrwxrwx. 1 root root 0 May 16 10:26 259:2 -> ../../devices/pci0000:07/0000:07:00.0/0000:08:00.0/0000:09:02.0/0000:0c:00.0/nvme/nvme0/nvme0n1
PCI Slot: 0d:00.0     lrwxrwxrwx. 1 root root 0 May 16 10:26 259:5 -> ../../devices/pci0000:07/0000:07:00.0/0000:08:00.0/0000:09:03.0/0000:0d:00.0/nvme/nvme1/nvme1n1
PCI Slot: ad:00.0     lrwxrwxrwx. 1 root root 0 May 16 10:26 259:1 -> ../../devices/pci0000:ac/0000:ac:02.0/0000:ad:00.0/nvme/nvme2/nvme2n1
PCI Slot: ae:00.0     lrwxrwxrwx. 1 root root 0 May 16 10:26 259:0 -> ../../devices/pci0000:ac/0000:ac:03.0/0000:ae:00.0/nvme/nvme3/nvme3n1
PCI Slot: c5:00.0     lrwxrwxrwx. 1 root root 0 May 16 10:26 259:3 -> ../../devices/pci0000:c4/0000:c4:02.0/0000:c5:00.0/nvme/nvme4/nvme4n1
PCI Slot: c6:00.0     lrwxrwxrwx. 1 root root 0 May 16 10:26 259:4 -> ../../devices/pci0000:c4/0000:c4:03.0/0000:c6:00.0/nvme/nvme5/nvme5n1

Tip: You may execute the following command to list the details of all PCI slots with NVMe disks installed in the server:
lspci -nn | grep -i nvme | awk '{print $1}' | xargs -Innn lspci -v -s nnn

Identifying NUMA Nodes

To list the NUMA node of each PCI slot, execute the following command.

Update table: Write down the NUMA node (e.g.: 1) according to the PCI slot (e.g.: 0c:00.0).

lspci -nn | grep -i nvme | awk '{print $1}' | xargs -Innn bash -c "printf 'PCI Slot: nnn'; lspci -v -s nnn | grep NUMA"

Expected output:
PCI Slot: 0c:00.0	Flags: bus master, fast devsel, latency 0, IRQ 45, NUMA node 1
PCI Slot: 0d:00.0	Flags: bus master, fast devsel, latency 0, IRQ 52, NUMA node 1
PCI Slot: ad:00.0	Flags: bus master, fast devsel, latency 0, IRQ 47, NUMA node 2
PCI Slot: ae:00.0	Flags: bus master, fast devsel, latency 0, IRQ 49, NUMA node 2
PCI Slot: c5:00.0	Flags: bus master, fast devsel, latency 0, IRQ 51, NUMA node 3
PCI Slot: c6:00.0	Flags: bus master, fast devsel, latency 0, IRQ 55, NUMA node 3

Verifying Required Information

Your required information table should be complete by now.

Make sure you have gathered information about all the installed NVMe disks, and that NUMA nodes are between 1 and 3 (and do not include NUMA node 0).
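
As an optional cross-check of the completed table, the following sketch gathers the same information in one pass by reading the NVMe controller entries under /sys/class/nvme (serial, device and numa_node are standard Linux sysfs attributes; the output format is illustrative only):

for ctrl in /sys/class/nvme/nvme*; do
    ns=$(ls "$ctrl" | grep -m1 -E 'nvme[0-9]+n[0-9]+$')    # namespace name, e.g. nvme0n1
    serial=$(cat "$ctrl/serial")                           # disk serial number
    pci=$(basename "$(readlink -f "$ctrl/device")")        # PCI address (with domain), e.g. 0000:0c:00.0
    node=$(cat "$ctrl/device/numa_node")                   # NUMA node of the PCI slot
    echo "Serial: $serial   OS Path: /dev/$ns   PCI Slot: $pci   NUMA Node: $node"
done

The values printed should match the Disk Serial, Disk OS Path, PCI Slot Number and NUMA Node columns you have completed manually.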

Verifying NVMe Disks Speed

Execute the following command and verify all NVMe disks have the same speed (e.g.: 8GT/s):

lspci -nn | grep -i nvme | awk '{print $1}' | xargs -Innn bash -c "printf 'PCI Slot: nnn'; lspci -vvv -s nnn | grep LnkSta:"

Expected output:
PCI Slot: 0c:00.0		LnkSta:	Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
PCI Slot: 0d:00.0		LnkSta:	Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
PCI Slot: ad:00.0		LnkSta:	Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
PCI Slot: ae:00.0		LnkSta:	Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
PCI Slot: c5:00.0		LnkSta:	Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
PCI Slot: c6:00.0		LnkSta:	Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
