...
Note: The number of NUMA nodes configured in BIOS should be 4, matching the number of physical CPU sockets in the server.
Some servers allow increasing the number of NUMA nodes (e.g., to double the number of CPU sockets), which is not suitable for DPOD.
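As a quick sanity check after changing the BIOS setting, you can confirm the socket and NUMA node counts from the operating system. The following is a minimal sketch, assuming the lscpu utility (part of util-linux) is available; the exact spacing of the output varies by distribution:
Code Block
# Show the number of physical CPU sockets and NUMA nodes reported by the OS
lscpu | grep -E 'Socket\(s\)|NUMA node\(s\)'

Example output for a 4-socket server with NUMA enabled:
Socket(s):           4
NUMA node(s):        4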
Installing RAM Modules
Use the hardware manufacturer's documentation to install the same amount of RAM for each CPU of the physical server.
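To verify that the DIMMs are balanced across the CPUs, you can list the installed memory modules and their slot locations from the operating system. This is a minimal sketch, assuming dmidecode is installed and run as root; the slot naming (the Locator fields) varies by hardware vendor:
Code Block
# List the size and slot location of every memory module reported by the BIOS
dmidecode -t memory | grep -e 'Size:' -e 'Locator:'

Each populated slot should report the same size, and the populated slots should be evenly distributed across the CPUs.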
Verify NUMA
Once NUMA has been enabled in BIOS and the RAM modules have been installed, verify that the number of NUMA nodes matches the number of CPU sockets in the server using the following command.
Make sure the RAM size of each node is the same and that 4 nodes are available:
Code Block
numactl -sH | grep -e cpubind -e size -e available

Expected output (for a cell member with 4 CPU sockets):
cpubind: 0 1 2 3
available: 4 nodes (0-3)
node 0 size: 128292 MB
node 1 size: 128994 MB
node 2 size: 129010 MB
node 3 size: 129009 MB
Required Information for NVMe Disks
The following table contains the list of installed disks and additional information that must be gathered in order to create the mount points required for federating the DPOD cell member to the cell environment.
Please copy this table and complete the information as you follow the procedure.
The table should have 6 or 9 rows, according to the number of disks installed in your server.
...
Code Block
lspci -nn | grep NVM | awk '{print $1}' | xargs -Innn bash -c "printf 'PCI Slot: nnn '; ls -la /sys/dev/block | grep nnn"

Expected output:
PCI Slot: 0c:00.0 lrwxrwxrwx. 1 root root 0 May 16 10:26 259:2 -> ../../devices/pci0000:07/0000:07:00.0/0000:08:00.0/0000:09:02.0/0000:0c:00.0/nvme/nvme0/nvme0n1
PCI Slot: 0d:00.0 lrwxrwxrwx. 1 root root 0 May 16 10:26 259:5 -> ../../devices/pci0000:07/0000:07:00.0/0000:08:00.0/0000:09:03.0/0000:0d:00.0/nvme/nvme1/nvme1n1
PCI Slot: ad:00.0 lrwxrwxrwx. 1 root root 0 May 16 10:26 259:1 -> ../../devices/pci0000:ac/0000:ac:02.0/0000:ad:00.0/nvme/nvme2/nvme2n1
PCI Slot: ae:00.0 lrwxrwxrwx. 1 root root 0 May 16 10:26 259:0 -> ../../devices/pci0000:ac/0000:ac:03.0/0000:ae:00.0/nvme/nvme3/nvme3n1
PCI Slot: c5:00.0 lrwxrwxrwx. 1 root root 0 May 16 10:26 259:3 -> ../../devices/pci0000:c4/0000:c4:02.0/0000:c5:00.0/nvme/nvme4/nvme4n1
PCI Slot: c6:00.0 lrwxrwxrwx. 1 root root 0 May 16 10:26 259:4 -> ../../devices/pci0000:c4/0000:c4:03.0/0000:c6:00.0/nvme/nvme5/nvme5n1

Tip: you may execute the following command to list the details of all PCI slots with NVMe disks installed in the server:
lspci -nn | grep -i nvme | awk '{print $1}' | xargs -Innn lspci -v -s nnn
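If you need to correlate the device names above with the physical disks (for example, by model or serial number while completing the table), the nvme-cli package provides a convenient listing. This is a minimal sketch, assuming nvme-cli is installed; it is not part of the documented procedure:
Code Block
# List all NVMe namespaces with their serial number, model, and capacity (run as root)
nvme list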
Identifying NUMA Nodes
To list the NUMA node of each PCI slot, execute the following command.
...