Overview

Physical DPOD Cell Members that are required to process a high transactions per second (TPS) load include 4 CPU sockets and NVMe disks for maximizing server I/O throughput.

DPOD uses NUMA (Non-Uniform Memory Access) technology to bind each of the Store's logical nodes to specific physical processors, disks and memory in a way that minimizes the latency of persisting data to disk.
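Binding of this kind is commonly done on Linux with the `numactl` utility. The following is an illustrative sketch only (the process path and node numbers are hypothetical, not DPOD's actual startup command), showing how a process can be restricted to the CPUs and memory of a single NUMA node:

```shell
# Hypothetical example: run a process pinned to NUMA node 1, so its CPU
# cycles and memory allocations stay local to that socket.
# DPOD performs the equivalent binding internally for each Store logical node.
numactl --cpunodebind=1 --membind=1 /path/to/store-node-process

# Display the NUMA policy and allowed nodes of the current shell
numactl --show
```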

Note: If the cell member server does not have 4 CPU sockets or does not have NVMe disks, do not perform the steps in this document.

Enabling NUMA in BIOS

Make sure NUMA is enabled in the physical server's BIOS. You may need to consult the hardware manufacturer's documentation on how to achieve that.

Note: The number of NUMA nodes configured in BIOS should be 4 (it should match the number of physical CPU sockets in the server).
Some servers allow increasing the number of NUMA nodes (e.g. to double the number of CPU sockets), which is not suitable for DPOD.

Installing RAM Modules

Use the hardware manufacturer's documentation to install the same amount of RAM for each of the physical server's CPUs.
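Whether the RAM ended up evenly distributed across the sockets can be verified per NUMA node (a sketch using sysfs; `numactl --hardware` prints the same information if the `numactl` package is installed):

```shell
# Total memory attached to each NUMA node - the values should be roughly equal
grep MemTotal /sys/devices/system/node/node*/meminfo
```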

Verify NUMA

Once NUMA has been enabled in BIOS and RAM modules have been installed, verify the installation using the following command:

Code Block
/app/scripts/gather_nvme_info.sh


Expected output:

Code Block
2023-12-19_15-16-52: INFO  Gathers NVME Information
2023-12-19_15-16-52: INFO  ========================
2023-12-19_15-16-52: INFO  Log file is /tmp/gatherNvmeInfo_2023-12-19_15-16-52.log
2023-12-19_15-16-52: INFO  Identifying disk OS paths...
2023-12-19_15-16-52: INFO  Identifying disk slot numbers...
2023-12-19_15-16-55: INFO  Creating output...

  Serial Number      | Disk OS Path | Disk Speed   | PCI Slot Number | NUMA Code (Cpu #)
  ------------------------------------------------------------------------------------
  PHLE8221029C3P2EGN | /dev/nvme0n1 | 8GT/s        | 5d:00.0         | 1               
  PHLE822100SM3P2EGN | /dev/nvme1n1 | 8GT/s        | 5e:00.0         | 1               
  PHLE822100X83P2EGN | /dev/nvme2n1 | 8GT/s        | ad:00.0         | 2               
  PHLE8221027N3P2EGN | /dev/nvme3n1 | 8GT/s        | ae:00.0         | 2               
  PHLE822100X63P2EGN | /dev/nvme4n1 | 8GT/s        | c5:00.0         | 3               
  PHLE822102CJ3P2EGN | /dev/nvme5n1 | 8GT/s        | c6:00.0         | 3               

2023-12-19_15-16-55: INFO  Output file: /tmp/gatherNvmeInfo_2023-12-19_15-16-52.csv
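The NUMA column of the script's output can also be cross-checked per disk through sysfs (assuming the standard Linux NVMe driver; the controller names follow the `/dev/nvmeXn1` devices in the example output above):

```shell
# Print the NUMA node of the PCI device behind each NVMe controller.
# A value of -1 means the platform did not report a locality for the device.
for n in /sys/class/nvme/nvme*; do
    printf '%s: NUMA node %s\n' "$(basename "$n")" "$(cat "$n/device/numa_node")"
done
```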