This topic explains how to experiment with VMware NUMA affinity and Hyper-Threading Technology for Pexip Infinity Conferencing Node VMs, in order to achieve up to 50% additional capacity.

If you are taking advantage of hyperthreading to deploy two vCPUs per physical core (i.e. one per logical thread), you must first enable NUMA affinity; if you don't, the Conferencing Node VM will end up spanning multiple NUMA nodes, resulting in a loss of performance.

Affinity does NOT guarantee or reserve resources; it simply forces a VM to use only the socket you define, so mixing Pexip Conferencing Node VMs that are configured with NUMA affinity together with other VMs on the same server is not recommended.

NUMA affinity is not practical in all data center use cases, as it forces a given VM to run on a certain CPU socket (in this example), but it is very useful for high-density Pexip deployments with dedicated capacity.

This information is aimed at administrators with a strong understanding of VMware, who have very good control of their VM environment, and who understand the consequences of making these changes. Please ensure you have read and implemented our recommendations in Achieving high density deployments with NUMA before you continue.

VMware NUMA affinity for Pexip Conferencing Node VMs should only be used if the following conditions apply:

- The server/blade is used for Pexip Conferencing Node VMs only, and the server will have only one Pexip Conferencing Node VM per CPU socket (or two VMs per server in a dual-socket CPU, e.g. the E5-2600 series used in this example).
- You fully understand what you are doing, and you are happy to revert back to the standard settings, if requested by Pexip support, to investigate any potential issues that may result.

Example server without NUMA affinity - allows for more mobility of VMs

Example server with NUMA affinity - taking advantage of hyperthreading to gain 30-50% more capacity per server

Overview of process

We will configure the two Conferencing Node VMs (in this example, an E5-2600 CPU with two sockets per server) with the advanced VMware parameters described below.

You must also double-check the flag below to ensure that it matches the number of vCPUs in the Conferencing Node; for example, it should be set to 24 if that was the number of vCPUs you assigned.

Note that if you are experiencing different sampling results from multiple nodes on the same host, you should also ensure that Numa.PreferHT = 1 is set (to ensure it operates at the ESXi/socket level).
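As a minimal sketch, assuming numa.nodeAffinity is the only per-VM entry required (the values 0 and 1 are the ones the procedure below refers to), the advanced parameters for the two example VMs would be:

    conf-node_numa0 (locked to the first socket, numa0):
        numa.nodeAffinity = 0

    conf-node_numa1 (locked to the second socket, numa1):
        numa.nodeAffinity = 1

    ESXi host (Advanced System Settings), if needed per the note above:
        Numa.PreferHT = 1

These name/value pairs are entered through the vSphere client as described in the next section, rather than by editing files directly.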
Setting NUMA affinity

Before you start, please consult your local VMware administrator to understand whether this is appropriate in your environment.

- Shut down the Conferencing Node VMs, to allow you to edit their settings.
- Give the Conferencing Node VMs names that indicate that they are locked to a given socket (NUMA node). In the example below the VM names are suffixed by numa0 and numa1:
- Right-click the first Conferencing Node VM in the inventory and select Edit Settings.
- From the VM Options tab, expand the Advanced section and select Edit Configuration:
- At the bottom of the window that appears, enter the following Names and corresponding Values for the first VM, which should be locked to the first socket (numa0) - that is, numa.nodeAffinity with a Value of 0:
- It should now look like this in the bottom of the parameters list:

Now our conf-node_numa0 Virtual Machine is locked to numa0 (the first socket).

- Repeat the above steps for the second node, entering the following data for the second VM, which should be locked to the second socket (numa1) - that is, numa.nodeAffinity with a Value of 1:

Now our conf-node_numa1 Virtual Machine is locked to numa1 (the second socket).

It is very important that you actually set numa.nodeAffinity to 1 and not 0 for the second node. If both are set to 0, you will effectively only use numa node 0, and the two nodes will fight for those resources while leaving numa node 1 unused. (Using this may result in having two nodes both locked to a single socket, meaning both will be attempting to access the same processor, with neither using the other processor.)

You must now increase the number of vCPUs assigned to your Conferencing Nodes, to make use of the hyperthreaded cores.

Count logical processors

First you must check how many logical processors each CPU has. (Hyperthreading must always be enabled, and is generally enabled by default.) In the example screenshot below, the E5-2680 v3 CPU has 12 physical cores per CPU socket, and there are two CPUs on the server. With hyperthreading, each physical core has 2 logical processors, so each CPU has 24 logical processors (giving us a total of 48 with both CPUs). In this case 2 x 12 = 24 is the "magic number" we are looking for with our Conferencing Nodes - which is double the number of Cores per Socket.
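If you prefer to confirm these counts from the command line rather than from a screenshot, one option (assuming you have SSH or ESXi Shell access to the host) is the standard esxcli hardware query; the figures below are what the dual E5-2680 v3 example server would report:

    esxcli hardware cpu global get

    CPU Packages: 2           (two CPU sockets)
    CPU Cores: 24             (12 physical cores per socket x 2 sockets)
    CPU Threads: 48           (2 logical processors per core)
    Hyperthreading Active: true
    (other fields omitted)

The per-node vCPU target is then CPU Threads divided by CPU Packages: 48 / 2 = 24, matching the 2 x 12 "magic number" above.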