Microsoft Azure AZ-800 — Section 11: Manage Hyper-V and guest virtual machines Part 4

88. Configure VM CPU Groups

Now, I want to take some time and explain a concept to you known as virtual machine CPU groups.

Now, this feature was originally introduced in Windows Server 2016, and the goal of virtual machine CPU groups is to allow us to better control the allocation of our virtual CPUs, our virtual central processing units.

So, when you think about a server that's going to be hosting virtual machines, you've got the physical motherboard and the physical processors, and then you've got cores. And those cores are divided up into what are known as virtual CPUs, which are the logical processors that are ultimately utilized by your virtual machines. And what we're trying to accomplish is to control exactly how much processing power each one of our virtual machines is actually going to be allocated.

So VM CPU groups is a feature that's going to assist us with that. Now, to understand VM CPU groups, the first step, of course, is to understand the basics of the processing models that are used in general.

So let's take a look at that now. I'm going to draw this out a little bit to try to help you visualize it.

So, when we think about our servers and how they deal with processing power, we kind of go back to the basics of how processing works. Originally, you know, we would set up a server. Let's say this is going to represent a server. And then of course, that server would have a motherboard. All right, inside of it is going to be your motherboard, and then on the motherboard we would have processors, right, CPUs. And so you would originally purchase motherboards that could support a certain number of CPU sockets. Right? Maybe we've got four that we're going to put here, and this would be like a quad-processor motherboard. And we had this concept known as SMP, which was symmetric multiprocessing. And so with symmetric multiprocessing, your operating system could essentially utilize the multiple processors and process information simultaneously, working with those different processors. In other words, we could divide up the instructions that go into the processors. These instructions are called threads. A thread is a unit of work being processed through your processor.

Now, as time went on, one of the interesting things that occurred is, if you kind of zoom in, let's say you're zooming in on a CPU.

OK. Here is a CPU. You've seen what they were able to do in the industry: they were able to create what is known as multicore. Right. And so with multicore, your processor is not just a single unit anymore. It's really like having multiple units. And I'm just going to say we have four cores, although processors can have more than four cores. I'm just going to put four here.

So each one of these is going to represent a core. All right. And I'll just kind of highlight that in this kind of yellow-gold color just because, you know, processors are usually gold.

So anyway, we have multiple cores now. All right.

So, we'll say we've got a core here, and then a core, and a core, and a core. All right, now, depending upon the hypervisor you're using (of course, we're learning Hyper-V), you can support a certain number of what are called virtual processors. And so virtual processors allow you to divide up your processing power across your virtual machines in a certain way.

Now, in Hyper-V, we generally plan around what's known as eight virtual processors per core.

OK.

So, in other words, you have eight virtual CPUs, right? And that would correspond with this core. Then you would have eight for this core, eight for this core, and eight for this final core.

OK, giving you a grand total of, right, eight times four. All right.

So, it's going to give you 32 virtual CPUs at your disposal on, you know, just that single CPU. And then you can multiply that times four because you've got four CPUs. All right.
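Just to put quick numbers to that example layout (and again, the eight-per-core figure is the planning ratio I'm using here, not a hard limit):

    8 virtual processors per core x 4 cores = 32 virtual CPUs on one physical processor
    32 virtual CPUs per processor x 4 processor sockets = 128 virtual CPUs across the whole host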

So this gives us a lot of power. And with the help of CPU groups, I'm able to group all of that together, and then I can assign groups of these virtual CPUs over to my virtual machines, which helps me better control the amount of processing power each one of my virtual machines has. All right.

Now, on the Hyper-V side of this, let me hop over into Hyper-V and I'll show you a little bit about where your virtual CPUs are managed within Hyper-V.

OK, so here you'll notice I've got a couple of virtual machines running, and if I want to adjust the virtual processor count, I can right-click a virtual machine, click Settings, and then go to where it says Processor. Unfortunately, I actually can't manage this while the virtual machine is running, so I would need to turn off my virtual machine. I'm actually just going to use one of these that's currently off. But if you want to play around with this yourself, you might have to shut your virtual machine down in order to do this. So I'm going to go here to Settings, and from there I can click on Processor, and you'll notice it'll allow me to adjust it.
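Now, as a side note, if you'd rather make this change from PowerShell instead of the Hyper-V Manager console, the built-in Set-VMProcessor cmdlet handles the same setting. Here's a minimal sketch; the VM name "DC1" is just a placeholder for whatever machine you're working with, and the VM still needs to be powered off:

    # Shut the VM down first; the processor count can't be changed while it's running
    Stop-VM -Name "DC1"

    # Assign four virtual processors to the VM
    Set-VMProcessor -VMName "DC1" -Count 4

    # Confirm the new setting took
    Get-VMProcessor -VMName "DC1" | Select-Object VMName, Count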

So watch what happens when I try to up this virtual processor count. As soon as I get to the number nine, I get this little message down here at the bottom that says the virtual machine is configured with the following: two sockets, one NUMA node per socket, a certain number of virtual processors per NUMA node, and a certain amount of memory per NUMA node, and they tell you the amounts. Now, first off, what is NUMA? NUMA is non-uniform memory access, and the idea of NUMA is that, if your server environment supports it, your processor sockets can have dedicated RAM for each processor.

So, in the old way of doing things without NUMA, essentially all your processor sockets tied into the same pool of memory. And of course, you had a bus, which is basically the wires on the motherboard that connect your CPUs to your memory, and unfortunately, they were all traveling the same wires. With NUMA, if your motherboard and your processors support it, it divides the buses up: different wires on the motherboard are associated with different processor sockets, and it divides the memory up for each processor socket. It still means that your processors have access to all the memory, but each processor socket gets a designated amount of memory that is connected nearest to that socket, and essentially you get better performance that way. Each processor socket gets its own little roadway, if you will, to its area of memory, and that's what your bus is in regard to NUMA. And that's the common way that people utilize NUMA. Of course, you do have to have an actual server that supports it; you've got to have the hardware. If you're just doing this on a client machine like I am for demonstration purposes and you don't have a physical server to demonstrate this on, it's a little hard to configure NUMA, but there isn't really much configuration that's going to happen here anyway for that.
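If you're curious what NUMA topology your own Hyper-V host is exposing, there are a couple of built-in cmdlets that will show you; on a small client machine like mine you'll typically just see a single node. A quick sketch:

    # List the NUMA nodes the Hyper-V host sees, along with their processors and memory
    Get-VMHostNumaNode

    # Check whether NUMA spanning is enabled on the host
    Get-VMHost | Select-Object NumaSpanningEnabled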

So as you can see, it's telling me that now that I've gone up to nine, I would have to have two sockets, and as I continue to go on up, you'll notice it's now up to three. And you'll notice I can go all the way up to 64, with eight sockets here. All right. And then from there, I can set my virtual machine reserve percentage, which you can also use to control the balance of resources.

So there's the virtual machine reserve, which is a percentage of CPU guaranteed to the machine, and a virtual machine limit, which caps the maximum you want it to be able to use. And then you also get a relative weight. The relative weight is a priority.

So you can prioritize the CPU processing of this virtual machine.

So, if it's fighting another virtual machine for the processing load, that's what your weight is going to help with.
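And those three settings, the reserve, the limit, and the relative weight, can all be scripted through Set-VMProcessor as well if you'd rather not click through the GUI. A minimal sketch, again using a placeholder VM name:

    # Reserve 25% for this VM, cap it at 75%, and raise its scheduling priority
    # (RelativeWeight defaults to 100; higher numbers win when VMs compete for CPU)
    Set-VMProcessor -VMName "DC1" -Reserve 25 -Maximum 75 -RelativeWeight 200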

OK. All right.

So you can add, you know, your virtual CPUs, divide that up, and dedicate as many virtual CPUs as you want. And that's essentially going to give the machine the ability to do more of what's called multithreading, which is allowing your virtual machines to basically process more threads at a time; again, a thread being a unit of work that's going through the processor at any given time.

So, at that point, I can click OK, and if I go back and take a look at that, you should notice that it stayed, and I didn't get any kind of error or anything like that.

So, it did save. And so at that point, I want to talk with you now about how to actually create what are known as CPU groups. Let me show you that.

So here I am on Google, and Microsoft provides us with documentation on how to create CPU groups. It is going to be done through the command prompt, but I want to show you this article that basically walks us through, step by step, how to do it.

So, if I go right here to Google, I'm just going to type CPU groups and the word Hyper-V in there, and then you'll notice that Microsoft provides this little article right here, Virtual machine resource controls. If I click on that, this article tells you all about CPU groups and the purpose of CPU groups. Notably, they're also going to tell you that you need to download a tool that's going to let you work with CPU groups, and you can download that tool right here. And so, we can download that tool, and then I can put it over on my C drive. What I generally do is just copy it over to my C drive, and then I can run the command from the C drive.

OK. In this case, they also show some demonstrations of this. If we scroll down a little bit, they put it in a folder called VM/tools, but you could do the same thing if you want.

So, if you want to see what your CPU topology currently is, you can run this command here, CpuGroups.exe GetCpuTopology. And then, if you want to see whether you have any existing groups, which you won't by default, you can run this command here, CpuGroups.exe GetGroups. And then, if you want to create a CPU group, they have an example of how to create one right here. Ultimately, when you create a CPU group, the group will be given a GUID, which they call the Group ID, and that's what the Group ID would be. Essentially, for each group you would specify a one, two, three, or four at the end of it. And then you can specify your logical processor indexes. So when you run that topology command up above, you can see how many logical processors you have, and then you can group those together by running this command right here. All right.
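Just to pull that sequence together in one place, here's roughly what it looks like when you run the tool, assuming you copied cpugroups.exe to the root of your C drive like I do. The GUID and the processor indexes below are example values only, and the exact switch syntax comes from that Microsoft article, so double-check it against the version of the tool you download:

    # See how many logical processors the host has and how they're laid out
    C:\cpugroups.exe GetCpuTopology

    # List any existing CPU groups (there won't be any by default)
    C:\cpugroups.exe GetGroups

    # Create a CPU group out of a set of logical processors (example GUID and indexes)
    C:\cpugroups.exe CreateGroup /GroupId:36AB08CB-3A76-4B38-992E-000000000001 /GroupAffinity:0,1,2,3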

Now, if you have the hardware to do this, I encourage you to give it a shot. If you've got access to a server with multiple processors where you can actually do a lot of this, you can try this out. The main thing to understand here, if you take the exam, is that you need to know what a CPU group is, and you need to be aware of the CpuGroups command; that is the command that's going to let you do it.

OK. But I encourage you to kind of read through this article, and if you want to try this out on your own machine, you can go and try some of these commands.

89. Understanding hypervisor scheduling types

I now want to help you understand the concept of hypervisor scheduling types.

So what are hypervisor scheduling types? The hypervisor has the ability to modify the way your virtual machines will handle the processing of data.

Now, of course, this is known as a scheduling type, also known as a scheduling mode.

So scheduling type and scheduling mode are both the same thing. Now, in the past, we didn't really have this. We just had what was known as the classic scheduler, or they didn't even really call it the classic scheduler back in the day; they just called it the scheduler. And it was just the way that our virtual machines kind of divided up the workload when we were trying to handle the processing load of our virtual machines. However, what changed is that when Server 2016 originally came out, they added the ability to choose what are known as scheduler types. And so with scheduler types, we are able to adjust which scheduler type we want based on what type of needs we've got, and it can better handle the processing, or the dividing of the processing of data, across our different virtual machines.

OK, so let's kind of break this down a little bit more and understand the concept here.

So first off, we know that our servers that are going to host virtual machines have processors, and those processors have cores. Those processor cores can be broken up into virtual processors, virtual CPUs, and the virtual CPUs become what are called logical processors on our virtual machines.
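If you want to see that relationship on your own host, a couple of built-in Hyper-V cmdlets will show it. Just a quick sketch:

    # How many logical processors the Hyper-V host has available to hand out
    Get-VMHost | Select-Object LogicalProcessorCount

    # How many virtual processors each VM on this host has been assigned
    Get-VM | Get-VMProcessor | Select-Object VMName, Count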

OK. And so with Hyper-V, the way that it manages things is it has what is called a partition, a virtual machine partition. And when you think of that word partition, don't think of, like, a drive, a D drive, anything like that, like a disk partition. This is a partition that is essentially like a pool of memory and processing power that the virtual machines can be given. And so you have this virtual machine partition that your different virtual machines are all sharing.

OK. And you know, you start out with just what's known as a single partition that they're all utilizing at the same time. But when you start working with scheduling types, scheduling types can create different partitions that can issue resources over to your virtual machines. You've also got what's called the root partition. The root partition is the underlying partition that talks directly to your physical hardware itself, and Microsoft recommends that you don't actually associate applications or any type of virtual machine workload directly with the root. Even though it is possible to do that, they don't recommend it. They recommend that you utilize the actual scheduling types that are available inside Hyper-V.

Now again, kind of breaking down virtual processors: a virtual processor is associated with your core, and you have what's called a one-to-one mapping with your logical processors, which are what your virtual machines actually have.

OK, so your logical processors are going to essentially be part of the guest operating systems themselves. The guest operating systems are going to see these logical processors as their physical processors.

OK, your guest virtual processors can also be scheduled to run in association with NUMA, non-uniform memory access. NUMA allows our virtual machines to interact with certain processors, and those processors get to interact with certain memory on the motherboard of the computer. And the reason this is good is because, in the old way of doing things without NUMA, your CPU cores all associated with the same pool of RAM, and they were all going over the same bus, which is the same wires that are on your motherboard. But with NUMA, you have a multiple-bus architecture on your motherboard that allows your virtual machines to interact with the processors, and the processors have separate RAM that they can use. Keep in mind, the processors can still use all the RAM on the motherboard, but there will be a certain area of RAM that they talk to directly themselves, and this gives you better performance out of your virtual machines.

Now, as far as hypervisor scheduler types go, there are three different scheduler types, or again, scheduler modes, that you can choose from: the classic scheduler, the core scheduler, and the root scheduler. The classic scheduler is the one that we've had with Hyper-V since the day it came out.

OK. Years ago.

So this is just the standard way of handling it, and to be honest with you, in my opinion, and even Microsoft says this, ninety-something percent of the time the classic scheduler is going to be fine. This is what you're going to leave it set to; you're not going to change anything. This is the default. You're just going to leave the hypervisor using the classic scheduler. It's just going to divide up the processing between your different virtual machines, and it becomes sort of first come, first served for processing unless you adjust the weight settings in Hyper-V. The next scheduling type is called the core scheduler.

Now, the core scheduler is one of the new ones; it was introduced originally with Windows Server 2016 and Windows 10 version 1607 and is available in all the higher-level operating systems as well. The core scheduler is interesting because it actually provides you with a little security boundary around your guest operating systems, which is better suited for doing what's called sandboxing. It also lets your virtual machines take advantage of what's known as SMT, which is simultaneous multithreading, which allows multiple threads to be divided amongst your virtual CPUs. And so this is good if you essentially need to pair multiple virtual processors together, maybe for load-balancing purposes; you can get slightly better performance out of doing that.

Now, the last one is called the root scheduler, and the root scheduler is another one of the newer ones. It actually originated in Windows 10 version 1803 and is available in all the later operating systems as well.

So this utilizes the root partition, and what this is actually geared towards is sandboxing applications. It's used with Microsoft's Windows Defender Application Guard, WDAG, which essentially allows applications to run in what's called a virtual sandbox so that they are better protected from malware or any type of virus infection or anything like that getting into those applications.

So this is actually kind of interesting because you can turn it on for the whole entire system if you want, though it can also be utilized with just individual applications. And so really, this one's not as geared towards utilizing it on a bunch of VMs; you can utilize it in conjunction with VMs, but it's really geared towards being utilized with applications more than anything else. And that's the thing I'd like you to remember if you're taking the exam: the root scheduler is geared toward specific applications, to provide a sandbox, a virtual sandbox, for protecting those applications.

Now, how do we actually implement this? If you want to turn this on for the entire system, there's actually a command that you're going to run. The command is bcdedit /set hypervisorschedulertype, and where it says type, you're just going to put either classic, core, or root. Then you're going to run that, reboot the computer, and at that point your hypervisor is going to be set to that mode. All right. And that is how you're going to do that. That is a command you're going to want to remember, in case the exam gives you a test question on what the command is that does this. That is the command, and that's what you're going to want to know for hypervisor scheduling types.
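So, putting that into concrete terms, here's what it looks like from an elevated prompt. I'm choosing core here as the example value; you'd swap in classic or root depending on what you want:

    # Set the hypervisor scheduler type (classic, core, or root)
    bcdedit /set hypervisorschedulertype core

    # Reboot so the hypervisor picks up the new scheduler type
    Restart-Computer

    # After the reboot, the hypervisor logs which scheduler it launched with (event ID 2)
    Get-WinEvent -FilterHashTable @{ProviderName="Microsoft-Windows-Hyper-V-Hypervisor"; ID=2} -MaxEvents 1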