This feature has been a requirement for independent hardware vendors (IHVs) to enter the server network adapter market, but until now network adapter teaming has not been included in Windows Server operating systems. NIC Teaming enables you to team network adapters of different speeds and from different manufacturers.
It is recommended that you do not team network adapters of different speeds. Although this is a supported configuration, if the higher-speed adapter fails, all traffic fails over to the slower adapter and performance is severely degraded.
Network adapter teaming in Windows Server also works within a virtual machine. This allows a virtual machine to have virtual network adapters that are connected to more than one Hyper-V switch and still have connectivity even if the network adapter under one of those switches gets disconnected. This matters for SR-IOV traffic in particular: SR-IOV traffic bypasses the Hyper-V switch, so it cannot be protected by a team that is under a Hyper-V switch. With the virtual machine teaming option, an administrator can set up two Hyper-V switches, each connected to its own SR-IOV-capable network adapter.
At that point, the virtual machine can install a virtual function from each SR-IOV network adapter and team them inside the guest. Then, in the event of a network adapter disconnect, the virtual machine can fail over from the primary virtual function to the backup virtual function.
Alternatively, the virtual machine might have a virtual function from one network adapter and a non-virtual-function network adapter connected to the other switch. If the network adapter that is associated with the virtual function gets disconnected, the traffic can fail over to the other switch without loss of connectivity.
NIC Teaming is compatible with all networking capabilities in Windows Server with three exceptions: SR-IOV, RDMA, and TCP Chimney Offload. For SR-IOV and RDMA, data is delivered directly to the network adapter without passing through the networking stack; therefore, it is not possible for the network adapter team to look at or redirect the data to another path in the team. Network adapter teaming requires the presence of at least one Ethernet network adapter, which can be used for separation of traffic using VLANs.
All modes that provide fault protection through failover require at least two Ethernet network adapters. The Windows Server implementation supports up to 32 network adapters in a team.
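As a minimal sketch (team and adapter names are assumptions; list yours with Get-NetAdapter), a switch-independent team can be created with the built-in NIC Teaming cmdlets:

    # Create a switch-independent team from two physical adapters
    New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet 1", "Ethernet 2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # Verify the team and its member adapters
    Get-NetLbfoTeam -Name "Team1"
    Get-NetLbfoTeamMember -Team "Team1"

The HyperVPort load-balancing algorithm is one reasonable choice when the team carries Hyper-V switch traffic; TransportPorts or IPAddresses may suit other workloads.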
For more information on Windows Server load balancing and failover, please see Load Balancing and Failover Overview.

In this design pattern, the Hyper-V cluster providing the compute component for the cloud infrastructure is separate from the storage cluster.
When you separate the compute from the storage cluster, you have the opportunity to scale compute capacity and storage capacity separately. This provides you more flexibility when designing your scale units for compute and storage.
In this pattern, two 10 GbE adapters on the compute cluster are teamed to support management, cluster, live migration, and storage traffic. The storage configuration on the storage cluster can connect to any type of block storage solution.

A Hyper-V host failover cluster is a group of independent servers that work together to increase the availability of applications and services.
The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another node begins to provide service, a process known as failover. In the case of a planned migration (called live migration), users experience no perceptible service interruption.
The host servers are one of the critical components of a dynamic, virtual infrastructure. Consolidation of multiple workloads onto the host servers requires that those servers be highly available. Windows Server provides advances in failover clustering that enable high availability and live migration of virtual machines between physical nodes, and several of these new features will factor into your private cloud design decisions for host cluster members.
It is important to note that a private cloud infrastructure does not require failover clustering. Failover clustering provides high availability for stateful applications that are not specifically designed to work on and support cloud capabilities; those cloud capabilities are targeted at the stateless applications of the future. Prior to Windows Server 2012, live migration of virtual machines from one host to another required failover clustering; thus a failover cluster defined the scope of virtual machine and workload mobility.
However, Windows Server 2012 introduces what is known as the "shared nothing" live migration feature. Shared nothing live migration enables the cloud service provider to move virtual machines from one Hyper-V host to another without requiring failover clustering or shared storage.
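As a rough sketch of what this looks like with the Hyper-V PowerShell module (the host name, VM name, and destination path are assumptions), a shared nothing live migration moves the virtual machine and its storage in one operation:

    # Run on each host: allow live migrations
    # (Kerberos authentication assumes constrained delegation is configured)
    Enable-VMMigration
    Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

    # Move the VM and its virtual disks to another stand-alone host
    Move-VM -Name "VM01" -DestinationHost "HV02" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"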
With only a network cable (or even a wireless connection), the virtual machine and its virtual disk and configuration files can be moved from one machine to another. Private cloud service providers could take advantage of this capability to run virtual machines hosting stateless workloads and place virtual machines on specific hosts based on their fabric controller of choice. This design guide focuses on implementing failover clustering in the private cloud infrastructure, because the Windows Server failover clustering feature adds many vital capabilities for the management, monitoring, and control of the overall cloud solution and is tightly integrated with Windows Server Hyper-V technologies.
In addition, we believe that, for at least the near term, stateful applications will represent the most common workload running in a private cloud. In a Microsoft Private Cloud infrastructure, we recommend two standard design patterns.
The server topology should consist of at least two Hyper-V host clusters. The first should have at least two nodes, and will be referred to as the management cluster. The second, and any additional clusters, will be referred to as fabric host clusters. The second cluster might be the compute cluster, and the third cluster might be the storage cluster. Alternatively, the second cluster can be a combined compute and storage cluster that is either symmetric or asymmetric.
In the current context, a symmetric cluster is a cluster where each member of the cluster is directly attached to storage. Conversely, an asymmetric cluster has some members attached to storage and others unattached to storage. Typically, the unattached cluster members are performing the compute function (actually running the virtual machines), and the attached members of the cluster are acting as scale-out file servers that host the virtual machine files for the compute nodes.
In some cases, such as smaller-scale scenarios or specialized solutions, the management and fabric clusters can be consolidated onto the fabric host cluster. Special care has to be taken in that case to ensure resource availability for the virtual machines that host the various parts of the management stack. There are significant security issues with this design, so assiduous attention to traditional and cloud-based security measures is mandatory. Each host cluster can contain up to 64 nodes.
However, for file server clusters running the Windows Server scale-out file server role that host the VHDX files used by the compute cluster, there is an informal 8-node limitation. (The compute cluster can host its own storage, or it can use SMB 3.0 file shares presented by the storage cluster.) The informal 8-node limitation is based on what has been tested as of the writing of this document. You can exceed the 8-node limitation, and Customer Support Services will work with you to identify and resolve issues should they arise.
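As an illustrative sketch (the share name, path, and computer accounts are assumptions), a file share for virtual machine storage might be created on the scale-out file server like this, with matching NTFS permissions applied separately:

    # Create an SMB 3.0 share for VM files; the Hyper-V host computer
    # accounts and the cluster account need Full Control
    New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMStore" `
        -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$", "CONTOSO\HVCluster$"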
A variety of host cluster traffic profiles, or types, are used in a Hyper-V failover cluster. The network requirements for these traffic types enable high availability and high performance. The Microsoft cloud infrastructure configurations support the following Ethernet traffic profiles. Management network traffic. A management network is required so that hosts can be managed without competing with guest and other infrastructure traffic for bandwidth.
The management network provides a degree of separation for security and ease-of-management purposes. This network is used for remote administration of the host, communication with management systems (System Center agents), and other administrative tasks.
Storage network traffic. For block storage connections such as iSCSI or Fibre Channel, an MPIO configuration with two independent physical ports is required. In the case of SMB 3.0 storage, redundancy is provided by SMB Multichannel instead. However, storage connectivity failures and other non-failure scenarios sometimes prevent a given node from communicating directly with the storage device.
Live migration traffic. During live migration, the contents of the memory of the virtual machine running on the source node need to be transferred to the destination node over a LAN connection. Large virtual machines can consume many gigabytes of memory that need to be transferred over the network. To provide a high-speed transfer, a dedicated, redundant, 10 Gbps live migration network is required.
This significantly reduces the time required to evacuate the virtual machines off a host with zero downtime during maintenance or Windows updates. The time it takes to complete an evacuation of a cluster member depends on the total amount of memory consumed by running virtual machines on that system and the amount of bandwidth available on the Live Migration network.
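As a small sketch (the subnet and migration limit are assumptions), the dedicated live migration network can be configured on each host with the Hyper-V cmdlets:

    # Allow incoming live migrations and restrict them to the
    # dedicated live migration subnet
    Enable-VMMigration
    Add-VMMigrationNetwork "10.0.3.0/24"
    Set-VMHost -MaximumVirtualMachineMigrations 4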
Tenant traffic. Tenant traffic is dedicated to virtual machine LAN traffic. Tenant traffic can use two or more 1 GbE or 10 GbE networks with network adapter teaming, or virtual networks created from shared network adapters.
You can implement one or more dedicated virtual machine networks. The amount of bandwidth required on the tenant network may be less than what you require on any of the infrastructure networks, depending on the type of workloads you expect to support on your private cloud.
One way to determine the bandwidth to make available on your tenant network is to define your network service classes and then calculate the total bandwidth required to meet the network bandwidth SLA of each service class for all virtual machines running on the host.
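As a simple worked example (the service class names, per-VM bandwidth figures, and VM counts are all assumptions), the calculation can be sketched in PowerShell:

    # Per-VM bandwidth SLA, in Mbps, for each assumed service class
    $classMbps = @{ Bronze = 50; Silver = 100; Gold = 250 }
    # Number of VMs of each class expected on one host
    $vmCount   = @{ Bronze = 40; Silver = 20; Gold = 8 }

    # Sum the per-class totals to size the tenant network
    $totalMbps = ($classMbps.Keys |
        ForEach-Object { $classMbps[$_] * $vmCount[$_] } |
        Measure-Object -Sum).Sum
    "Tenant network needs about $totalMbps Mbps"   # 2000 + 2000 + 2000 = 6000 Mbps, or 6 Gbps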
When you look at each of the traffic profiles presented here, you can place each of them into one of two categories: cloud infrastructure traffic and tenant traffic. Tenant traffic is traffic to and from the virtual machines running within the private cloud infrastructure. There are some security considerations that you need to address which extend past the isolation methods discussed so far. During a live migration, tenant data, by default, moves unencrypted over the Live Migration network.
There is a reasonable chance that this data contains private information that might be of interest to intruders. If the physical or logical infrastructure of the Live Migration network were to be compromised, the Live Migration traffic would be accessible to the intruder in unencrypted form. Because of this, we recommend that you use IPsec to secure the connections between hosts on the Live Migration network.
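A minimal sketch with the built-in NetSecurity cmdlets, assuming a dedicated 10.0.3.0/24 live migration subnet and run on each host (the authentication method falls back to the system default and should be adjusted to match your environment):

    # Require IPsec protection for traffic between hosts on the
    # live migration network
    New-NetIPsecRule -DisplayName "Protect live migration traffic" `
        -LocalAddress 10.0.3.0/24 -RemoteAddress 10.0.3.0/24 `
        -InboundSecurity Require -OutboundSecurity Require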
There are three general design patterns for assigning these traffic profiles to network adapters. The first is dedicated network adapters for each traffic profile. With new features included in Windows Server 2012, this is no longer considered the preferred design; however, it simplifies upgrading an existing Windows Server 2008 R2 infrastructure to Windows Server 2012.
The second pattern is dedicated network adapters for cloud infrastructure traffic and tenant traffic. In this design pattern, separate adapters and networks are used for the cloud infrastructure traffic and the tenant traffic. This provides the required isolation between infrastructure and tenant traffic, and lessens the impact of tenant traffic on overall bandwidth availability.
Windows Server QoS policies can be used to provide minimum and maximum bandwidth guarantees for each cloud infrastructure traffic profile. The third pattern is no dedicated adapters for any traffic profile. In this design pattern, all traffic moves through the same network adapters over the same physical network. Infrastructure traffic and tenant traffic share the same physical adapter or adapter team, and Hyper-V QoS policies are applied so that each traffic profile has a guaranteed amount of bandwidth.
This pattern requires that infrastructure traffic flow through virtual network adapters that are created on the same Hyper-V virtual switch through which tenant traffic also flows. The advantage of this converged networking pattern is that it is simpler to manage, is more cost-effective, and enables you to take advantage of security and performance capabilities included with the Hyper-V virtual switch.
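A sketch of this converged pattern with the Hyper-V cmdlets (the switch name, team name, and weight values are assumptions):

    # Create a virtual switch in weight-based QoS mode on the team
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Host virtual NICs for infrastructure traffic on the same switch
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

    # Guarantee each profile a share of bandwidth
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30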
Note that the discussion to this point has focused on traffic patterns and network design for compute node cluster traffic. If you have chosen to separate your compute and file server clusters so that you can scale compute and storage separately, you will need to consider which traffic profiles need to be defined on the file server cluster. There will be no live migration traffic profile on the storage cluster, but you may want to define a file server network traffic profile that delineates the path connecting the compute nodes to the file server nodes over the SMB 3.0 protocol.
Standardization is a key tenet of private cloud architectures. This also applies to virtual machines. A standardized collection of virtual machine templates can both drive predictable performance and greatly improve capacity planning capabilities.
These templates also provide the foundation for your private cloud's service catalog. The service catalog can also be used to help you determine how you size the hosts in your private cloud as well as the scale units you might wish to define. As an example, the table below illustrates what a basic virtual machine template library might look like.
This section discusses the different types of Hyper-V disks. Note that while in the past Microsoft recommended using only fixed VHDs for production, significant improvements to the virtual disk format (VHDX) have made dynamically expanding disks a viable format for production use.
Therefore, from a performance standpoint, you can use either fixed or dynamically expanding disks in your private cloud infrastructure. Dynamically expanding virtual hard disks provide storage capacity as needed to store data. The size of the VHDX file is small when the disk is created and grows as data is added to the disk.
In Windows Server 2012, the size of the VHDX file shrinks automatically when data is deleted from the virtual hard disk. Dynamically expanding disks can be provisioned very quickly and can be used as part of a thin provisioning scheme. Fixed virtual hard disks provide storage capacity by using a VHDX file of the size specified for the virtual hard disk when the disk is created. The size of the VHDX file remains fixed regardless of the amount of data stored, similar to a physical hard disk.
By allocating the full capacity at the time of creation, fragmentation at the host level is not an issue (fragmentation inside the VHDX itself must be managed within the guest). A disadvantage of fixed-size disks is that they can take a long time to provision, with the time depending on the size of the disk. Fixed disks can provide an incremental performance improvement. You should weigh the advantages and disadvantages of dynamically expanding versus fixed-size disks by considering whether the disk space savings of dynamically expanding disks outweigh the incremental performance improvement of fixed-size disks.
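For reference (the paths and sizes are assumptions), both disk types are created with the same cmdlet:

    # Dynamically expanding VHDX: small file now, grows as data is written
    New-VHD -Path "D:\VHDs\data1.vhdx" -SizeBytes 100GB -Dynamic

    # Fixed VHDX: the full 100 GB is allocated at creation time
    New-VHD -Path "D:\VHDs\data2.vhdx" -SizeBytes 100GB -Fixed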
Differencing virtual hard disks provide storage that enables you to make changes to a parent virtual hard disk without altering that disk. The size of the VHDX file for a differencing disk grows as changes are stored to the disk. Given that minimal state is required on Hyper-V compute cluster nodes, each host can have its own differencing disk. When updates to the operating system are required, the parent disk is serviced, and the compute nodes are rebooted after virtual machines are migrated away from the compute node host.
A new differencing disk is created at that point. This enables centralized management of a single golden image that can be used for each compute cluster host, and it significantly reduces the amount of storage required to host the cluster node virtual disk files.
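A sketch of creating such a child disk (both paths are assumptions; the parent must remain unmodified):

    # Each compute node gets its own child disk; writes go to the
    # child while the golden image stays untouched
    New-VHD -Path "D:\VHDs\node01.vhdx" `
        -ParentPath "\\SOFS\VMStore\golden.vhdx" -Differencing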
Hyper-V also enables virtual machine guests to directly access local disks or SAN LUNs that are attached to the physical server, without requiring the volume to be presented to the host server. The virtual machine guest accesses the disk directly, using the disk's GUID, without going through the host's file system. Given that the performance difference between fixed disks and pass-through disks is negligible in Windows Server 2012, the decision is now based on manageability.
For instance, if the data on the volume will be very large (hundreds of gigabytes or terabytes), a VHDX is hardly portable at that size, given the extreme amount of time it takes to copy.
Also, bear in mind the backup scheme. With pass-through disks, the data can only be backed up from within the guest, and virtual machine portability will be limited. Because there is no VHDX file, there is no dynamic sizing capability or snapshot capability.
Creating a guest cluster gives you the ability to fail over or migrate your applications independently of the guest operating system. It decouples the application from the guest operating system in a way similar to the abstraction that the guest operating system gets from virtualization.
With virtualization, you decouple the guest operating system from the hardware; with a guest cluster, you decouple the application from the virtual machine. This gives you much greater flexibility and increased uptime in the event that the virtual machine has a failure that a migration or restart of the VM will not solve.
Windows Server provides support for guest clustering, and for access to the shared storage that guest clustering requires, through the in-guest iSCSI initiator and virtual Fibre Channel. The in-guest iSCSI initiator is mainly used for access to large volumes, volumes on SANs that the Hyper-V host itself is not connected to, or for guest clustering.
However, guests can also boot from virtual disks stored on SMB 3.0 file shares. Virtual Fibre Channel, a new feature in Windows Server 2012, allows you to connect directly to Fibre Channel storage from within the guest operating system that runs in a virtual machine. This feature makes it possible to virtualize workloads and applications that require direct access to Fibre Channel-based storage. It also makes it possible to configure clustering directly within the guest operating system.
This feature makes HBA ports available within the guest operating system. It is mainly used for access to large volumes, volumes on SANs to which the Hyper-V host itself is not connected, or for guest clustering.
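A minimal sketch (the VM name is an assumption, and the virtual SAN "ProductionSAN" is assumed to have been created beforehand with New-VMSan):

    # Add a virtual HBA connected to an existing Hyper-V virtual SAN
    Add-VMFibreChannelHba -VMName "SQL01" -SanName "ProductionSAN"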
Hyper-V guests support two types of virtual network adapters: synthetic and emulated. Synthetic devices require that the Hyper-V integration services be installed within the guest. Emulated adapters are available to all guests, even if the integration services are not installed; they perform much more slowly and should be used only when a synthetic adapter is unavailable. When you configure a virtual machine to perform a PXE boot for installation, it will initially configure itself to use an emulated adapter.
If you wish to take advantage of the synthetic adapter, you will need to change the adapter type after installation completes. You can create many virtual networks on the server running Hyper-V to provide a variety of communication channels. For example, you can create networks to provide the following: Communications between virtual machines only. This type of virtual network is called a private network.
Communications between the host server and virtual machines. This type of virtual network is called an internal network. Communications between a virtual machine and a physical network, by creating an association to a physical network adapter on the host server. This type of virtual network is called an external network.
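A sketch of creating each network type with the Hyper-V cmdlets (the switch and adapter names are assumptions):

    # Private: VM-to-VM traffic only
    New-VMSwitch -Name "PrivateNet" -SwitchType Private

    # Internal: VMs plus the host
    New-VMSwitch -Name "InternalNet" -SwitchType Internal

    # External: bound to a physical adapter on the host
    New-VMSwitch -Name "ExternalNet" -NetAdapterName "Ethernet 2"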
A logical processor is defined as a processing core seen by the host operating system or parent partition.
In the case of Intel Hyper-Threading Technology, each thread is considered a logical processor. Therefore, at the previously supported ratio of 8 virtual processors per logical processor, a 16 logical-processor server supports a maximum of 128 virtual processors. That would in turn equate to 128 single-processor virtual machines, 64 dual-processor virtual machines, or 32 quad-processor virtual machines.
The 8:1 and 12:1 virtual processor to logical processor ratios were maximum supported limits, and it was recommended that lower ratios be used. In Windows Server 2012, there are no hard-coded or soft-coded virtual processor to logical processor ratios. The recommendation is to use dual-socket servers with the highest core density available. If more processing power is required, you can add more nodes to your scale unit. Historical trend analysis will be useful in this context.
Trends and thresholds are more important than specific ratios. Throughout this document you have been presented with hundreds of potential options for how to design a private cloud infrastructure from the storage, networking and compute perspectives. The possible permutations are virtually limitless and you might find it a bit challenging to determine which options would work best for you, and which options work best together. You may even be interested in whether there are tested private cloud infrastructure design patterns that you can use to get your infrastructure started.
To help you in your testing and evaluation of a Microsoft private cloud infrastructure, we have developed and tested three private cloud infrastructure design patterns that you may want to adopt in your own private cloud environment. These design patterns provide three options to choose from, in which many of the design decisions have been made for you, and help you to better understand how you can take advantage of the many improvements in Windows Server 2012 to get the most out of your private cloud infrastructure investments.
It is important to note that within the context of these three configurations, the term "converged" refers to the networking configuration. A converged networking design consolidates multiple traffic profiles onto a single network adapter or team and network and then uses a variety of software constructs to provide the required isolation and Quality of Service for each profile.
Finally, be aware that while these three cloud infrastructure design patterns have made a number of decisions regarding storage, networking, and compute functions for you, they do not span the entire gamut of options made available to you through Windows Server 2012. You may want to begin with these patterns and build on top of them by taking advantage of other Windows Server platform capabilities.
The Non-Converged Data Center Configuration cloud infrastructure design pattern is aimed at allowing easy upgrade of an existing cloud infrastructure that is based on the networking design decisions and hardware configuration recommendations for a Windows Server 2008 R2 infrastructure. The Non-Converged Data Center Configuration focuses on the following key requirements in the areas of networking, compute, and storage:
You have an existing investment in separate networks based on the recommended configuration of Hyper-V in Windows Server 2008 R2, and you require that physical network traffic segmentation be kept in place to avoid re-architecting your network. You require that each traffic type is dedicated to a specific adapter.
You require that virtual machine workloads have access to the highest network performance possible. You require that each member of the Hyper-V failover cluster be able to access the shared storage.
You require the ability to repurpose previous Hyper-V hardware that ran Windows Server 2008 R2. This requirement is met by reusing the previous hardware and making a single change to the hardware configuration: adding a 10 GbE network adapter that supports SR-IOV.
With the right encryption and password management in place, the wireless portion of the network can be as secure as the wired portion.
Wired networks are faster, more secure, and more reliable than wireless networks. They also reduce the chance of outside interference. At the same time, they require a bit more work to set up, and the hardware is more expensive. If your small business has lots of floor space, such as a manufacturing facility, you may experience signal degradation if there are very long cables between devices. You can often improve the signal by using an Ethernet repeater to strengthen it.
To begin, follow the procedure for the version of Windows running on each device that you want to connect to your network; not all of your devices need to run the same version of Windows to be part of your business network. Run an Ethernet cable from the router or hub to each device that you want to connect to the network. Most routers come with instructions and a setup CD that will help you set them up.
If your home or office is wired for Ethernet, set up the devices in rooms that have Ethernet jacks, and then plug them directly into the Ethernet jacks. A firewall is hardware or software that helps control the spread of malicious software on your network and helps to protect your devices when you use the Internet.
Don't turn off Windows Firewall unless you have another firewall turned on. Turning off Windows Firewall might make your device and network vulnerable to damage from hackers. To set up a firewall, follow the instructions that came with your firewall software. Windows Firewall automatically opens the correct ports for file and printer sharing when you share content or turn on network discovery. If you're using another firewall, you must open these ports yourself so that your device can find other devices that have files or printers that you want to share.
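For example, on Windows Vista and later, the built-in rule groups can be enabled from an elevated command prompt, which opens the required ports in Windows Firewall:

    netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes
    netsh advfirewall firewall set rule group="Network Discovery" new enable=Yes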
To find other devices running earlier versions of Windows, and to use file and printer sharing on any version of Windows, open these ports: UDP 137, UDP 138, TCP 139, and TCP 445. If the devices running Windows 7 are connected to either a hub or a switch using a cable, then they're already on the network and ready to use. If you had to change the workgroup name, you're prompted to restart your device. Restart the device, and then continue with the following steps.
Windows can automatically detect and install the correct network adapter software for you. To check whether your device has a network adapter, follow the instructions. Tap or click Turn on Windows Firewall under each type of network that you want to help protect, and then tap or click OK.
If the devices running Windows Vista are connected to either a hub or a switch using a cable, then they're already on the network, and ready to use.
To find other devices running Windows XP or earlier versions of Windows, and to use file and printer sharing on any version of Windows, open these ports: UDP 137, UDP 138, TCP 139, and TCP 445. If you have devices running Windows XP, you may need to do a little more work to add those devices. Now that you've decided to invest in a wireless network for your business, you have to select a network standard and set up your network. Wireless networks (WLANs) don't require much in the way of network infrastructure. Many small business owners select wireless networking because it's flexible, inexpensive, and easy to install and maintain.
You can use a wireless network to share Internet access, files, printers, file servers, and other devices in your office. Once you have the network set up, you can enable sharing, set permissions, and add printers and other devices.
The most common wireless network standards are 802.11a, 802.11b, 802.11g, and 802.11n. Prices vary for each standard, as do data transfer rates. Typically, the faster the data transfer rate, the more you pay.
In general, data transfer rates differ for each standard, and the quoted rates are for ideal conditions. They aren't necessarily achievable under typical circumstances because of differences in hardware, web servers, network traffic, and other factors. A wireless router sends information between your network and the Internet by using radio signals instead of wires.
You should use a router that supports the faster wireless standards. For the best results, put your wireless router, wireless modem router (a DSL or cable modem with a built-in wireless router), or wireless access point (WAP) in a central location in your office.
If your router is on the first floor and your devices are on the second floor, put the router high on a shelf on the first floor. If your ISP didn't set up your modem, follow the instructions that came with your modem to connect it to your device and the Internet. If you're using cable, connect your modem to a cable jack.
Protect your router by changing the default user name and password. Most router manufacturers have a default user name and password on the router in addition to a default network name. Someone could use this information to access your router without your knowledge. Check the information that was included with your device for instructions. To connect to a wireless network, your device must have a wireless network adapter. Make sure that you get the same type of adapters as your wireless router.
The type of adapter is marked on the package with a letter, such as G or A.
Train from your home or office: if you have high-speed Internet and a computer, you can likely take this class from your home or office. Overview: at the end of this five-day course, students will learn how to design an Active Directory infrastructure in Windows Server. Students will learn how to design Active Directory forests, domain infrastructure, sites and replication, administrative structures, group policies, and Public Key Infrastructures.
Students will also learn how to design for security, high availability, disaster recovery, and migrations. Audience: the primary audience for this course includes Windows Server administrators who want to become enterprise administrators and move into the role of designing Active Directory Domain Services (AD DS) environments.
The audience also includes other Information Technology (IT) professionals who want to move into the Windows Server enterprise administrator role.