Windows Server "8" Beta Hyper-V Component Architecture
The PDF file is too hard to read (it is huge), so I have reorganized it as snapshots.

Improved and New Hyper-V Features
Beyond the list below, Windows Server "8" improves or adds a great many features.

- Up to 64 nodes and 4,000 virtual machines per cluster
- Multiple simultaneous live migrations
- Live Migration without a clustering configuration
- File-share-based cluster configuration without SAN storage
- Storage Live Migration for virtual machine storage
- 256 logical processors and 2 TB of memory per host server
- 32 virtual processors (vCPUs) and 1 TB of memory per virtual machine
- Virtual machine replication between hosts over TCP (Replication)
- Network bandwidth control for virtual machines
- NUMA support inside virtual machines
- SR-IOV support for network adapters
- Virtual Fibre Channel (virtual HBA) support
- New virtual hard disk format (.VHDX) with improved reliability and performance
- New CAU (Cluster-Aware Updating) feature: patch management for cluster nodes


(Original) Windows Server "8" Beta Hyper-V Component Architecture Poster
http://www.microsoft.com/download/en/details.aspx?id=29189



Hyper-V Replica
Virtual Machine Replication
Hyper-V Replica is an asynchronous virtual machine replication technology that is included in Windows Server "8" Beta. It is designed for business continuity and disaster recovery. It works with any server, network, or storage vendor. It does not require any shared storage. It enables you to replicate individual or multiple virtual machines. Hyper-V Replica is tightly integrated with Hyper-V and Failover Clustering. You can replicate virtual machines from one computer running Hyper-V at a primary site (the primary server) to another computer running Hyper-V at a Replica site (the Replica server). The Replica server accepts incoming replication traffic from one or more primary servers.
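The asynchronous pattern described above can be sketched in a few lines: the primary tracks changed blocks and periodically ships only those changes to the replica, so the replica lags slightly behind but never requires shared storage. This is a conceptual illustration only, not Hyper-V's actual replication log implementation; all class and method names are hypothetical.

```python
# Conceptual sketch of asynchronous replication: the primary logs writes
# and periodically ships the accumulated changes to the replica server.
# Illustration only -- not Hyper-V's actual replication protocol.

class PrimaryServer:
    def __init__(self):
        self.disk = {}        # block number -> data
        self.change_log = {}  # blocks modified since the last replication cycle

    def write(self, block, data):
        self.disk[block] = data
        self.change_log[block] = data  # tracked asynchronously, VM keeps running

    def replicate_to(self, replica):
        # Ship only the changed blocks, then start a fresh log.
        replica.apply_changes(self.change_log)
        self.change_log = {}

class ReplicaServer:
    def __init__(self):
        self.disk = {}

    def apply_changes(self, changes):
        self.disk.update(changes)

primary, replica = PrimaryServer(), ReplicaServer()
primary.write(0, "boot")
primary.write(7, "data-v1")
primary.replicate_to(replica)   # replica now matches the primary
primary.write(7, "data-v2")     # replica lags until the next cycle
print(replica.disk[7])          # -> data-v1
```

Because replication is asynchronous, the replica is a slightly stale but consistent copy, which is what makes it suitable for disaster recovery over ordinary networks.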

 

Hyper-V Networking
Load Balancing and Failover
Network adapter teaming, also known as NIC teaming or load balancing and failover (LBFO), enables multiple network adapters to be placed into a team interface. This provides bandwidth aggregation and traffic failover, which prevents loss of connectivity in the event of a network adapter failure. Network adapter teaming supports multivendor implementations.
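The failover behavior can be illustrated with a tiny sketch: a team spreads traffic across its healthy members and simply skips a failed adapter, so connectivity survives a single-adapter failure. This is a hypothetical model, not Windows' NIC-teaming code; round-robin stands in for whichever load-balancing policy the team uses.

```python
# Conceptual sketch of load balancing and failover (LBFO): traffic is
# spread across team members, and failed adapters are skipped.
# Illustration only -- not the Windows NIC teaming implementation.

class NicTeam:
    def __init__(self, adapters):
        self.adapters = adapters  # adapter name -> healthy (True/False)
        self._next = 0

    def fail(self, name):
        self.adapters[name] = False

    def pick_adapter(self):
        # Round-robin over healthy members (a simple load-balancing policy).
        healthy = [n for n, up in self.adapters.items() if up]
        if not healthy:
            raise RuntimeError("all team members are down")
        choice = healthy[self._next % len(healthy)]
        self._next += 1
        return choice

team = NicTeam({"nic1": True, "nic2": True})
print(team.pick_adapter())  # alternates between nic1 and nic2 while both are up
team.fail("nic1")
print(team.pick_adapter())  # only nic2 carries traffic after the failure
```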


Quality of Service Bandwidth Management
Windows Server "8" Beta includes new Quality of Service (QoS) bandwidth management functionality that allows you to converge multiple types of network traffic through a single network adapter with a predictable level of service for each type. You can configure bandwidth management features through the virtual machine settings or by using Windows PowerShell commands.
To architect bandwidth management, you can specify a maximum and minimum bandwidth limit. These limits allow you to manage bandwidth allocations depending on your type of network traffic.
It is important to note that the new minimum bandwidth feature allows each network service (such as management, storage, live migration, and virtual machine traffic) to get an allocated share of bandwidth when the network bandwidth is heavily utilized and contended. When bandwidth is freely available, each of these network services gets as much bandwidth as required.
There are two mechanisms to enforce minimum bandwidth. You can use QoS software in your server running Hyper-V, or Windows-certified network adapters that support Data Center Bridging (DCB).
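The minimum-bandwidth behavior described above can be sketched as simple arithmetic: under contention each traffic class gets at least its weighted share of the link, and when other classes are idle, a class may take the leftover capacity. This is a conceptual model with made-up weights, not the enforcement logic of the Hyper-V QoS software or DCB hardware.

```python
# Conceptual sketch of minimum-bandwidth weights: when the link is
# contended, each traffic class gets at least its weighted share; when
# others are idle, a class may use leftover capacity up to its cap.
# Illustration only -- weights and numbers are hypothetical.

def allocate(link_bps, demands, weights, caps):
    total_w = sum(weights.values())
    # Guaranteed share under contention, limited by demand and any maximum cap.
    shares = {c: min(demands[c], caps.get(c, link_bps),
                     link_bps * weights[c] / total_w) for c in demands}
    # Hand unused capacity to classes that still want more.
    leftover = link_bps - sum(shares.values())
    for c in demands:
        extra = min(leftover, demands[c] - shares[c],
                    caps.get(c, link_bps) - shares[c])
        if extra > 0:
            shares[c] += extra
            leftover -= extra
    return shares

# A 10 Gbps link converging management, storage, live migration, and VM traffic.
weights = {"mgmt": 1, "storage": 4, "migration": 2, "vm": 3}
busy = allocate(10e9, {c: 10e9 for c in weights}, weights, {})
print(busy["storage"] / 1e9)   # contended: storage gets its weighted 40% -> 4.0
quiet = allocate(10e9, {"mgmt": 1e9, "storage": 8e9, "migration": 0, "vm": 0},
                 weights, {})
print(quiet["storage"] / 1e9)  # uncontended: storage can take 8 Gbps -> 8.0
```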


Single Root I/O Virtualization
SR-IOV is a standard that allows PCI Express devices to be shared among multiple virtual machines by providing them a direct hardware path for I/O. Hyper-V provides support for SR-IOV-capable network adapters. SR-IOV reduces network latency, reduces CPU utilization for processing network traffic, and increases network throughput.
SR-IOV-capable networking devices have hardware surfaces called virtual functions that can be securely assigned to virtual machines, bypassing the virtual switch in the management operating system for sending and receiving data. Policy and control remain under the management operating system.
SR-IOV is fully compatible with live migration because software-based networking is available at all times.
During live migration, virtual functions are temporarily removed. This enables live migration using network adapters from different vendors, or in a situation where SR-IOV is not available on the destination computer.
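The fallback described above can be sketched as a two-path model: the virtual function provides a direct hardware path, while a software (synthetic) path through the virtual switch is always present, so removing the VF before migration simply fails traffic over without interruption. The class and method names here are hypothetical illustrations, not Hyper-V interfaces.

```python
# Conceptual sketch: an SR-IOV virtual function (VF) gives a VM a direct
# hardware path, but a software path through the virtual switch always
# exists, so detaching the VF for live migration never stops traffic.
# Illustration only.

class VmNetworking:
    def __init__(self):
        self.vf_assigned = False

    def attach_vf(self):
        self.vf_assigned = True

    def detach_vf(self):
        # Done before live migration starts, or when the destination
        # computer has no SR-IOV support.
        self.vf_assigned = False

    def send(self, packet):
        # The synthetic path is always available, so traffic never stops.
        path = "hardware-vf" if self.vf_assigned else "software-switch"
        return (path, packet)

vm = VmNetworking()
vm.attach_vf()
print(vm.send("ping"))   # ('hardware-vf', 'ping')
vm.detach_vf()           # migration begins; fall back transparently
print(vm.send("ping"))   # ('software-switch', 'ping')
```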


Hyper-V Virtual Switch
In Windows Server "8" Beta, the Hyper-V virtual switch is extensible. This allows new capabilities to be added to the virtual switch so that you can view and manage the traffic on your server running Hyper-V. This includes traffic generated between virtual machines running on the same computer.
Using the extensible capabilities in a virtual switch, Microsoft partners can add their own monitoring, filtering, and forwarding functionality. Any extensions that are created are implemented using Network Driver Interface Specification (NDIS) filter drivers or Windows Filtering Platform (WFP) callout drivers.
You can use NDIS filter drivers to monitor or modify network packets in Windows. You can use WFP to create firewall or intrusion detection functionality.
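The extension model can be sketched as a chain of packet handlers: each extension may observe a packet, drop it, or pass it on toward forwarding. The functions below are hypothetical stand-ins for monitoring and firewall extensions, not real NDIS filter or WFP callout driver interfaces.

```python
# Conceptual sketch of an extensible virtual switch: extensions are
# chained, and each may monitor, drop, or pass a packet on toward its
# destination. All names here are hypothetical, not NDIS/WFP APIs.

observed = []

def monitor_extension(packet):
    observed.append(packet["src"])   # capture traffic for inspection
    return packet                    # pass the packet on unchanged

def firewall_extension(packet):
    blocked_ports = {23}             # e.g. drop telnet traffic
    return None if packet["port"] in blocked_ports else packet

def run_pipeline(packet, extensions):
    for ext in extensions:
        packet = ext(packet)
        if packet is None:           # an extension dropped the packet
            return None
    return packet                    # forward to the destination port

pipeline = [monitor_extension, firewall_extension]
print(run_pipeline({"src": "vm1", "port": 80}, pipeline))   # forwarded
print(run_pipeline({"src": "vm2", "port": 23}, pipeline))   # None (dropped)
```

Note that the monitor sees traffic even between virtual machines on the same host, which is exactly the visibility the extensible switch adds.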


Hyper-V Virtual Machine Mobility
Live Migration Without Shared Storage
Live migration without shared storage (also known as "Shared Nothing Live Migration") enables you to migrate virtual machines and their associated storage between servers running Hyper-V within the same domain. This kind of live migration uses only an Ethernet connection.
 

Storage Migration
Hyper-V storage migration enables you to move virtual machine storage (virtual hard disks) without downtime. This enables new servicing scenarios. For example, you can add more physical disk storage to a non-clustered computer or a Hyper-V cluster and then move the virtual machines to the new storage while the virtual machines continue to run.
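The no-downtime property comes from mirroring: the virtual hard disk is copied in the background while new guest writes go to both the source and the destination, so the two copies converge and the VM can switch over. The sketch below is a conceptual model of that idea, not Hyper-V's storage migration code.

```python
# Conceptual sketch of storage migration without downtime: copy the
# virtual hard disk block by block while mirroring new guest writes to
# both copies, then switch over once they converge. Illustration only.

class LiveWrites:
    """Hypothetical stand-in for guest writes arriving mid-copy."""
    def __init__(self, schedule):
        self.schedule = schedule  # copy step -> list of (block, data)
        self.step = 0

    def pop(self):
        writes = self.schedule.get(self.step, [])
        self.step += 1
        return writes

def storage_migrate(source, dest, writes):
    for block in sorted(source):
        dest[block] = source[block]     # background copy pass
        for wb, data in writes.pop():   # guest keeps running and writing
            source[wb] = data
            dest[wb] = data             # mirror the write to both copies
    return dest

src = {0: "a", 1: "b", 2: "c"}
incoming = LiveWrites({1: [(0, "a2")]})  # guest rewrites block 0 mid-copy
dst = storage_migrate(src, {}, incoming)
print(dst == src)   # -> True: copies converged; the VM can switch over
```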


Live Migration with SMB Shared Storage
Live migration with Server Message Block (SMB) shared storage enables you to move virtual machines between servers running Hyper-V within the same domain while the virtual machine storage remains on the SMB-based file server. Concurrent live migrations are supported. This kind of live migration does not require configuration of a failover cluster.
 

Live Migration with Failover Clusters
Hyper-V live migration with failover clusters (first introduced in Windows Server 2008 R2) enables you to move running virtual machines from one cluster node running Hyper-V to another node, without any disruption or perceived loss of service. Live migration is initiated by the administrator and is a planned operation.
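Live migration generally works by iterative pre-copy: memory pages are transferred while the VM runs, pages dirtied during a pass are re-sent in the next one, and once the remaining set is small the VM pauses briefly for the final transfer. The sketch below models that well-known technique in the abstract; it is not Hyper-V's actual transfer protocol, and the pass counts are made up.

```python
# Conceptual sketch of pre-copy live migration: copy memory while the VM
# runs, re-send pages dirtied during each pass, and finish with a brief
# pause once few pages remain. Illustration only.

def live_migrate(memory, dirty_per_pass, max_passes=10, threshold=2):
    dest = {}
    pending = set(memory)
    for _ in range(max_passes):
        for page in pending:
            dest[page] = memory[page]   # copy while the VM still runs
        pending = dirty_per_pass.pop(0) if dirty_per_pass else set()
        if len(pending) <= threshold:
            break
    # Brief blackout: pause, send the last few dirty pages, resume on target.
    for page in pending:
        dest[page] = memory[page]
    return dest

ram = {p: f"page-{p}" for p in range(8)}
# The guest dirties fewer pages each pass as the transfer converges.
dirties = [{0, 1, 2, 3}, {1, 2}]
print(live_migrate(ram, dirties) == ram)   # -> True
```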


Hyper-V Storage
Virtual Fibre Channel for Virtual Machines
Hyper-V virtual Fibre Channel for virtual machines enables virtual machines to access Fibre Channel-based storage. This feature allows you to virtualize workloads that require Fibre Channel storage, and also allows you to cluster guest operating systems in virtual machines using Fibre Channel.


New Virtual Hard Disk Format
VHD is a virtual hard disk file format that enables you to represent a physical hard disk drive in a file, and it is used as the hard disk of a virtual machine. Hyper-V in Windows Server "8" Beta contains an update to the virtual hard disk format called VHDX.
 

Hyper-V Using Server Message Block (SMB)
Hyper-V can store virtual machine files (configuration files, virtual hard disk files, and snapshots) on file servers using Server Message Block (SMB) 2.2. This is supported for both non-clustered and clustered servers running Hyper-V where file storage is used as shared storage for the failover cluster.


Hyper-V and Failover Clustering
Clustered Virtual Machines for High Availability
Deploying virtual machines on a failover cluster of servers running Hyper-V makes them highly available: if a cluster node fails, its virtual machines are automatically restarted on another node in the cluster.


Hyper-V Scalability
Physical Hardware and Virtual Machine Scalability
Hyper-V in Windows Server "8" Beta provides enhanced enterprise hosting capabilities, with expanded support for both physical and virtual processors, and physical and virtual memory.
It also makes it easier to virtualize high-performance workloads by supporting the configuration of large, high-performance virtual machines.


NUMA and Virtual Machines
NUMA (Non-Uniform Memory Access) is a multiprocessor architecture that groups memory and processors into compute nodes. The time required for a processor to access memory within a node is faster than the time required to access memory across nodes. Hyper-V supports projecting a virtual NUMA topology within a virtual machine, which enables virtual machines with multiprocessors to scale better.
The guest operating system and applications can take advantage of any NUMA performance optimizations. By default, the virtual NUMA topology within a virtual machine is optimized to match the NUMA topology in the server running Hyper-V.
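The performance benefit of matching topologies can be shown with simple arithmetic: accessing memory within a node is cheaper than across nodes, so placing pages near the virtual processors that use them lowers total access cost. The latencies below are made-up numbers for illustration, not measurements of any hardware.

```python
# Conceptual sketch of NUMA: memory access inside a node is cheaper than
# across nodes, so a virtual NUMA topology that matches the host lets
# the guest place work near its memory. Latencies are hypothetical.

LOCAL_NS, REMOTE_NS = 100, 300   # made-up per-access latencies (ns)

def access_cost(cpu_node, page_node):
    return LOCAL_NS if cpu_node == page_node else REMOTE_NS

def total_cost(accesses, page_placement):
    # accesses: list of (cpu_node, page); page_placement: page -> node
    return sum(access_cost(cpu, page_placement[page]) for cpu, page in accesses)

accesses = [(0, "a"), (0, "a"), (1, "b"), (1, "b")]
naive = {"a": 1, "b": 0}       # every access crosses node boundaries
numa_aware = {"a": 0, "b": 1}  # pages placed next to the CPUs that use them
print(total_cost(accesses, naive))       # -> 1200
print(total_cost(accesses, numa_aware))  # -> 400
```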

 


Posted by 커널64